From mboxrd@z Thu Jan 1 00:00:00 1970
From: luca.boccassi@gmail.com
To: Somnath Kotur
Cc: Santoshkumar Karanappa Rastapur, dpdk stable
Date: Tue, 11 Feb 2020 11:20:00 +0000
Message-Id: <20200211112216.3929-54-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200211112216.3929-1-luca.boccassi@gmail.com>
References: <20200211112216.3929-1-luca.boccassi@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] patch 'net/bnxt: fix flow flush to sync with flow
	destroy' has been queued to stable release 19.11.1
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to stable release 19.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/13/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch.
If there were code changes for rebasing (i.e., not only metadata diffs),
please double-check that the rebase was correctly done.

Thanks.

Luca Boccassi

---
>From d6fd94c20c1af71a5b0eb00ce271d85152976374 Mon Sep 17 00:00:00 2001
From: Somnath Kotur
Date: Fri, 20 Dec 2019 18:29:39 -0800
Subject: [PATCH] net/bnxt: fix flow flush to sync with flow destroy

[ upstream commit e339ef6e357c97c512a37fbba13859878a496636 ]

Sync flow flush routine with flow destroy so that the operations
performed per flow during a flush are the same as that are done for an
individual flow destroy by having a common function to call for both.

One of the things that was missed in the flow flush routine was the
deletion of the L2 filter that would have been created as part of an
n-tuple filter.

Also, decrement the l2_ref_cnt for a filter in the case of a filter
update as it would've bumped up previously in validate_and_parse_flow()

Fixes: 89278c59d97c ("net/bnxt: fix flow flush handling")

Signed-off-by: Somnath Kotur
Reviewed-by: Santoshkumar Karanappa Rastapur
---
 drivers/net/bnxt/bnxt_flow.c | 132 ++++++++++++-----------------------
 1 file changed, 46 insertions(+), 86 deletions(-)
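[ Editor's note: the heart of this fix is a refactor -- single-flow destroy
  and flush-all now funnel through one common helper, _bnxt_flow_destroy(),
  so the two teardown paths can no longer drift apart. The sketch below
  shows that pattern in isolation, placed here in the region git-am
  ignores. It is illustrative C only, not bnxt driver code: every name in
  it (struct flow, destroy_one, flow_destroy, flow_flush) is invented for
  the sketch, and the list handling is deliberately simplified. ]

#include <stdio.h>
#include <stdlib.h>

struct flow {
	int id;
	struct flow *next;
};

static struct flow *flow_list;	/* stand-in for vnic->flow_list */

/* Common per-flow teardown: every cleanup step lives here exactly once,
 * so the destroy and flush paths cannot diverge again. In the real patch
 * this role is played by _bnxt_flow_destroy(), called with the flow lock
 * held.
 */
static int destroy_one(struct flow *f)
{
	if (f == NULL)
		return -1;	/* driver: -EINVAL via rte_flow_error_set() */
	printf("destroying flow %d\n", f->id);	/* clear HW filters, etc. */
	free(f);
	return 0;
}

/* Single-flow destroy: unlink the flow, then call the common helper. */
static int flow_destroy(struct flow *f)
{
	struct flow **pp;

	for (pp = &flow_list; *pp != NULL; pp = &(*pp)->next) {
		if (*pp == f) {
			*pp = f->next;
			return destroy_one(f);
		}
	}
	return -1;	/* not found */
}

/* Flush: drive the same helper in a loop instead of duplicating its body. */
static int flow_flush(void)
{
	int ret = 0;

	while (flow_list != NULL) {
		struct flow *f = flow_list;

		flow_list = f->next;
		ret = destroy_one(f);
		if (ret != 0)
			break;	/* the patch likewise stops on first failure */
	}
	return ret;
}

int main(void)
{
	for (int i = 0; i < 3; i++) {
		struct flow *f = malloc(sizeof(*f));

		f->id = i;
		f->next = flow_list;
		flow_list = f;
	}
	flow_destroy(flow_list);	/* destroy one flow... */
	return flow_flush();		/* ...then flush the rest */
}

[ End editor's note; the actual diff follows. ]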
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 76e9584da7..dd40b2d72e 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -1537,10 +1537,13 @@ bnxt_update_filter(struct bnxt *bp, struct bnxt_filter_info *old_filter,
 	 * filter which points to the new destination queue and so we clear
 	 * the previous L2 filter. For ntuple filters, we are going to reuse
 	 * the old L2 filter and create new NTUPLE filter with this new
-	 * destination queue subsequently during bnxt_flow_create.
+	 * destination queue subsequently during bnxt_flow_create. So we
+	 * decrement the ref cnt of the L2 filter that would've been bumped
+	 * up previously in bnxt_validate_and_parse_flow as the old n-tuple
+	 * filter that was referencing it will be deleted now.
 	 */
+	bnxt_hwrm_clear_l2_filter(bp, old_filter);
 	if (new_filter->filter_type == HWRM_CFA_L2_FILTER) {
-		bnxt_hwrm_clear_l2_filter(bp, old_filter);
 		bnxt_hwrm_set_l2_filter(bp, new_filter->dst_id, new_filter);
 	} else {
 		if (new_filter->filter_type == HWRM_CFA_EM_FILTER)
@@ -1817,46 +1820,24 @@ static int bnxt_handle_tunnel_redirect_destroy(struct bnxt *bp,
 }
 
 static int
-bnxt_flow_destroy(struct rte_eth_dev *dev,
-		  struct rte_flow *flow,
-		  struct rte_flow_error *error)
+_bnxt_flow_destroy(struct bnxt *bp,
+		   struct rte_flow *flow,
+		   struct rte_flow_error *error)
 {
-	struct bnxt *bp = dev->data->dev_private;
 	struct bnxt_filter_info *filter;
 	struct bnxt_vnic_info *vnic;
 	int ret = 0;
 
-	bnxt_acquire_flow_lock(bp);
-	if (!flow) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Invalid flow: failed to destroy flow.");
-		bnxt_release_flow_lock(bp);
-		return -EINVAL;
-	}
-
 	filter = flow->filter;
 	vnic = flow->vnic;
 
-	if (!filter) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Invalid flow: failed to destroy flow.");
-		bnxt_release_flow_lock(bp);
-		return -EINVAL;
-	}
-
 	if (filter->filter_type == HWRM_CFA_TUNNEL_REDIRECT_FILTER &&
 	    filter->enables == filter->tunnel_type) {
-		ret = bnxt_handle_tunnel_redirect_destroy(bp,
-							  filter,
-							  error);
-		if (!ret) {
+		ret = bnxt_handle_tunnel_redirect_destroy(bp, filter, error);
+		if (!ret)
 			goto done;
-		} else {
-			bnxt_release_flow_lock(bp);
+		else
 			return ret;
-		}
 	}
 
 	ret = bnxt_match_filter(bp, filter);
@@ -1903,7 +1884,36 @@ done:
 			       "Failed to destroy flow.");
 	}
 
+	return ret;
+}
+
+static int
+bnxt_flow_destroy(struct rte_eth_dev *dev,
+		  struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct bnxt *bp = dev->data->dev_private;
+	int ret = 0;
+
+	bnxt_acquire_flow_lock(bp);
+	if (!flow) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Invalid flow: failed to destroy flow.");
+		bnxt_release_flow_lock(bp);
+		return -EINVAL;
+	}
+
+	if (!flow->filter) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Invalid flow: failed to destroy flow.");
+		bnxt_release_flow_lock(bp);
+		return -EINVAL;
+	}
+
+	ret = _bnxt_flow_destroy(bp, flow, error);
 	bnxt_release_flow_lock(bp);
+
 	return ret;
 }
 
@@ -1911,7 +1921,6 @@ static int
 bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
 {
 	struct bnxt *bp = dev->data->dev_private;
-	struct bnxt_filter_info *filter = NULL;
 	struct bnxt_vnic_info *vnic;
 	struct rte_flow *flow;
 	unsigned int i;
@@ -1925,66 +1934,17 @@ bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
 
 		while (!STAILQ_EMPTY(&vnic->flow_list)) {
 			flow = STAILQ_FIRST(&vnic->flow_list);
-			filter = flow->filter;
 
-			if (filter->filter_type ==
-			    HWRM_CFA_TUNNEL_REDIRECT_FILTER &&
-			    filter->enables == filter->tunnel_type) {
-				ret =
-				bnxt_handle_tunnel_redirect_destroy(bp,
-								    filter,
-								    error);
-				if (!ret) {
-					goto done;
-				} else {
-					bnxt_release_flow_lock(bp);
-					return ret;
-				}
-			}
+			if (!flow->filter)
+				continue;
 
-			if (filter->filter_type == HWRM_CFA_EM_FILTER)
-				ret = bnxt_hwrm_clear_em_filter(bp, filter);
-			if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
-				ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
-			else if (i)
-				ret = bnxt_hwrm_clear_l2_filter(bp, filter);
-
-			if (ret) {
-				rte_flow_error_set
-					(error,
-					 -ret,
-					 RTE_FLOW_ERROR_TYPE_HANDLE,
-					 NULL,
-					 "Failed to flush flow in HW.");
-				bnxt_release_flow_lock(bp);
-				return -rte_errno;
-			}
-done:
-			STAILQ_REMOVE(&vnic->flow_list, flow,
-				      rte_flow, next);
-
-			STAILQ_REMOVE(&vnic->filter,
-				      filter,
-				      bnxt_filter_info,
-				      next);
-			bnxt_free_filter(bp, filter);
-
-			rte_free(flow);
-
-			/* If this was the last flow associated with this vnic,
-			 * switch the queue back to RSS pool.
-			 */
-			if (STAILQ_EMPTY(&vnic->flow_list)) {
-				rte_free(vnic->fw_grp_ids);
-				if (vnic->rx_queue_cnt > 1)
-					bnxt_hwrm_vnic_ctx_free(bp, vnic);
-				bnxt_hwrm_vnic_free(bp, vnic);
-				vnic->rx_queue_cnt = 0;
-			}
+			ret = _bnxt_flow_destroy(bp, flow, error);
+			if (ret)
+				break;
 		}
 	}
-
 	bnxt_release_flow_lock(bp);
+
 	return ret;
 }
-- 
2.20.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2020-02-11 11:17:40.953827556 +0000
+++ 0054-net-bnxt-fix-flow-flush-to-sync-with-flow-destroy.patch	2020-02-11 11:17:38.424001795 +0000
@@ -1,8 +1,10 @@
-From e339ef6e357c97c512a37fbba13859878a496636 Mon Sep 17 00:00:00 2001
+From d6fd94c20c1af71a5b0eb00ce271d85152976374 Mon Sep 17 00:00:00 2001
 From: Somnath Kotur
 Date: Fri, 20 Dec 2019 18:29:39 -0800
 Subject: [PATCH] net/bnxt: fix flow flush to sync with flow destroy
 
+[ upstream commit e339ef6e357c97c512a37fbba13859878a496636 ]
+
 Sync flow flush routine with flow destroy so that the operations
 performed per flow during a flush are the same as that are done for an
 individual flow destroy by having a common function to call for both.
@@ -13,7 +15,6 @@
 update as it would've bumped up previously in validate_and_parse_flow()
 
 Fixes: 89278c59d97c ("net/bnxt: fix flow flush handling")
-Cc: stable@dpdk.org
 
 Signed-off-by: Somnath Kotur
 Reviewed-by: Santoshkumar Karanappa Rastapur
@@ -22,10 +23,10 @@
  1 file changed, 46 insertions(+), 86 deletions(-)
 
 diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
-index 7bd6811f16..7343b7e7b4 100644
+index 76e9584da7..dd40b2d72e 100644
 --- a/drivers/net/bnxt/bnxt_flow.c
 +++ b/drivers/net/bnxt/bnxt_flow.c
-@@ -1536,10 +1536,13 @@ bnxt_update_filter(struct bnxt *bp, struct bnxt_filter_info *old_filter,
+@@ -1537,10 +1537,13 @@ bnxt_update_filter(struct bnxt *bp, struct bnxt_filter_info *old_filter,
  	 * filter which points to the new destination queue and so we clear
  	 * the previous L2 filter. For ntuple filters, we are going to reuse
  	 * the old L2 filter and create new NTUPLE filter with this new
@@ -41,7 +42,7 @@
  		bnxt_hwrm_set_l2_filter(bp, new_filter->dst_id, new_filter);
  	} else {
  		if (new_filter->filter_type == HWRM_CFA_EM_FILTER)
-@@ -1816,46 +1819,24 @@ static int bnxt_handle_tunnel_redirect_destroy(struct bnxt *bp,
+@@ -1817,46 +1820,24 @@ static int bnxt_handle_tunnel_redirect_destroy(struct bnxt *bp,
  }
 
  static int
-@@ -1902,7 +1883,36 @@ done:
+@@ -1903,7 +1884,36 @@ done:
  			       "Failed to destroy flow.");
  	}
 
-@@ -1910,7 +1920,6 @@ static int
+@@ -1911,7 +1921,6 @@ static int
  bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
  {
  	struct bnxt *bp = dev->data->dev_private;
-@@ -1924,66 +1933,17 @@ bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+@@ -1925,66 +1934,17 @@ bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
 
  		while (!STAILQ_EMPTY(&vnic->flow_list)) {
  			flow = STAILQ_FIRST(&vnic->flow_list);
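[ Editor's note: the second half of the fix -- unconditionally clearing the
  old L2 filter in bnxt_update_filter() -- comes down to reference-count
  symmetry: parsing an n-tuple flow takes a reference on the reused L2
  filter, so updating (and thus deleting the old n-tuple filter) must drop
  it again. A toy model of that invariant, with all names invented for
  illustration only: ]

#include <assert.h>

struct l2_filter {
	int ref_cnt;
};

/* Parsing a new n-tuple flow reuses an existing L2 filter and takes a
 * reference on it, as bnxt_validate_and_parse_flow() bumps l2_ref_cnt.
 */
static void parse_flow(struct l2_filter *l2)
{
	l2->ref_cnt++;
}

/* A filter update deletes the old n-tuple filter, so the reference it
 * held on the L2 filter must be dropped; this is what the patch's
 * unconditional bnxt_hwrm_clear_l2_filter(bp, old_filter) achieves.
 */
static void update_filter(struct l2_filter *l2)
{
	l2->ref_cnt--;
}

int main(void)
{
	struct l2_filter l2 = { .ref_cnt = 1 };

	parse_flow(&l2);	/* ref_cnt == 2 */
	update_filter(&l2);	/* back to 1: no leaked reference */
	assert(l2.ref_cnt == 1);
	return 0;
}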