From mboxrd@z Thu Jan 1 00:00:00 1970
From: luca.boccassi@gmail.com
To: RongQing Li
Cc: Dongsheng Rong, Wei Hu, dpdk stable
Date: Wed, 28 Oct 2020 10:45:24 +0000
Message-Id: <20201028104606.3504127-165-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201028104606.3504127-1-luca.boccassi@gmail.com>
References: <20201028104606.3504127-1-luca.boccassi@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] patch 'net/bonding: fix possible unbalanced packet receiving' has been queued to stable release 19.11.6
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org
Sender: "stable"

Hi,

FYI, your patch has been queued to stable release 19.11.6

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 10/30/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Thanks.
Luca Boccassi

---
>From 8b53ea69745f6fd2fc21029c3dc7f36e4ea7abdf Mon Sep 17 00:00:00 2001
From: RongQing Li
Date: Tue, 22 Sep 2020 18:29:31 +0800
Subject: [PATCH] net/bonding: fix possible unbalanced packet receiving

[ upstream commit 97602faa9e03c91465fc55f5464762796ce641c7 ]

Current Rx round robin policy for the slaves has two issue:

1. active_slave in bond_dev_private is shared by multiple PMDS which
maybe cause some slave Rx hungry, for example, there is two PMD and
two slave port, both PMDs start to receive, and see that active_slave
is 0, and receive from slave 0, after complete, they increase
active_slave by one, totally active_slave are increased by two, next
time, they will start to receive from slave 0 again, at last, slave 1
maybe drop packets during to not be polled by PMD

2. active_slave is shared and written by multiple PMD in RX path for
every time RX, this is a kind of cache false share, low performance.

So move active_slave from bond_dev_private to bond_rx_queue make it
as per queue variable

Fixes: ae2a04864a9a ("net/bonding: reduce slave starvation on Rx poll")

Signed-off-by: RongQing Li
Signed-off-by: Dongsheng Rong
Reviewed-by: Wei Hu (Xavier)
---
 drivers/net/bonding/eth_bond_private.h |  3 ++-
 drivers/net/bonding/rte_eth_bond_api.c |  6 ------
 drivers/net/bonding/rte_eth_bond_pmd.c | 14 +++++++-------
 3 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index c9b2d0fe46..af92a4c52a 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -50,6 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
 /** Port Queue Mapping Structure */
 struct bond_rx_queue {
 	uint16_t queue_id;
+	/**< Next active_slave to poll */
+	uint16_t active_slave;
 	/**< Queue Id */
 	struct bond_dev_private *dev_private;
 	/**< Reference to eth_dev private structure */
@@ -132,7 +134,6 @@ struct bond_dev_private {
 	uint16_t nb_rx_queues;			/**< Total number of rx queues */
 	uint16_t nb_tx_queues;			/**< Total number of tx queues*/
-	uint16_t active_slave;		/**< Next active_slave to poll */
 	uint16_t active_slave_count;		/**< Number of active slaves */
 	uint16_t active_slaves[RTE_MAX_ETHPORTS];    /**< Active slave list */
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 97c667e007..a4007fe07c 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -129,12 +129,6 @@ deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
 	RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
 	internals->active_slave_count = active_count;
 
-	/* Resetting active_slave when reaches to max
-	 * no of slaves in active list
-	 */
-	if (internals->active_slave >= active_count)
-		internals->active_slave = 0;
-
 	if (eth_dev->data->dev_started) {
 		if (internals->mode == BONDING_MODE_8023AD) {
 			bond_mode_8023ad_start(eth_dev);
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index fccfcb2c89..178a39096b 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -69,7 +69,7 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
 	internals = bd_rx_q->dev_private;
 	slave_count = internals->active_slave_count;
-	active_slave = internals->active_slave;
+	active_slave = bd_rx_q->active_slave;
 
 	for (i = 0; i < slave_count && nb_pkts; i++) {
 		uint16_t num_rx_slave;
@@ -86,8 +86,8 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		active_slave = 0;
 	}
 
-	if (++internals->active_slave >= slave_count)
-		internals->active_slave = 0;
+	if (++bd_rx_q->active_slave >= slave_count)
+		bd_rx_q->active_slave = 0;
 
 	return num_rx_total;
 }
@@ -303,9 +303,9 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
 
-	idx = internals->active_slave;
+	idx = bd_rx_q->active_slave;
 	if (idx >= slave_count) {
-		internals->active_slave = 0;
+		bd_rx_q->active_slave = 0;
 		idx = 0;
 	}
 	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
@@ -367,8 +367,8 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
 		idx = 0;
 	}
 
-	if (++internals->active_slave >= slave_count)
-		internals->active_slave = 0;
+	if (++bd_rx_q->active_slave >= slave_count)
+		bd_rx_q->active_slave = 0;
 
 	return num_rx_total;
 }
-- 
2.20.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2020-10-28 10:35:16.847205425 +0000
+++ 0165-net-bonding-fix-possible-unbalanced-packet-receiving.patch	2020-10-28 10:35:11.768833910 +0000
@@ -1,8 +1,10 @@
-From 97602faa9e03c91465fc55f5464762796ce641c7 Mon Sep 17 00:00:00 2001
+From 8b53ea69745f6fd2fc21029c3dc7f36e4ea7abdf Mon Sep 17 00:00:00 2001
 From: RongQing Li
 Date: Tue, 22 Sep 2020 18:29:31 +0800
 Subject: [PATCH] net/bonding: fix possible unbalanced packet receiving
 
+[ upstream commit 97602faa9e03c91465fc55f5464762796ce641c7 ]
+
 Current Rx round robin policy for the slaves has two issue:
 
 1. active_slave in bond_dev_private is shared by multiple PMDS which
@@ -20,7 +22,6 @@
 per queue variable
 
 Fixes: ae2a04864a9a ("net/bonding: reduce slave starvation on Rx poll")
-Cc: stable@dpdk.org
 
 Signed-off-by: RongQing Li
 Signed-off-by: Dongsheng Rong
@@ -32,7 +33,7 @@
 3 files changed, 9 insertions(+), 14 deletions(-)
 
 diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
-index 0a0034705d..62e3a9dbf3 100644
+index c9b2d0fe46..af92a4c52a 100644
 --- a/drivers/net/bonding/eth_bond_private.h
 +++ b/drivers/net/bonding/eth_bond_private.h
 @@ -50,6 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
@@ -70,7 +71,7 @@
 	if (internals->mode == BONDING_MODE_8023AD) {
 		bond_mode_8023ad_start(eth_dev);
 diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
-index 1f761c7c9e..05ac25fcad 100644
+index fccfcb2c89..178a39096b 100644
 --- a/drivers/net/bonding/rte_eth_bond_pmd.c
 +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
 @@ -69,7 +69,7 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)