From: Ferruh Yigit
To: Li RongQing, dev@dpdk.org
Date: Fri, 9 Oct 2020 14:44:20 +0100
Message-ID: <0a936a50-abb5-15ae-7a1a-0a2ae7a9d03c@intel.com>
In-Reply-To: <1600770572-22716-1-git-send-email-lirongqing@baidu.com>
Subject: Re: [dpdk-dev] [PATCH 1/2] net/bonding: fix a possible unbalance packet receiving

On 9/22/2020 11:29 AM, Li RongQing wrote:
> The current Rx round-robin policy for the slaves has two issues:
>
> 1. active_slave in bond_dev_private is shared by multiple polling
>    threads, which can leave some slaves starved on Rx. For example,
>    with two polling threads and two slave ports, both threads see
>    active_slave as 0 and receive from slave 0; each then increments
>    active_slave, advancing it by two in total, so next time both
>    start from slave 0 again. Slave 1 may end up dropping packets
>    because it is never polled.
>
> 2. active_slave is shared and written by multiple polling threads on
>    every Rx call, which is a form of cache false sharing and hurts
>    performance.
>
> So move active_slave from bond_dev_private to bond_rx_queue,
> making it a per-queue variable.
>
> Signed-off-by: Li RongQing
> Signed-off-by: Dongsheng Rong

Fixes: ae2a04864a9a ("net/bonding: reduce slave starvation on Rx poll")
Cc: stable@dpdk.org

For the series,
Reviewed-by: Wei Hu (Xavier)

Series applied to dpdk-next-net/main, thanks.
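
To illustrate the idea behind the patch, here is a minimal sketch (not the actual DPDK bonding driver code) of a per-queue round-robin receive. The struct members and the slave_rx_burst() helper are simplified stand-ins, assumed for illustration only; the real driver uses rte_eth_rx_burst() and the full bond_rx_queue definition.

```c
#include <stdint.h>

#define MAX_SLAVES 8

struct slave_port {
	uint16_t port_id;
};

/*
 * Per-queue state: each Rx queue keeps its own round-robin cursor,
 * so concurrent polling threads never share (or false-share) it.
 */
struct bond_rx_queue {
	uint16_t active_slave;   /* moved here from the shared device state */
	uint16_t num_slaves;
	struct slave_port slaves[MAX_SLAVES];
};

/* Hypothetical per-slave receive helper standing in for rte_eth_rx_burst(). */
extern uint16_t slave_rx_burst(uint16_t port_id, void **pkts, uint16_t nb_pkts);

static uint16_t
bond_rx_round_robin(struct bond_rx_queue *q, void **pkts, uint16_t nb_pkts)
{
	uint16_t nb_rx = 0;
	uint16_t idx = q->active_slave;
	uint16_t i;

	/*
	 * Poll each slave at most once, starting where this queue stopped
	 * last time, so no slave is starved even when several queues poll
	 * concurrently.
	 */
	for (i = 0; i < q->num_slaves && nb_rx < nb_pkts; i++) {
		nb_rx += slave_rx_burst(q->slaves[idx].port_id,
					pkts + nb_rx, nb_pkts - nb_rx);
		idx = (idx + 1) % q->num_slaves;
	}

	/* Remember the next slave to start from; private to this queue. */
	q->active_slave = idx;
	return nb_rx;
}
```

Because the cursor lives in the queue structure rather than in the shared device private data, two queues polled from different cores advance independent counters, which removes both the starvation pattern and the shared cache-line write described above.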