From mboxrd@z Thu Jan  1 00:00:00 1970
From: Declan Doherty
To: haifeng.lin@huawei.com, "dev@dpdk.org", "Yigit, Ferruh"
Date: Fri, 10 Feb 2017 16:30:32 +0000
Message-ID: <0568e9d3-414d-c8b1-2d58-14a25a1eced3@intel.com>
In-Reply-To: <1479460122-18780-1-git-send-email-haifeng.lin@huawei.com>
Subject: Re: [dpdk-dev] [PATCH] net/bonding: improve non-ip packets RSS
List-Id: DPDK patches and discussions

On 18/11/16 09:08, haifeng.lin at huawei.com (Haifeng Lin) wrote:
> Most Ethernet devices do not support RSS for non-IP packets, so only
> the first queue can be used to receive them. In this scenario an LACP
> bond can only use one queue even if multiple queues are configured.
>
> We use the following formula to change the mapping between bond_qid
> and slave_qid so that at least slave_num queues can receive packets:
>
> slave_qid = (bond_qid + slave_id) % queue_num
>
> Signed-off-by: Haifeng Lin
> ---
>  drivers/net/bonding/rte_eth_bond_pmd.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 09ce7bf..8ad843a 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -141,6 +141,8 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
>  	uint8_t collecting; /* current slave collecting status */
>  	const uint8_t promisc = internals->promiscuous_en;
>  	uint8_t i, j, k;
> +	int slave_qid, bond_qid = bd_rx_q->queue_id;
> +	int queue_num = internals->nb_rx_queues;
>
>  	rte_eth_macaddr_get(internals->port_id, &bond_mac);
>  	/* Copy slave list to protect against slave up/down changes during tx
> @@ -154,7 +156,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
>  		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
>
>  		/* Read packets from this slave */
> -		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
> +		slave_qid = queue_num ? (bond_qid + slaves[i]) % queue_num :
> +				bond_qid;
> +		num_rx_total += rte_eth_rx_burst(slaves[i], slave_qid,
>  				&bufs[num_rx_total], nb_pkts - num_rx_total);
>
>  		for (k = j; k < 2 && k < num_rx_total; k++)
>

Nack. I think this could introduce unexpected behaviour, as packets could
then be read from a different slave queue than the queue id specified by
the calling function, whereas the expected behaviour is a 1:1 queue
mapping from bond queues to slave queues.

If RSS is needed for ethdevs which don't support it natively, I think the
appropriate solution is a software RSS implementation which can be
enabled at the slave ethdev level itself. I don't think the bonding layer
should be implementing this functionality.