From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
	by dpdk.org (Postfix) with ESMTP id 68663938E
	for ; Tue, 5 Jan 2016 14:47:03 +0100 (CET)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by fmsmga102.fm.intel.com with ESMTP; 05 Jan 2016 05:47:02 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.20,525,1444719600"; d="scan'208";a="853902918"
Received: from dwdohert-dpdk.ir.intel.com ([163.33.213.167])
	by orsmga001.jf.intel.com with ESMTP; 05 Jan 2016 05:47:01 -0800
To: Eric Kinzie , Andriy Berestovskyy
References: <1449249260-15165-1-git-send-email-stephen@networkplumber.org>
	<1449249260-15165-7-git-send-email-stephen@networkplumber.org>
	<20151204191831.GA20647@roosta.home>
From: Declan Doherty
Message-ID: <568BC91D.8090806@intel.com>
Date: Tue, 5 Jan 2016 13:46:05 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.4.0
MIME-Version: 1.0
In-Reply-To: <20151204191831.GA20647@roosta.home>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Cc: dev@dpdk.org, Eric Kinzie
Subject: Re: [dpdk-dev] [PATCH 6/8] bond: handle slaves with fewer queues than bonding device
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
X-List-Received-Date: Tue, 05 Jan 2016 13:47:03 -0000

On 04/12/15 19:18, Eric Kinzie wrote:
> On Fri Dec 04 19:36:09 +0100 2015, Andriy Berestovskyy wrote:
>> Hi guys,
>> I'm not quite sure we can support fewer TX queues on a slave that easily:
>>
>>> queue_id = bond_slave_txqid(internals, i, bd_tx_q->queue_id);
>>> num_tx_slave = rte_eth_tx_burst(slaves[i], queue_id,
>>> 	slave_bufs[i], slave_nb_pkts[i]);
>>
>> It seems that two different lcores might end up writing to the same
>> slave queue at the same time, doesn't it?
>>
>> Regards,
>> Andriy
>
> Andriy, I think you're probably right about this. Perhaps it should
> instead refuse to add, or refuse to activate, a slave with too few
> tx queues. We could probably fix this with another layer of buffering
> so that an lcore with a valid tx queue could pick up the mbufs later,
> but this doesn't seem very appealing.
>
> Eric
>
>
>> On Fri, Dec 4, 2015 at 6:14 PM, Stephen Hemminger wrote:
>>> From: Eric Kinzie
>>>
>>> In the event that the bonding device has a greater number of tx and/or rx
>>> queues than the slave being added, track the queue limits of the slave.
>>> On receive, ignore queue identifiers beyond what the slave interface
>>> can support. During transmit, pick a different queue id to use if the
>>> intended queue is not available on the slave.
>>>
>>> Signed-off-by: Eric Kinzie
>>> Signed-off-by: Stephen Hemminger
>>> ---
...

I don't think there is any straightforward way of supporting slaves with different numbers of queues; the initial library was written with the assumption that the number of tx/rx queues would always be the same on each slave. This is why, when a slave is added to a bonded device, we reconfigure its queues. For features like RSS we have to have the same number of rx queues, otherwise the flow distribution to an application could change in the case of a failover event. Also, by supporting different numbers of queues between slaves, we would no longer be supporting the standard behavior of ethdevs in DPDK, where we expect that by using different queues we don't require locking to be thread safe.