To: Kyle Larose, dev@dpdk.org
From: "Doherty, Declan"
Date: Fri, 10 Nov 2017 16:37:25 +0000
Subject: Re: [dpdk-dev] rte_eth_bond 8023ad behaviour under congestion

On 08/11/2017 7:33 PM, Kyle Larose wrote:
> Hello,
>
> I've been doing some testing using the 8023ad link bonding driver on a
> system with 4 10G i40e interfaces in the link bond. One thing I've noticed
> is that if any of the links is overloaded while I don't have dedicated
> control queues enabled, it starts dropping LACPDUs on transmit. I quickly
> realized that it's because of the following code in
> bond_ethdev_tx_burst_8023ad:
>
>         num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
>                         slave_bufs[i], slave_nb_pkts[i]);
>
>         /* If tx burst fails drop slow packets */
>         for ( ; num_tx_slave < slave_slow_nb_pkts[i]; num_tx_slave++)
>                 rte_pktmbuf_free(slave_bufs[i][num_tx_slave]);
>
> This chunk of code basically treats the LACPDUs as very low priority,
> since they are generated infrequently. I'd like to ensure that LACPDUs are
> transmitted even under congestion in the case where dedicated queues are
> not supported.
>
> I can think of a few options to resolve this:
> 1) Store the LACPDUs in a per-slave buffer for later sending if tx fails,
> and make sure they are always at the front of the send buffer, so that
> when there's room they're sent (I'm not quite sure what the best way to
> do this is).

Yes, this sounds like a good idea. Ideally we would use the same buffers
which are used to pass the LACPDUs from the state machines to the slaves,
but add the packet back to the head of the buffer. As the LACPDUs are
generated at such a slow rate, we could probably just re-enqueue to the
existing ring we have today, provided it was configured as multi-producer.
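Something along these lines is roughly what I have in mind. This is only an
untested sketch: bond_slow_tx_ring() is a stand-in for however we end up
getting at the per-slave slow-packet ring, and it assumes that ring is
created multi-producer so it is safe to enqueue from the tx path as well as
from the periodic state-machine callback:

        num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
                        slave_bufs[i], slave_nb_pkts[i]);

        /* bond_slow_tx_ring() is a hypothetical accessor for the slave's
         * slow-packet rte_ring (assumed multi-producer). */
        struct rte_ring *slow_ring = bond_slow_tx_ring(slaves[i]);

        /* If the burst came up short, re-queue the unsent LACPDUs so they
         * get another chance on the next tx_burst call instead of being
         * dropped. */
        for ( ; num_tx_slave < slave_slow_nb_pkts[i]; num_tx_slave++) {
                if (rte_ring_enqueue(slow_ring,
                                slave_bufs[i][num_tx_slave]) != 0)
                        /* Ring full: fall back to today's behaviour. */
                        rte_pktmbuf_free(slave_bufs[i][num_tx_slave]);
        }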
> 2) Allow enabling the dedicated tx queue without enabling the dedicated
> rx queue.
>
> I think both 1 & 2 are good solutions on their own, and should probably
> both be implemented. #2 is ideal, but doesn't cover all cases (like if
> there are insufficient tx queues to dedicate one to this).
>
> How do people feel about these proposals?
>

I don't have any problems with independent enablement of the dedicated
tx/rx queues, and it should be pretty straightforward to do, as I think
they are fairly decoupled in the implementation. Unfortunately it will
require some new public APIs or breaking of the existing API, but a new API
isn't a big issue; there's a rough sketch of what the split could look like
at the bottom of this mail. I do think it's likely that in most cases the
dedicated tx/rx queues would be either both enabled or both disabled?

> Note: I understand that this is not ideal at all, since the lack of a
> dedicated rx queue means that LACPDUs could be dropped on rx. But in my
> use case that's less likely than link congestion, so I'd like to at least
> be resilient here.
>
> Thanks,
>
> Kyle
>

Thanks
Declan
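For reference, a rough sketch of the split enablement discussed above. The
per-direction functions below do not exist today; the current API in
rte_eth_bond_8023ad.h only offers the combined
rte_eth_bond_8023ad_dedicated_queues_enable()/_disable(), so the names and
behaviour here are purely illustrative:

        #include <stdio.h>
        #include <rte_eth_bond_8023ad.h>

        /* Hypothetical per-direction variants of the existing combined
         * rte_eth_bond_8023ad_dedicated_queues_enable(). */
        int rte_eth_bond_8023ad_dedicated_tx_queue_enable(uint16_t port_id);
        int rte_eth_bond_8023ad_dedicated_rx_queue_enable(uint16_t port_id);

        /* Called before the bonded port is started, as with the existing
         * dedicated-queue API. */
        static void
        enable_control_queues(uint16_t bond_port_id)
        {
                /* Take the dedicated tx queue even if the rx side cannot be
                 * dedicated, e.g. because a slave cannot steer slow-protocol
                 * frames to a separate rx queue. */
                if (rte_eth_bond_8023ad_dedicated_tx_queue_enable(bond_port_id) != 0)
                        printf("port %u: LACPDU tx stays on the data queues\n",
                                        bond_port_id);

                if (rte_eth_bond_8023ad_dedicated_rx_queue_enable(bond_port_id) != 0)
                        printf("port %u: LACPDU rx stays on the data queues\n",
                                        bond_port_id);
        }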