From: Kyle Larose
To: dev@dpdk.org
CC: Declan Doherty
Date: Wed, 8 Nov 2017 19:21:15 +0000
Subject: [dpdk-dev] rte_eth_bond 8023ad dedicated queues with i40e with vectorized rx does not work

Hello,

I've been doing some testing using the 8023ad link bonding driver on a system with 4 10G i40e interfaces in the link bond. It's working fine, except that when any of the links are overloaded, it starts dropping the LACPDUs, which is rather unfortunate for many reasons.

While thinking about that problem, I noticed that the driver provides the ability to allocate dedicated queues for rx and tx of LACPDUs. This is great! It solves my problem (sort of; I'll send another email about that later)... Or so I thought.
After enabling the dedicated queues, I noticed a few things:

1. The link bond never started distributing.
2. The slave interfaces started dropping frames on their dedicated control queues after some time.
3. The connected interfaces reported both sending and receiving LACP PDUs.

After digging into this, I found that the call to rte_eth_rx_burst was returning 0 packets, despite there being many in the queue. It turns out that the i40e was using one of the vectorized rx_burst functions, which require that the user poll for more than 1 packet at a time. bond_mode_8023ad_periodic_cb was polling for exactly one.

I changed the code to read up to 16 at a time, and everything started working. I'm not sure this is the right fix, though, since the normal behaviour of processing one packet at a time maintains some hold-offs/etc. that may be nice, and I don't want to discard any packets past the first one.

Does anyone have some thoughts/comments on this? I can submit a patch with my current workaround, if desired.

Thanks,

Kyle