From: Raghav Sethi
Date: Sun, 12 Apr 2015 16:17:52 +0000
To: "Zhou, Danny", dev@dpdk.org
Subject: Re: [dpdk-dev] Mellanox Flow Steering

Hi Danny,

Thanks, that's helpful. However, Mellanox cards don't support Intel Flow
Director, so how would one go about installing these rules in the NIC? The
only technique the Mellanox User Manual
(http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf)
lists for flow steering is the ethtool-based method.

Additionally, the mlx4_core driver is used both by the DPDK PMD and by the
regular kernel network stack (unlike the igb_uio driver, which must be
loaded to use the PMD on Intel NICs), and it seems odd that only the
packets matching the rules fail to reach the DPDK application. That
suggests the NIC is still applying the rules somehow even while the DPDK
application is running.
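For reference, a DPDK-native equivalent of those ethtool rules would look
roughly like the sketch below. It uses the rte_flow API, which was only
added in DPDK 17.02 (later than the releases discussed in this thread), so
it is illustrative rather than something available at the time; field
names follow recent DPDK releases, and the port and queue ids are
placeholders.

    /* Sketch: steer frames with a given destination MAC to a specific RX
     * queue via rte_flow. Illustrative only; rte_flow postdates this
     * thread, and mlx4 rule support varies by release. */
    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_flow.h>

    static struct rte_flow *
    steer_dst_mac_to_queue(uint16_t port_id, uint16_t queue_id,
                           const struct rte_ether_addr *dst_mac,
                           struct rte_flow_error *err)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_eth spec, mask;
        struct rte_flow_action_queue queue = { .index = queue_id };

        memset(&spec, 0, sizeof(spec));
        memset(&mask, 0, sizeof(mask));
        spec.hdr.dst_addr = *dst_mac;               /* match this dst MAC */
        memset(&mask.hdr.dst_addr, 0xff,
               sizeof(mask.hdr.dst_addr));          /* all 48 bits exact */

        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH,
              .spec = &spec, .mask = &mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Returns NULL on failure; err->message says why the PMD
         * refused the rule. */
        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }

A call like steer_dst_mac_to_queue(0, 1, &mac, &err) would then mirror
"ethtool -U p7p1 flow-type ether dst <mac> action 1".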
Best,
Raghav

On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny wrote:

> Currently, the DPDK PMD and the NIC kernel driver cannot drive the same
> NIC device simultaneously. When you use ethtool to set up a flow director
> filter, the rules are written to the NIC via the ethtool support in the
> kernel driver. But when the DPDK PMD is loaded to drive the same device,
> the rules previously written via ethtool/the kernel driver become
> invalid, so you may have to use the DPDK APIs to rewrite your rules to
> the NIC.
>
> The bifurcated driver was designed to support scenarios where the kernel
> driver and DPDK coexist, but it has security concerns, so the netdev
> maintainer rejected it.
>
> This should not be a Mellanox hardware problem; if you try it on an
> Intel NIC the result is the same.
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> > Sent: Sunday, April 12, 2015 1:10 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] Mellanox Flow Steering
> >
> > Hi folks,
> >
> > I'm trying to use the flow steering features of the Mellanox card to
> > make effective use of a multicore server for a benchmark.
> >
> > The system has a single-port Mellanox ConnectX-3 EN, and I want to use
> > 4 of the 32 cores present and 4 of the 16 RX queues supported by the
> > hardware (i.e. one RX queue per core).
> >
> > I assign an RX queue to each of the cores, but without flow steering
> > all the packets hit a single core, presumably because they all have
> > the same IP and UDP headers (only the dest MACs in the ethernet
> > headers differ). I've set up the client so that it sends packets with
> > a different destination MAC for each RX queue (e.g. RX queue 1 should
> > get 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01, and
> > so on).
> >
> > I try to accomplish this by using ethtool to set flow steering rules,
> > e.g.:
> >
> >   ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1 loc 1
> >   ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2 loc 2
> >   ...
> >
> > As soon as I set up these rules, though, packets matching them just
> > stop hitting my application. All other packets go through, and
> > removing the rules also causes the matching packets to go through
> > again. I'm pretty sure my application is looking at all the queues,
> > but I also tried changing the rules to cover every single destination
> > RX queue (0-16), and that doesn't work either.
> >
> > If it helps, my code is based on the l2fwd sample application, and is
> > here: https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> >
> > Also, I added "options mlx4_core log_num_mgm_entry_size=-1" to my
> > /etc/init.d and restarted the driver before running any of these
> > tests.
> >
> > Any ideas what might be causing my packets to drop? In case this is a
> > Mellanox issue, should I be talking to their customer support?
> >
> > Best,
> > Raghav Sethi
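For concreteness, the queue-per-core receive path described in the
original message (based on l2fwd) reduces to roughly the pattern below.
This is a simplified sketch, not the code from the gist: the port id,
queue counts, and descriptor counts are placeholders, and error handling
is mostly omitted.

    /* Sketch of an l2fwd-style queue-per-core RX setup. Placeholder
     * values throughout; error handling mostly omitted. */
    #include <stdint.h>
    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    #define NB_RX_QUEUES 4
    #define BURST_SIZE   32
    #define NB_DESC      512

    /* Created elsewhere with rte_pktmbuf_pool_create(). */
    static struct rte_mempool *mbuf_pool;

    /* One RX queue per lcore: each worker polls only its own queue. */
    static int
    rx_loop(void *arg)
    {
        uint16_t queue_id = *(uint16_t *)arg;
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            uint16_t n = rte_eth_rx_burst(0 /* port */, queue_id,
                                          bufs, BURST_SIZE);
            for (uint16_t i = 0; i < n; i++)
                rte_pktmbuf_free(bufs[i]); /* real app processes here */
        }
        return 0;
    }

    static void
    setup_port(uint16_t port_id)
    {
        struct rte_eth_conf conf;
        uint16_t q;

        memset(&conf, 0, sizeof(conf));
        /* NB_RX_QUEUES RX queues, one TX queue */
        rte_eth_dev_configure(port_id, NB_RX_QUEUES, 1, &conf);
        for (q = 0; q < NB_RX_QUEUES; q++)
            rte_eth_rx_queue_setup(port_id, q, NB_DESC,
                                   rte_eth_dev_socket_id(port_id),
                                   NULL, mbuf_pool);
        rte_eth_tx_queue_setup(port_id, 0, NB_DESC,
                               rte_eth_dev_socket_id(port_id), NULL);
        rte_eth_dev_start(port_id);
        /* The steered dst MACs differ from the port's own MAC, so
         * promiscuous mode is needed to receive them at all. */
        rte_eth_promiscuous_enable(port_id);
    }

Each worker lcore would be started with rte_eal_remote_launch(rx_loop,
&its_queue_id, lcore_id). Note that l2fwd enables promiscuous mode, which
matters in this setup because the steered destination MACs do not match
the port's own MAC address.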