DPDK patches and discussions
From: Olga Shern <olgas@mellanox.com>
To: "Zhou, Danny" <danny.zhou@intel.com>,
	Raghav Sethi <raghavs@CS.Princeton.EDU>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Mellanox Flow Steering
Date: Mon, 13 Apr 2015 16:59:48 +0000	[thread overview]
Message-ID: <AM2PR05MB099505AC7339666E65EADF1ED3E70@AM2PR05MB0995.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <DFDF335405C17848924A094BC35766CF0AB54B69@SHSMSX104.ccr.corp.intel.com>

Hi Danny, 

Please see below

Best Regards,
Olga

-----Original Message-----
From: Zhou, Danny [mailto:danny.zhou@intel.com] 
Sent: Monday, April 13, 2015 2:30 AM
To: Olga Shern; Raghav Sethi; dev@dpdk.org
Subject: RE: [dpdk-dev] Mellanox Flow Steering

Thanks for the clarification, Olga. I assume that once the PMD is upgraded to support Flow Director, the rules should be set only by the PMD while the DPDK application is running, right?
[Olga ] Right
Also, when the DPDK application exits, the rules previously written by the PMD become invalid, and the user then needs to reset the rules with ethtool via the mlx4_en driver.
[Olga ] Right

I think it does not make sense to allow two drivers, one in the kernel and one in user space, to control the same NIC device simultaneously. Otherwise, a control-plane synchronization mechanism is needed between the two drivers.
[Olga ] Agree :) We are looking for a solution 
 
A master driver solely responsible for NIC control is expected.
[Olga ] Or there should be a synchronization mechanism, as you mentioned before.
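
For illustration only, below is a rough sketch of what adding such a rule from the application side might look like through DPDK's legacy filter-control API (rte_eth_dev_filter_ctrl with RTE_ETH_FILTER_FDIR), once the mlx4 PMD gains Flow Director support. The UDP match, the port and queue numbers, and the exact struct/enum spellings are assumptions for illustration, not something the mlx4 PMD accepts at the time of this thread.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>
#include <rte_byteorder.h>

/* Hypothetical: steer UDP/IPv4 packets with destination port 5000 to
 * RX queue 1 using the legacy Flow Director filter API. */
static int add_fdir_rule(uint8_t port_id)
{
    struct rte_eth_fdir_filter f;

    memset(&f, 0, sizeof(f));
    f.soft_id = 1;
    f.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
    f.input.flow.udp4_flow.dst_port = rte_cpu_to_be_16(5000);
    f.action.rx_queue = 1;                      /* target RX queue */
    f.action.behavior = RTE_ETH_FDIR_ACCEPT;
    f.action.report_status = RTE_ETH_FDIR_NO_REPORT_STATUS;

    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
                                   RTE_ETH_FILTER_ADD, &f);
}

Rules added this way live only as long as the application holds the port, which is consistent with the point above that they have to be reinstalled after the application exits.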

> -----Original Message-----
> From: Olga Shern [mailto:olgas@mellanox.com]
> Sent: Monday, April 13, 2015 4:39 AM
> To: Raghav Sethi; Zhou, Danny; dev@dpdk.org
> Subject: RE: [dpdk-dev] Mellanox Flow Steering
> 
> Hi Raghav,
> 
> You are right in your observations: the Mellanox PMD and the mlx4_en kernel driver coexist.
> When a DPDK application runs, all traffic is redirected to the DPDK
> application. When the DPDK application exits, the traffic is received by the mlx4_en driver.
> 
> Regarding the ethtool configuration you did: it affects only the mlx4_en driver; it does not affect the Mellanox PMD queues.
> 
> The Mellanox PMD does not support Flow Director, as you mention, and we are working to add it.
> Currently, the only way to spread traffic between different PMD queues is to use RSS.
> 
> Best Regards,
> Olga
> 
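
For reference, here is a minimal sketch of the RSS-based spreading described above, using a generic port-init helper; the queue counts, descriptor counts, and hash fields are illustrative assumptions, and the mlx4 PMD may not honor every ETH_RSS_* flag.

#include <rte_ethdev.h>

/* Enable RSS so that incoming IP/UDP traffic is hashed across 4 RX queues. */
static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,                 /* driver default key */
            .rss_hf  = ETH_RSS_IP | ETH_RSS_UDP,
        },
    },
};

static int configure_port_rss(uint8_t port_id, struct rte_mempool *mp)
{
    const uint16_t nb_rxq = 4, nb_txq = 4;
    uint16_t q;
    int ret;

    ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
    if (ret < 0)
        return ret;

    for (q = 0; q < nb_rxq; q++) {
        ret = rte_eth_rx_queue_setup(port_id, q, 128,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL, mp);
        if (ret < 0)
            return ret;
    }
    for (q = 0; q < nb_txq; q++) {
        ret = rte_eth_tx_queue_setup(port_id, q, 512,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL);
        if (ret < 0)
            return ret;
    }
    return rte_eth_dev_start(port_id);
}

Note that RSS hashes on the IP/UDP fields, so traffic that differs only in destination MAC (as in the test described below) would still tend to land on a single queue.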
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> Sent: Sunday, April 12, 2015 7:18 PM
> To: Zhou, Danny; dev@dpdk.org
> Subject: Re: [dpdk-dev] Mellanox Flow Steering
> 
> Hi Danny,
> 
> Thanks, that's helpful. However, Mellanox cards don't support Intel
> Flow Director, so how would one go about installing these rules in the
> NIC? The only technique the Mellanox User Manual
> (http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf)
> lists for using Flow Steering is the ethtool-based method.
> 
> Additionally, the mlx4_core driver is used both by the DPDK PMD and
> otherwise (unlike the igb_uio driver, which needs to be loaded to use a
> PMD), and it seems odd that only the packets matched by the rules fail to
> reach the DPDK application. That suggests the NIC is still acting on the
> rules somehow even though a DPDK application is running.
> 
> Best,
> Raghav
> 
> On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny <danny.zhou@intel.com> wrote:
> 
> > Currently, a DPDK PMD and a NIC kernel driver cannot drive the same
> > NIC device simultaneously. When you use ethtool to set up a flow
> > director filter, the rules are written to the NIC via the ethtool
> > support in the kernel driver. But when the DPDK PMD is loaded to drive
> > the same device, the rules previously written by ethtool/the kernel
> > driver become invalid, so you may have to use the DPDK APIs to rewrite
> > your rules to the NIC.
> >
> > The bifurcated driver was designed to support scenarios where the
> > kernel driver and DPDK coexist, but it raised security concerns, so
> > the netdev maintainers rejected it.
> >
> > It should not be a Mellanox hardware problem; if you try it on an
> > Intel NIC, the result is the same.
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raghav Sethi
> > > Sent: Sunday, April 12, 2015 1:10 PM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] Mellanox Flow Steering
> > >
> > > Hi folks,
> > >
> > > I'm trying to use the flow steering features of the Mellanox card 
> > > to effectively use a multicore server for a benchmark.
> > >
> > > The system has a single-port Mellanox ConnectX-3 EN, and I want to
> > > use 4 of the 32 cores present and 4 of the 16 RX queues supported by
> > > the hardware (i.e. one RX queue per core).
> > >
> > > I assign RX queues to each of the cores, but obviously without
> > > flow steering (all the packets have the same IP and UDP headers,
> > > but different dest MACs in the Ethernet headers) all the packets hit
> > > a single core. I've set up the client such that it sends packets with
> > > a different destination MAC for each RX queue (e.g. RX queue 1 should
> > > get 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01, and
> > > so on).
> > >
> > > I try to accomplish this by using ethtool to set flow steering rules
> > > (e.g. ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1
> > > loc 1, ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2
> > > loc 2, ...).
> > >
> > > As soon as I set up these rules, though, packets matching them just
> > > stop hitting my application. All other packets go through, and
> > > removing the rules also causes the packets to go through. I'm pretty
> > > sure my application is looking at all the queues, but I tried changing
> > > the rules to try a rule for every single destination RX queue (0-16),
> > > and that doesn't work either.
> > >
> > > If it helps, my code is based on the l2fwd sample application and is
> > > here: https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> > >
> > > Also, I added the following to my /etc/init.d: options mlx4_core 
> > > log_num_mgm_entry_size=-1, and restarted the driver before any of 
> > > these tests.
> > >
> > > Any ideas what might be causing my packets to drop? In case this 
> > > is a Mellanox issue, should I be talking to their customer support?
> > >
> > > Best,
> > > Raghav Sethi
> >
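
As a side note for anyone reproducing this setup, the receive side of an l2fwd-style application with one RX queue polled per lcore usually looks roughly like the sketch below. The queue-to-lcore mapping and the burst size are assumptions for illustration and are not taken from the gist above.

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Each worker lcore polls exactly one RX queue; the queue index is
 * derived from the lcore index here purely for illustration. */
static int lcore_rx_loop(void *arg)
{
    uint8_t port_id = *(uint8_t *)arg;
    uint16_t queue_id = (uint16_t)rte_lcore_index(rte_lcore_id());
    struct rte_mbuf *bufs[BURST_SIZE];
    uint16_t i, nb_rx;

    for (;;) {
        nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
        for (i = 0; i < nb_rx; i++) {
            /* ... examine/forward the packet here ... */
            rte_pktmbuf_free(bufs[i]);
        }
    }
    return 0;
}

A loop like this is typically launched on each worker core with rte_eal_remote_launch(); with only RSS available (and no MAC-based steering), the queue a given packet reaches is decided by the RSS hash, not by the destination MAC.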

Thread overview: 7+ messages
2015-04-12  5:10 Raghav Sethi
2015-04-12 11:47 ` Zhou, Danny
2015-04-12 16:17   ` Raghav Sethi
2015-04-12 20:39     ` Olga Shern
2015-04-12 23:29       ` Zhou, Danny
2015-04-13 16:59         ` Olga Shern [this message]
2015-04-13 18:01       ` Raghav Sethi
