DPDK patches and discussions
From: "Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>
To: husainee <husainee.plumber@nevisnetworks.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Random packet drops with ip_pipeline on R730.
Date: Wed, 9 Sep 2015 16:31:49 +0000	[thread overview]
Message-ID: <3EB4FA525960D640B5BDFFD6A3D89126478B93EC@IRSMSX108.ger.corp.intel.com> (raw)
In-Reply-To: <55F022F4.30207@nevisnetworks.com>

Hi Husainee,

Yes, please try release 2.1 and do come back to us with your findings. Based on your findings so far, though, it looks like this is not a SW issue with the ip_pipeline application (from the 2.0 release).

The packet I/O rate that you are using is a few Mpps, which is low enough to be sustained by either a 1.6 GHz or a 3.1 GHz CPU, so I don't think the CPU is the issue, but some other HW factors might be: how many PCIe lanes are routed to each of the NICs, are they PCIe Gen2 or Gen1, are the PCIe slots used by the NICs on the same CPU socket as the CPU core(s) you're using for packet forwarding, etc.? I think you got it right: the fastest way to debug this issue is to try out multiple CPUs and NICs.
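
As a rough cycle-budget check (my arithmetic, assuming the full 64-byte line rate on all four 1GbE ports from your earlier tests):

    4 ports x 1,488,095 pps = ~5.95 Mpps aggregate
    1.6 GHz / 5.95 Mpps     = ~269 CPU cycles per packet

That is a comfortable budget for simple pass-through forwarding, so the 1.6 GHz clock by itself should not explain the drops.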

Regards,
Cristian

From: husainee [mailto:husainee.plumber@nevisnetworks.com]
Sent: Wednesday, September 9, 2015 3:16 PM
To: Dumitrescu, Cristian; dev@dpdk.org
Cc: Cao, Waterman
Subject: Re: [dpdk-dev] Random packet drops with ip_pipeline on R730.

Hi Cristian

I am using the 2.0 release. I will try with 2.1 and get back to you.

But for additional information: I tried the same 2.0 ip_pipeline application on a desktop system which has a single-socket Intel(R) Core(TM) i5-4440 CPU @ 3.10GHz (4 cores). The NIC is the same i350.

On this machine I am sending packets on 4 ports at 1Gbps full duplex and I get 4Gbps throughput with no drops.

The difference between the two systems is the processor (speed, cores) and the number of sockets. Is the processor speed reducing the performance of DPDK so drastically, from 4Gbps to something under 0.5Gbps? This is confusing!

regards
husainee

On 09/09/2015 05:09 PM, Dumitrescu, Cristian wrote:
Hi Husainee,

Looking at your config file, it looks like you are using an old DPDK release prior to 2.1. Can you please try the same simple test in your environment with the latest DPDK 2.1 release?

We did a lot of work on the ip_pipeline application in DPDK release 2.1; we basically rewrote large parts of it, including the parser, checks, run-time, library of pipelines, etc. The format of the config file has been improved a lot, and you should be able to adapt your config file to the latest syntax very quickly.

Btw, your config file is not really equivalent to l2fwd, as you are using two CPU cores connected through software rings rather than a single core, as l2fwd does.

Here is an equivalent DPDK 2.1 config file using two cores connected through software rings (port 0 -> port 1, port 1 -> port 0, port 2 -> port 3, port 3 -> port 2):

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
pktq_out = SWQ0 SWQ1 SWQ2 SWQ3

[PIPELINE2]
type = PASS-THROUGH
core = 2 ; you can also place PIPELINE2 on the same core as PIPELINE1: core = 1
pktq_in = SWQ1 SWQ0 SWQ3 SWQ2
pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
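; Note: a PASS-THROUGH pipeline connects pktq_in[i] to pktq_out[i], so the
; swapped SWQ order in PIPELINE2's pktq_in is what crosses the ports:
; RXQ0 -> SWQ0 -> TXQ1, RXQ1 -> SWQ1 -> TXQ0, and likewise for ports 2/3.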

Here is a config file doing similar processing with a single core, a configuration closer to l2fwd (port 0 -> port 1, port 1 -> port 0, port 2 -> port 3, port 3 -> port 2):

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
pktq_out = TXQ1.0 TXQ0.0 TXQ3.0 TXQ2.0
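
If it helps, the application can then be launched along these lines (just a sketch: config.cfg is a placeholder file name, and the exact command-line options are listed in the DPDK 2.1 sample applications guide):

./ip_pipeline -f config.cfg -p 0xf

Here -f points to the config file above and -p gives the hexadecimal mask of the NIC ports to use (0xf = ports 0-3).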

Regards,
Cristian

From: husainee [mailto:husainee.plumber@nevisnetworks.com]
Sent: Wednesday, September 9, 2015 12:47 PM
To: Dumitrescu, Cristian; dev@dpdk.org
Cc: Cao, Waterman
Subject: Re: [dpdk-dev] Random packet drops with ip_pipeline on R730.

Hi Cristian
PFA the config file.

I am sending packets from port0 and receiving on port1.

By random packet drops I mean that on every run the number of packets dropped is not the same. Here are some results:

Frame rate 1,488,095.2 fps, 64-byte packets (100% of 1000Mbps):
Run 1 - 0.0098% dropped (20-22 million packets)
Run 2 - 0.021% dropped (20-22 million packets)
Run 3 - 0.0091% dropped (20-22 million packets)

Frame rate 744,047.62 fps, 64-byte packets (50% of 1000Mbps):
Run 1 - 0.0047% dropped (20-22 million packets)
Run 2 - 0.0040% dropped (20-22 million packets)
Run 3 - 0.0040% dropped (20-22 million packets)

Frame rate 148,809.52 fps, 64-byte packets (10% of 1000Mbps):
Run 1 - 0 dropped (20-22 million packets)
Run 2 - 0 dropped (20-22 million packets)
Run 3 - 0 dropped (20-22 million packets)
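
To put those percentages in absolute terms (my arithmetic, taking ~21 million packets per run as the midpoint): 0.0098% is roughly 2,000 dropped packets, 0.021% roughly 4,400, and 0.0040% roughly 840. Small numbers, but consistently nonzero at 50% load and above.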



Following are the HW NIC setting differences between the ip_pipeline and l2fwd apps:

parameter                ip_pipeline   l2fwd
jumbo frame              1             0
hw_ip_checksum           1             0
rx_conf.wthresh          4             0
rx_conf.rx_free_thresh   64            32
tx_conf.pthresh          36            32
burst size               64            32


We tried to make the ip_pipeline settings the same as l2fwd's, but there was no change in the results.
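
For reference, here is a minimal sketch of where these knobs live in the DPDK 2.0-era C API (field names as I recall them for that release; please double-check against config_parse.c and the l2fwd source):

#include <rte_ethdev.h>

/* Values from the table above; assumption: DPDK 2.0-era field names. */
static struct rte_eth_conf port_conf = {
	.rxmode = {
		.jumbo_frame    = 1,  /* ip_pipeline: 1, l2fwd: 0 */
		.hw_ip_checksum = 1,  /* ip_pipeline: 1, l2fwd: 0 */
	},
};

static struct rte_eth_rxconf rx_conf = {
	.rx_thresh = { .wthresh = 4 },  /* ip_pipeline: 4,  l2fwd: 0  */
	.rx_free_thresh = 64,           /* ip_pipeline: 64, l2fwd: 32 */
};

static struct rte_eth_txconf tx_conf = {
	.tx_thresh = { .pthresh = 36 }, /* ip_pipeline: 36, l2fwd: 32 */
};

/* The burst size (64 vs 32) is the nb_pkts argument the application
 * passes to rte_eth_rx_burst() and rte_eth_tx_burst(). */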

I have not tried with a 10GbE NIC; I do not have 10GbE test equipment.



regards
husainee




On 09/08/2015 06:32 PM, Dumitrescu, Cristian wrote:

Hi Husainee,

Can you please explain what you mean by random packet drops? What percentage of the input packets gets dropped, does it take place on every run, does the number of dropped packets vary from run to run, etc.?

Are you also able to reproduce this issue with other NICs, e.g. a 10GbE NIC?

Can you share your config file?

Can you please double-check the low-level NIC settings between the two applications, i.e. the settings in the structures link_params_default, default_hwq_in_params and default_hwq_out_params from the ip_pipeline file config_parse.c vs. their equivalents in l2fwd? The only thing I can think of right now is that maybe one of the low-level threshold values for the Ethernet link is not tuned for your 1GbE NIC.

Regards,
Cristian

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of husainee
Sent: Tuesday, September 8, 2015 7:56 AM
To: dev@dpdk.org
Subject: [dpdk-dev] Random packet drops with ip_pipeline on R730.

Hi

I am using a Dell R730 with dual sockets. The processor in each socket is an Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz (6 cores). The CPU layout has socket 0 with cores 0,2,4,6,8,10 and socket 1 with cores 1,3,5,7,9,11. The NIC card is an i350.

Cores 2-11 are isolated using the isolcpus kernel parameter. We are running the ip_pipeline application with only the Master, RX and TX threads (Flow and Route have been removed from the cfg file). The threads are run as follows:

- Master on CPU core 2
- RX on CPU core 4
- TX on CPU core 6

64-byte packets are sent from an Ixia tester at different speeds, but we are seeing random packet drops. The same exercise was done on cores 3, 5 and 7, and the results are the same.

We tried the l2fwd app and it works fine with no packet drops.

Hugepages are 1024 x 2M per socket.

Can anyone suggest what could be the reason for these random packet drops?

regards
husainee