DPDK usage discussions
From: Mohammad Malihi <mohammad.malihi@gmail.com>
To: users@dpdk.org
Subject: [dpdk-users] High Packet missed rate
Date: Wed, 23 Nov 2016 15:21:37 +0330	[thread overview]
Message-ID: <CADJHsVqa1XW+WDGYit0TzzXdC9yc1oHb3c9bUHiOHTpqUrd5PQ@mail.gmail.com> (raw)

Hi
I'm new to DPDK and I'm seeing a high packet drop ratio when using pktgen
and l2fwd at rates greater than 1.5 Gb/s (generated with pktgen).
In order to benchmark DPDK's forwarding capabilities at a 10 Gb/s rate,
I've set up a very simple test environment, as depicted below:

---------------------------------         ---------------------------------
|Server 1 port 0 (Intel 82599ES)| ----->  |Server 2 port 0 (Intel 82599ES)|
---------------------------------         ---------------------------------
                                                          |
                                                (bridge via l2fwd app)
                                                          |
---------------------------------         ---------------------------------
|Server 1 port 1 (Intel 82599ES)| <-----  |Server 2 port 1 (Intel 82599ES)|
---------------------------------         ---------------------------------

Sending packets (on server 1) with size 64 bytes at a rate of 10 Gb/s can be
done by running pktgen with the following parameters:
   -c 0x07 -n 12 -- -P -m "1.0, 2.1"
and following commands in interactive mode :
    set 0 rate 100
    set 0 size 64
    start 0

On the other side (server 2), the l2fwd app forwards packets with the parameters:
   -c 0x07 -n 12 -- -p 0x03 -q 1 -T 10
(core 0 receives packets from port 0 and core 1 sends them using port 1)

Hardware specifications (same for both servers):
    Processors : 2x Intel Xeon 2690 v2 (each of them has 20 cores).
    NIC : Intel 82599ES 10-GbE with 2 interfaces (connected to x8 PCIe Gen 2 -> 5 GT/s)
    Memory : 264115028 KB

Also, hugepages total 128 GB on each server (64 GB of 1 GB hugepages per
NUMA node), and all ports and the used cores (0, 1) are on the same NUMA node.

I've made some modifications to the l2fwd app to show the packet drop count
by calling "rte_eth_stats_get" in the "print_stats" function and reading the
"imissed" member of "struct rte_eth_stats".
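For reference, a minimal sketch of that change (assuming the DPDK 16.x ethdev API, where port IDs are uint8_t; the helper name print_drop_stats is illustrative, as in l2fwd this logic would sit inside print_stats itself; it needs the DPDK headers and libraries to build):

```c
#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>   /* DPDK ethdev API; not standalone-compilable */

/* Illustrative helper; in l2fwd this would live inside print_stats(). */
static void
print_drop_stats(uint8_t portid)
{
	struct rte_eth_stats stats;

	/* Query the NIC's hardware counters for this port. */
	if (rte_eth_stats_get(portid, &stats) != 0)
		return;

	/* imissed counts packets dropped by the hardware because the RX
	 * descriptor rings were full, i.e. software did not drain them
	 * fast enough. */
	printf("Port %" PRIu8 ": rx %" PRIu64 " tx %" PRIu64
	       " imissed %" PRIu64 "\n",
	       portid, stats.ipackets, stats.opackets, stats.imissed);
}
```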

The on-screen results show that at rates greater than 1.5 Gb/s, packets are
dropped (by the hardware).
I wonder why packets are being missed at rates below 10 Gb/s.

Thanks in advance


Thread overview: 2+ messages
2016-11-23 11:51 Mohammad Malihi [this message]
2016-11-23 14:56 ` Wiles, Keith
