DPDK usage discussions
* [dpdk-users] High Packet missed rate
@ 2016-11-23 11:51 Mohammad Malihi
  2016-11-23 14:56 ` Wiles, Keith
From: Mohammad Malihi @ 2016-11-23 11:51 UTC (permalink / raw)
  To: users

Hi
I'm new to DPDK and I'm seeing a high packet drop ratio when using
pktgen and l2fwd at rates greater than 1.5 Gb/s. In order to benchmark
DPDK's forwarding capabilities at a 10 Gb/s rate, I've set up a very
simple test environment, as depicted below:

---------------------------------           ---------------------------------
|Server 1 port 0 (Intel 82599ES)|  ----->   |Server 2 port 0 (Intel 82599ES)|
---------------------------------           ---------------------------------
                                                            |
                                                 (bridge via l2fwd app)
                                                            |
---------------------------------           ---------------------------------
|Server 1 port 1 (Intel 82599ES)|  <-----   |Server 2 port 1 (Intel 82599ES)|
---------------------------------           ---------------------------------

Packets of 64 bytes are sent (from server 1) at a 10 Gb/s rate by running
pktgen with the following parameters:
   -c 0x07 -n 12 -- -P -m "1.0, 2.1"
and the following commands in interactive mode:
    set 0 rate 100
    set 0 size 64
    start 0
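
For reference (standard line-rate arithmetic, not a measurement from this
setup): a 64-byte frame occupies 64 B + 8 B preamble + 12 B inter-frame
gap = 84 B on the wire, so full rate works out to

    10^10 b/s / (84 B * 8 b/B) = ~14,880,952 packets/s (~14.88 Mpps)

which is the packet rate the receiving side must sustain per port.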

On the other side (server 2), the l2fwd app forwards the packets, launched
with the parameters:
   -c 0x07 -n 12 -- -p 0x03 -q 1 -T 10
(core 0 receives packets from port 0 and core 1 sends them out port 1)

Hardware specifications (identical for both servers):
    Processors: 2x "Intel Xeon 2690 v2" (each with 20 cores)
    NIC: "Intel 82599ES 10-Gb" with 2 interfaces (connected to a PCIe Gen 2
x8 slot -> 5 GT/s per lane)
    Memory: 264115028 KB

Also, 128 GB of hugepages are reserved in total on each server (64 GB of
1 GB hugepages per NUMA node), and all ports and the cores in use (0, 1)
are on the same NUMA node.
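
(As a sanity check, using standard PCIe arithmetic rather than a
measurement: the slot itself should not be the bottleneck, since

    8 lanes * 5 GT/s * 8b/10b encoding = 32 Gb/s usable per direction

comfortably covers two 10 Gb/s ports.)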

I've made some modifications to the l2fwd app to show the packet drop
count, by calling "rte_eth_stats_get" in the "print_stats" function and
reading the "imissed" member of "rte_eth_stats".

The results on screen show that at rates greater than 1.5 Gb/s, packet
drops (by the hardware) occur.
I wonder why packets are missed at rates below 10 Gb/s.

Thanks in advance


* Re: [dpdk-users] High Packet missed rate
  2016-11-23 11:51 [dpdk-users] High Packet missed rate Mohammad Malihi
@ 2016-11-23 14:56 ` Wiles, Keith
From: Wiles, Keith @ 2016-11-23 14:56 UTC (permalink / raw)
  To: Mohammad Malihi; +Cc: users


> On Nov 23, 2016, at 5:51 AM, Mohammad Malihi <mohammad.malihi@gmail.com> wrote:
> 
> Hi
> I'm new to DPDK and I'm seeing a high packet drop ratio when using
> pktgen and l2fwd at rates greater than 1.5 Gb/s. In order to benchmark
> DPDK's forwarding capabilities at a 10 Gb/s rate, I've set up a very
> simple test environment, as depicted below:
> 
> ---------------------------------           ---------------------------------
> |Server 1 port 0 (Intel 82599ES)|  ----->   |Server 2 port 0 (Intel 82599ES)|
> ---------------------------------           ---------------------------------
>                                                             |
>                                                  (bridge via l2fwd app)
>                                                             |
> ---------------------------------           ---------------------------------
> |Server 1 port 1 (Intel 82599ES)|  <-----   |Server 2 port 1 (Intel 82599ES)|
> ---------------------------------           ---------------------------------
> 
> Packets of 64 bytes are sent (from server 1) at a 10 Gb/s rate by running
> pktgen with the following parameters:
>   -c 0x07 -n 12 -- -P -m "1.0, 2.1"
> and the following commands in interactive mode:
>    set 0 rate 100
>    set 0 size 64
>    start 0
> 
> On the other side (server 2), the l2fwd app forwards the packets, launched
> with the parameters:
>   -c 0x07 -n 12 -- -p 0x03 -q 1 -T 10
> (core 0 receives packets from port 0 and core 1 sends them out port 1)

What is the core mapping here? Are the two cores on different sockets?

You can use the python script in the tools directory to print the info
out. We should not see drops at that rate unless the packets are moving
between sockets.
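
(For reference: the script meant here is presumably tools/cpu_layout.py in
a DPDK 16.11 tree. A typical check, using a made-up PCI address for the
NIC, would be:

    $ python tools/cpu_layout.py        # lcore -> socket/core mapping
    $ cat /sys/bus/pci/devices/0000:01:00.0/numa_node   # NIC's NUMA node

If the socket reported for lcores 0 and 1 differs from the NIC's
numa_node, every packet crosses the inter-socket link.)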

> 
> Hardware specifications (identical for both servers):
>    Processors: 2x "Intel Xeon 2690 v2" (each with 20 cores)
>    NIC: "Intel 82599ES 10-Gb" with 2 interfaces (connected to a PCIe Gen 2
> x8 slot -> 5 GT/s per lane)
>    Memory: 264115028 KB
> 
> Also, 128 GB of hugepages are reserved in total on each server (64 GB of
> 1 GB hugepages per NUMA node), and all ports and the cores in use (0, 1)
> are on the same NUMA node.
> 
> I've made some modifications to the l2fwd app to show the packet drop
> count, by calling "rte_eth_stats_get" in the "print_stats" function and
> reading the "imissed" member of "rte_eth_stats".
> 
> The results on screen show that at rates greater than 1.5 Gb/s, packet
> drops (by the hardware) occur.
> I wonder why packets are missed at rates below 10 Gb/s.
> 
> Thanks in advance

Regards,
Keith

