DPDK usage discussions
* [dpdk-users] Packet losses using DPDK
@ 2017-05-09  7:53 David Fernandes
  0 siblings, 0 replies; 9+ messages in thread
From: David Fernandes @ 2017-05-09  7:53 UTC (permalink / raw)
  To: users

Hi !

I am working with MoonGen, which is a fully scriptable packet generator 
built on DPDK.
(→ https://github.com/emmericp/MoonGen)

The system on which I perform the tests has the following characteristics:

CPU: Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
NIC: Intel X540-AT2 with 2x 10 GbE ports
OS: Ubuntu Server 16.04 (kernel 4.4)

I wrote a MoonGen script which requests DPDK to transmit packets from 
one physical port and to receive them on the second physical port. The 
two physical ports are directly connected with an RJ-45 Cat 6 cable.
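
The script itself is Lua on top of MoonGen, but at the DPDK level the 
test boils down to roughly the following loop (a minimal C sketch, not 
the actual script; the port ids, burst size and packet construction are 
assumptions, and both ports are assumed to be configured and started 
already):

    /* Sketch only: port 0 sends a fixed number of packets,
     * port 1 drains whatever arrives. Assumes queue setup and
     * rte_eth_dev_start() were already done for both ports. */
    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    static void tx_rx_loop(struct rte_mempool *pool, uint64_t total)
    {
        struct rte_mbuf *tx[BURST], *rx[BURST];
        uint64_t sent = 0, received = 0;

        while (sent < total) {
            if (rte_pktmbuf_alloc_bulk(pool, tx, BURST) == 0) {
                /* ... fill in Ethernet/IP headers and payload here ... */
                uint16_t n = rte_eth_tx_burst(0, 0, tx, BURST);
                sent += n;
                for (uint16_t i = n; i < BURST; i++)
                    rte_pktmbuf_free(tx[i]);   /* not accepted by the NIC */
            }
            uint16_t m = rte_eth_rx_burst(1, 0, rx, BURST);
            received += m;
            for (uint16_t i = 0; i < m; i++)
                rte_pktmbuf_free(rx[i]);
        }
        /* a real test keeps draining port 1 for a while after the last send */
        printf("sent=%" PRIu64 " received=%" PRIu64 "\n", sent, received);
    }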

The issue is that I run the same test, with exactly the same script and 
the same parameters, several times, and the results show random 
behaviour. For most of the runs there are no losses, but for some of 
them I observe packet losses. The percentage of lost packets is very 
variable, and it happens even when the packet rate is very low.

Some examples of randomly failed tests:

# 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) → 
10,170 lost packets

# 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) → 
ALL packets lost


I tried the following system modifications, without success:

# BIOS parameters:

     Hyper-threading: enabled (because the machine has only 2 cores)
     Multi-processor: enabled
     Virtualization Technology (VT-x): disabled
     Virtualization Technology for Directed I/O (VT-d): disabled
     Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors): disabled
     NUMA unavailable

# used isolcpus to isolate the cores in charge of transmission and 
reception

# hugepage size = 1048576 kB (i.e. 1 GB hugepages)

# size of the descriptor rings: tried with Tx = 512 and Rx = 128 
descriptors, and also with Tx = 4096 and Rx = 4096 descriptors (see the 
queue setup sketch after this list)

# Tested with 2 different X540-T2 NIC units

# I also tested everything on a Dell FC430, which has an Intel Xeon 
E5-2660 v3 CPU @ 2.6 GHz with 10 cores and 2 threads/core (tested with 
and without hyper-threading)
     → same results, or even worse
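
For reference, the descriptor ring sizes mentioned above are applied 
when the tx and rx queues are set up. A minimal sketch of what that 
looks like at the DPDK level (the port id, socket and mempool are 
assumptions, not the real MoonGen configuration):

    /* Sketch only: one tx and one rx queue on port 0 with the ring
     * sizes under test. Assumes rte_eth_dev_configure() was already
     * called for this port with 1 rx and 1 tx queue. */
    #include <rte_ethdev.h>

    static int setup_queues(struct rte_mempool *pool)
    {
        const uint16_t port = 0;        /* assumed port id */
        const uint16_t nb_txd = 4096;   /* tx ring size under test */
        const uint16_t nb_rxd = 4096;   /* rx ring size under test */
        int socket = rte_eth_dev_socket_id(port);
        int ret;

        ret = rte_eth_tx_queue_setup(port, 0, nb_txd, socket, NULL);
        if (ret < 0)
            return ret;
        return rte_eth_rx_queue_setup(port, 0, nb_rxd, socket, NULL, pool);
    }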


Remark concerning the NIC stats:
      I used the rte_eth_stats struct to get more information about the 
losses, and I observed that in some cases, when there are packet 
losses, the ierrors value is > 0 and also ierrors + imissed + ipackets 
< opackets. In other cases I get ierrors = 0 and imissed + ipackets = 
opackets, which makes more sense.
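
A minimal sketch of reading these counters with the standard 
rte_eth_stats_get() call and checking the balance described above (the 
port ids are assumptions):

    /* Sketch: read per-port counters after a run and check whether
     * tx port 0 and rx port 1 balance out. */
    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    static void check_counters(void)
    {
        struct rte_eth_stats tx_stats, rx_stats;

        rte_eth_stats_get(0, &tx_stats);   /* transmitting port */
        rte_eth_stats_get(1, &rx_stats);   /* receiving port */

        printf("tx: opackets=%" PRIu64 " oerrors=%" PRIu64 "\n",
               tx_stats.opackets, tx_stats.oerrors);
        printf("rx: ipackets=%" PRIu64 " imissed=%" PRIu64
               " ierrors=%" PRIu64 "\n",
               rx_stats.ipackets, rx_stats.imissed, rx_stats.ierrors);

        /* expected: everything sent is either received, dropped for
         * lack of rx descriptors (imissed) or counted as an rx error */
        uint64_t accounted = rx_stats.ipackets + rx_stats.imissed
                             + rx_stats.ierrors;
        if (accounted != tx_stats.opackets)
            printf("counter mismatch: sent=%" PRIu64
                   " accounted=%" PRIu64 "\n",
                   tx_stats.opackets, accounted);
    }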

What could be the origin of this erroneous packet counting?

Do you have any explanation for this behaviour?

Thanks in advance.

David

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2017-05-22 14:12 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-09  7:53 [dpdk-users] Packet losses using DPDK David Fernandes
2017-05-12 15:45 dfernandes
2017-05-12 16:18 ` Wiles, Keith
2017-05-17  7:53   ` dfernandes
2017-05-22  9:40     ` dfernandes
2017-05-22 12:10       ` Andriy Berestovskyy
2017-05-22 14:12       ` Wiles, Keith
2017-05-15  8:25 ` Andriy Berestovskyy
2017-05-15 13:49   ` dfernandes
