DPDK usage discussions
* [dpdk-users] Packet losses using DPDK
@ 2017-05-09  7:53 David Fernandes
  0 siblings, 0 replies; 9+ messages in thread
From: David Fernandes @ 2017-05-09  7:53 UTC (permalink / raw)
  To: users

Hi !

I am working with MoonGen, which is a fully scriptable packet generator 
built on DPDK.
(→ https://github.com/emmericp/MoonGen)

The system on which I perform tests has the following characteristics :

CPU : Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
NIC : X540-AT2 with 2x 10GbE ports
OS : Linux Ubuntu Server 16.04 (kernel 4.4)

I wrote a MoonGen script which asks DPDK to transmit packets from one 
physical port and to receive them on the second physical port. The two 
physical ports are directly connected with an RJ-45 Cat 6 cable.
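
To make the setup concrete, here is roughly what the script does,
expressed against the plain DPDK C API rather than MoonGen's Lua
wrapper. This is only an illustrative sketch, not my actual script: the
port/mempool initialisation is omitted and the helper name, burst size
and packet length are made up for the example.

/* Illustrative sketch only: send a burst on one port, drain the other. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

static void send_and_receive(uint16_t tx_port, uint16_t rx_port,
                             struct rte_mempool *mp, uint16_t pkt_len)
{
    struct rte_mbuf *tx_bufs[BURST], *rx_bufs[BURST];
    uint16_t n = 0;

    /* Build a burst of fixed-size packets (payload left untouched). */
    while (n < BURST) {
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        if (m == NULL)
            break;
        if (rte_pktmbuf_append(m, pkt_len) == NULL) {
            rte_pktmbuf_free(m);
            break;
        }
        tx_bufs[n++] = m;
    }

    /* Transmit on the first physical port ... */
    uint16_t sent = rte_eth_tx_burst(tx_port, 0, tx_bufs, n);
    for (uint16_t i = sent; i < n; i++)
        rte_pktmbuf_free(tx_bufs[i]);       /* free what the NIC refused */

    /* ... and receive whatever arrived on the second one. */
    uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, rx_bufs, BURST);
    for (uint16_t i = 0; i < nb_rx; i++)
        rte_pktmbuf_free(rx_bufs[i]);
}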

The issue is that when I run the same test several times, with exactly 
the same script and the same parameters, the results are random. Most of 
the tests show no losses, but in some of them I observe packet losses. 
The percentage of lost packets varies widely, and losses happen even 
when the packet rate is very low.

Some examples of randomly failing tests :

# 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) → 
10,170 lost packets

# 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) → 
ALL packets lost


I tried the following system modifications, without success :

# BIOS parameters :

     Hyperthreading : enabled (because the machine has only 2 cores)
     Multi-processor : enabled
     Virtualization Technology (VT-x) : disabled
     Virtualization Technology for Directed I/O (VT-d) : disabled
     Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors) : disabled
     NUMA : unavailable

# use of isolcpus to isolate the cores in charge of transmission and 
reception

# hugepage size = 1048576 kB (1 GB hugepages)

# descriptor ring sizes : tried Tx = 512 / Rx = 128 descriptors, and 
also Tx = 4096 / Rx = 4096 descriptors (see the sketch after this list)

# Tested with 2 different X540-T2 NIC units

# I also ran everything on a Dell FC430, which has an Intel Xeon E5-2660 
v3 CPU @ 2.6 GHz (10 cores, 2 threads/core), tested with and without 
hyper-threading
     → same results, or even worse
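
For reference, this is roughly how those descriptor counts map onto the
plain DPDK API (an illustrative sketch, not taken from MoonGen; the
queue layout and helper name are made up):

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* One Rx and one Tx queue per port, default port configuration. */
static int setup_port(uint16_t port_id, struct rte_mempool *mp,
                      uint16_t nb_rxd, uint16_t nb_txd)
{
    struct rte_eth_conf port_conf = { 0 };
    int ret;

    ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
    if (ret < 0)
        return ret;

    /* Rx ring size: 128 and 4096 descriptors were tried. */
    ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
                                 rte_eth_dev_socket_id(port_id), NULL, mp);
    if (ret < 0)
        return ret;

    /* Tx ring size: 512 and 4096 descriptors were tried. */
    ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
                                 rte_eth_dev_socket_id(port_id), NULL);
    if (ret < 0)
        return ret;

    return rte_eth_dev_start(port_id);
}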


Remark concerning the NIC stats :
      I used the rte_eth_stats struct to get more information about the 
losses. I observed that in some cases, when there are packet losses, the 
ierrors value is > 0 and also ierrors + imissed + ipackets < opackets. 
In other cases I get ierrors = 0 and imissed + ipackets = opackets, 
which makes more sense.
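
This is roughly how I read those counters (a minimal sketch against the
standard rte_ethdev API; the helper name and the final check are only
illustrative):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void check_counters(uint16_t tx_port, uint16_t rx_port)
{
    struct rte_eth_stats tx, rx;

    rte_eth_stats_get(tx_port, &tx);
    rte_eth_stats_get(rx_port, &rx);

    /* Everything sent should show up on the receive side as received,
     * missed (Rx ring full) or an Rx error. */
    uint64_t accounted = rx.ipackets + rx.imissed + rx.ierrors;

    printf("opackets=%" PRIu64 " ipackets=%" PRIu64
           " imissed=%" PRIu64 " ierrors=%" PRIu64 "\n",
           tx.opackets, rx.ipackets, rx.imissed, rx.ierrors);

    if (accounted < tx.opackets)
        printf("unaccounted packets: %" PRIu64 "\n",
               tx.opackets - accounted);
}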

What could be the origin of that erroneous packet counting?

Do you have any explanation for that behaviour?

Thanks in advance.

David


* Re: [dpdk-users] Packet losses using DPDK
  2017-05-22  9:40     ` dfernandes
  2017-05-22 12:10       ` Andriy Berestovskyy
@ 2017-05-22 14:12       ` Wiles, Keith
  1 sibling, 0 replies; 9+ messages in thread
From: Wiles, Keith @ 2017-05-22 14:12 UTC (permalink / raw)
  To: dfernandes; +Cc: users


> On May 22, 2017, at 4:40 AM, dfernandes@toulouse.viveris.com wrote:
> 
> Hi !
> 
> I performed many tests using Pktgen and it seems to work much better. However, I observed that one of the tests showed that 2 packets were dropped. In this test I sent packets between the 2 physical ports in bidirectional mode during 24 hours. The packets size was 450 bytes and the rate in both ports was 1500 Mbps.
> 
> The port stats I got are the following :
> 
> 
> ** Port 0 **  Tx: 34481474912. Rx: 34481474846. Dropped: 2
> ** Port 1 **  Tx: 34481474848. Rx: 34481474912. Dropped: 0
> 
> DEBUG portStats = {
>  [1] = {
>    ["ipackets"] = 34481474912,
>    ["ierrors"] = 0,
>    ["rx_nombuf"] = 0,
>    ["ibytes"] = 15378737810752,
>    ["oerrors"] = 0,
>    ["opackets"] = 34481474848,
>    ["obytes"] = 15378737782208,
>  },
>  [0] = {
>    ["ipackets"] = 34481474846,
>    ["ierrors"] = 1,
>    ["rx_nombuf"] = 0,
>    ["ibytes"] = 15378737781316,
>    ["oerrors"] = 0,
>    ["opackets"] = 34481474912,
>    ["obytes"] = 15378737810752,
>  },
>  ["n"] = 2,
> }
> 
> So 2 packets were dropped by port 0 and I see that "ierrors" counter has a value of 1. Do you know what does this counter represent ? And what could it be interpreted ?
> By the way, I performed as well the same test changing the packet size to 1518 bytes and the rate to 4500 Mbps (on each port) and 0 packets were dropped.


The ierrors counter combines several different error types, so it is hard to determine which one is the problem. The most commonly reported cause is a missed frame, which is the most likely problem here. The reason for the dropped frames could be any number of system-related issues, e.g. Linux taking too much time to handle some higher-priority interrupt. By increasing the frame size you give the host time to recover, assuming a system pause is what caused the loss.
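
If it helps, something along these lines (a rough sketch, not Pktgen
code) dumps the driver's extended stats, which break that combined
ierrors value down into the underlying hardware counters (CRC errors,
missed packets, and so on):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void dump_xstats(uint16_t port_id)
{
    int n = rte_eth_xstats_get(port_id, NULL, 0);   /* query count only */
    if (n <= 0)
        return;

    struct rte_eth_xstat *xstats = calloc(n, sizeof(*xstats));
    struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));

    if (xstats != NULL && names != NULL &&
        rte_eth_xstats_get(port_id, xstats, n) == n &&
        rte_eth_xstats_get_names(port_id, names, n) == n) {
        /* Print only the non-zero counters. */
        for (int i = 0; i < n; i++)
            if (xstats[i].value != 0)
                printf("%s = %" PRIu64 "\n",
                       names[xstats[i].id].name, xstats[i].value);
    }

    free(xstats);
    free(names);
}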

> 
> David
> 
> 
> 
> Le 17.05.2017 09:53, dfernandes@toulouse.viveris.com a écrit :
>> Thanks for your response !
>> I have installed Pktgen and I will perform some tests. So far it seems
>> to work fine. I'll keep you informed. Thanks again.
>> David
>> Le 12.05.2017 18:18, Wiles, Keith a écrit :
>>>> On May 12, 2017, at 10:45 AM, dfernandes@toulouse.viveris.com wrote:
>>>> Hi !
>>>> I am working with MoonGen which is a fully scriptable packet generator build on DPDK.
>>>> (→ https://github.com/emmericp/MoonGen)
>>>> The system on which I perform tests has the following characteristics :
>>>> CPU : Intel Core i3-⁠⁠6100 (3.70 GHz, 2 cores, 2 threads/⁠⁠core)
>>>> NIC : X540-⁠⁠AT2 with 2x10Gbe ports
>>>> OS : Linux Ubuntu Server 16.04 (kernel 4.4)
>>>> I coded a MoonGen script which requests DPDK to transmit packets from one physical port and to receive them at the second physical port. The 2 physical ports are directly connected with an RJ-45 cat6 cable.
>>>> The issue is that I perform the same test with exactly the same script and the same parameters several times and the results show a random behavior. For most of the tests there is no losses but for some of them I observe packet losses. The percentage of lost packets is very variable. It happens even when the packet rate is very low.
>>>> Some examples of random failed tests :
>>>> # 1,000,000 packets sent (packets size = 124 bytes, rate = 76 Mbps) → 10170 lost packets
>>>> # 3,000,000 packets sent (packets size = 450 bytes, rate = 460 Mbps) → ALL packets lost
>>>> I tested the following system modifications without success :
>>>> # BIOS parameters :
>>>>   Hyperthreading : enable (because the machine has only 2 cores)
>>>>   Multi-⁠⁠⁠processor : enable
>>>>   Virtualization Technology (VTx) : disable
>>>>   Virtualization Technology for Directed I/⁠⁠⁠O (VTd) : disable
>>>>   Allow PCIe/⁠⁠⁠PCI SERR# Interrupt (=PCIe System Errors) : disable
>>>>   NUMA unavailable
>>>> # use of isolcpus in order to isolate the cores which are in charge of transmission and reception
>>>> # hugepages size = 1048576 kB
>>>> # size of buffer descriptors : tried with Tx = 512 descriptors and Rx = 128 descriptors and also with  Tx = 4096 descriptors and Rx = 4096  descriptors
>>>> # Tested with 2 different X540-⁠⁠T2 NICs units
>>>> # I tested all with a Dell FC430 which has a CPU Intel Xeon E5-2660 v3 @ 2.6GHz with 10 Cores and 2threads/Core (tested with and without hyper-threading)
>>>>   → same results and even worse
>>>> Remark concerning the NIC stats :
>>>>    I used the rte_eth_stats struct in order to get more information about the losses and I observed that in some cases, when there is packet losses,  ierrors value is > 0 and also ierrors + imissed + ipackets < opackets. In other cases I get ierrors = 0 and  imissed + ipackets = opackets which has more sense.
>>>> What could be the origin of that erroneous packets counting?
>>>> Do you have any explanation about that behaviour ?
>>> Not knowing MoonGen at all other then a brief look at the source I may
>>> not be much help, but I have a few ideas to help locate the problem.
>>> Try using testpmd in tx-only mode or try Pktgen to see if you get the
>>> same problem. I hope this would narrow down the problem to a specific
>>> area. As we know DPDK works if correctly coded and testpmd/pktgen
>>> work.
>>>> Thanks in advance.
>>>> David
>>> Regards,
>>> Keith
> 

Regards,
Keith



* Re: [dpdk-users] Packet losses using DPDK
  2017-05-22  9:40     ` dfernandes
@ 2017-05-22 12:10       ` Andriy Berestovskyy
  2017-05-22 14:12       ` Wiles, Keith
  1 sibling, 0 replies; 9+ messages in thread
From: Andriy Berestovskyy @ 2017-05-22 12:10 UTC (permalink / raw)
  To: dfernandes; +Cc: Wiles, Keith, users

Hi,
Please have a look at https://en.wikipedia.org/wiki/High_availability
I was trying to calculate your link availability, but my Ubuntu
calculator gives me 0 for  2 / 34 481 474 846 ;)

Most probably you dropped those packets during the start/stop.
ierrors is what your NIC considers an error Ethernet frame
(bad checksums, runts, giants, etc.).
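
One way to rule the start/stop out of the measurement is to clear the
counters only after the ports have been started and the links have
settled, e.g. (a sketch, names are illustrative):

#include <rte_cycles.h>
#include <rte_ethdev.h>

static void clear_counters_after_startup(uint16_t nb_ports)
{
    /* Let the links settle after rte_eth_dev_start() so that anything
     * mangled while the link trains has already been counted ... */
    rte_delay_ms(1000);

    /* ... then zero the counters so the measurement starts clean. */
    for (uint16_t port = 0; port < nb_ports; port++)
        rte_eth_stats_reset(port);
}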

Regards,
Andriy

On Mon, May 22, 2017 at 11:40 AM,  <dfernandes@toulouse.viveris.com> wrote:
> Hi !
>
> I performed many tests using Pktgen and it seems to work much better.
> However, I observed that one of the tests showed that 2 packets were
> dropped. In this test I sent packets between the 2 physical ports in
> bidirectional mode during 24 hours. The packets size was 450 bytes and the
> rate in both ports was 1500 Mbps.
>
> The port stats I got are the following :
>
>
> ** Port 0 **  Tx: 34481474912. Rx: 34481474846. Dropped: 2
> ** Port 1 **  Tx: 34481474848. Rx: 34481474912. Dropped: 0
>
> DEBUG portStats = {
>   [1] = {
>     ["ipackets"] = 34481474912,
>     ["ierrors"] = 0,
>     ["rx_nombuf"] = 0,
>     ["ibytes"] = 15378737810752,
>     ["oerrors"] = 0,
>     ["opackets"] = 34481474848,
>     ["obytes"] = 15378737782208,
>   },
>   [0] = {
>     ["ipackets"] = 34481474846,
>     ["ierrors"] = 1,
>     ["rx_nombuf"] = 0,
>     ["ibytes"] = 15378737781316,
>     ["oerrors"] = 0,
>     ["opackets"] = 34481474912,
>     ["obytes"] = 15378737810752,
>   },
>   ["n"] = 2,
> }
>
> So 2 packets were dropped by port 0 and I see that "ierrors" counter has a
> value of 1. Do you know what does this counter represent ? And what could it
> be interpreted ?
> By the way, I performed as well the same test changing the packet size to
> 1518 bytes and the rate to 4500 Mbps (on each port) and 0 packets were
> dropped.
>
> David
>
>
>
>
> Le 17.05.2017 09:53, dfernandes@toulouse.viveris.com a écrit :
>>
>> Thanks for your response !
>>
>> I have installed Pktgen and I will perform some tests. So far it seems
>> to work fine. I'll keep you informed. Thanks again.
>>
>> David
>>
>> Le 12.05.2017 18:18, Wiles, Keith a écrit :
>>>>
>>>> On May 12, 2017, at 10:45 AM, dfernandes@toulouse.viveris.com wrote:
>>>>
>>>> Hi !
>>>>
>>>> I am working with MoonGen which is a fully scriptable packet generator
>>>> build on DPDK.
>>>> (→ https://github.com/emmericp/MoonGen)
>>>>
>>>> The system on which I perform tests has the following characteristics :
>>>>
>>>> CPU : Intel Core i3-⁠⁠6100 (3.70 GHz, 2 cores, 2 threads/⁠⁠core)
>>>> NIC : X540-⁠⁠AT2 with 2x10Gbe ports
>>>> OS : Linux Ubuntu Server 16.04 (kernel 4.4)
>>>>
>>>> I coded a MoonGen script which requests DPDK to transmit packets from
>>>> one physical port and to receive them at the second physical port. The 2
>>>> physical ports are directly connected with an RJ-45 cat6 cable.
>>>>
>>>> The issue is that I perform the same test with exactly the same script
>>>> and the same parameters several times and the results show a random
>>>> behavior. For most of the tests there is no losses but for some of them I
>>>> observe packet losses. The percentage of lost packets is very variable. It
>>>> happens even when the packet rate is very low.
>>>>
>>>> Some examples of random failed tests :
>>>>
>>>> # 1,000,000 packets sent (packets size = 124 bytes, rate = 76 Mbps) →
>>>> 10170 lost packets
>>>>
>>>> # 3,000,000 packets sent (packets size = 450 bytes, rate = 460 Mbps) →
>>>> ALL packets lost
>>>>
>>>>
>>>> I tested the following system modifications without success :
>>>>
>>>> # BIOS parameters :
>>>>
>>>>    Hyperthreading : enable (because the machine has only 2 cores)
>>>>    Multi-⁠⁠⁠processor : enable
>>>>    Virtualization Technology (VTx) : disable
>>>>    Virtualization Technology for Directed I/⁠⁠⁠O (VTd) : disable
>>>>    Allow PCIe/⁠⁠⁠PCI SERR# Interrupt (=PCIe System Errors) : disable
>>>>    NUMA unavailable
>>>>
>>>> # use of isolcpus in order to isolate the cores which are in charge of
>>>> transmission and reception
>>>>
>>>> # hugepages size = 1048576 kB
>>>>
>>>> # size of buffer descriptors : tried with Tx = 512 descriptors and Rx =
>>>> 128 descriptors and also with  Tx = 4096 descriptors and Rx = 4096
>>>> descriptors
>>>>
>>>> # Tested with 2 different X540-⁠⁠T2 NICs units
>>>>
>>>> # I tested all with a Dell FC430 which has a CPU Intel Xeon E5-2660 v3 @
>>>> 2.6GHz with 10 Cores and 2threads/Core (tested with and without
>>>> hyper-threading)
>>>>    → same results and even worse
>>>>
>>>>
>>>> Remark concerning the NIC stats :
>>>>     I used the rte_eth_stats struct in order to get more information
>>>> about the losses and I observed that in some cases, when there is packet
>>>> losses,  ierrors value is > 0 and also ierrors + imissed + ipackets <
>>>> opackets. In other cases I get ierrors = 0 and  imissed + ipackets =
>>>> opackets which has more sense.
>>>>
>>>> What could be the origin of that erroneous packets counting?
>>>>
>>>> Do you have any explanation about that behaviour ?
>>>
>>>
>>> Not knowing MoonGen at all other then a brief look at the source I may
>>> not be much help, but I have a few ideas to help locate the problem.
>>>
>>> Try using testpmd in tx-only mode or try Pktgen to see if you get the
>>> same problem. I hope this would narrow down the problem to a specific
>>> area. As we know DPDK works if correctly coded and testpmd/pktgen
>>> work.
>>>
>>>>
>>>> Thanks in advance.
>>>>
>>>> David
>>>
>>>
>>> Regards,
>>> Keith
>
>



-- 
Andriy Berestovskyy


* Re: [dpdk-users] Packet losses using DPDK
  2017-05-17  7:53   ` dfernandes
@ 2017-05-22  9:40     ` dfernandes
  2017-05-22 12:10       ` Andriy Berestovskyy
  2017-05-22 14:12       ` Wiles, Keith
  0 siblings, 2 replies; 9+ messages in thread
From: dfernandes @ 2017-05-22  9:40 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

Hi !

I performed many tests using Pktgen and it seems to work much better. 
However, in one of the tests I observed that 2 packets were dropped. In 
this test I sent packets between the two physical ports in bidirectional 
mode for 24 hours. The packet size was 450 bytes and the rate on both 
ports was 1500 Mbps.

The port stats I got are the following :


** Port 0 **  Tx: 34481474912. Rx: 34481474846. Dropped: 2
** Port 1 **  Tx: 34481474848. Rx: 34481474912. Dropped: 0

DEBUG portStats = {
   [1] = {
     ["ipackets"] = 34481474912,
     ["ierrors"] = 0,
     ["rx_nombuf"] = 0,
     ["ibytes"] = 15378737810752,
     ["oerrors"] = 0,
     ["opackets"] = 34481474848,
     ["obytes"] = 15378737782208,
   },
   [0] = {
     ["ipackets"] = 34481474846,
     ["ierrors"] = 1,
     ["rx_nombuf"] = 0,
     ["ibytes"] = 15378737781316,
     ["oerrors"] = 0,
     ["opackets"] = 34481474912,
     ["obytes"] = 15378737810752,
   },
   ["n"] = 2,
}

So 2 packets were dropped by port 0 (port 1 transmitted 34481474848 
packets but port 0 received only 34481474846), and I see that the 
"ierrors" counter has a value of 1. Do you know what this counter 
represents, and how it should be interpreted?
By the way, I also ran the same test with the packet size changed to 
1518 bytes and the rate to 4500 Mbps (on each port), and 0 packets were 
dropped.

David



Le 17.05.2017 09:53, dfernandes@toulouse.viveris.com a écrit :
> Thanks for your response !
> 
> I have installed Pktgen and I will perform some tests. So far it seems
> to work fine. I'll keep you informed. Thanks again.
> 
> David
> 
> Le 12.05.2017 18:18, Wiles, Keith a écrit :
>>> On May 12, 2017, at 10:45 AM, dfernandes@toulouse.viveris.com wrote:
>>> 
>>> Hi !
>>> 
>>> I am working with MoonGen which is a fully scriptable packet 
>>> generator build on DPDK.
>>> (→ https://github.com/emmericp/MoonGen)
>>> 
>>> The system on which I perform tests has the following characteristics 
>>> :
>>> 
>>> CPU : Intel Core i3-⁠⁠6100 (3.70 GHz, 2 cores, 2 threads/⁠⁠core)
>>> NIC : X540-⁠⁠AT2 with 2x10Gbe ports
>>> OS : Linux Ubuntu Server 16.04 (kernel 4.4)
>>> 
>>> I coded a MoonGen script which requests DPDK to transmit packets from 
>>> one physical port and to receive them at the second physical port. 
>>> The 2 physical ports are directly connected with an RJ-45 cat6 cable.
>>> 
>>> The issue is that I perform the same test with exactly the same 
>>> script and the same parameters several times and the results show a 
>>> random behavior. For most of the tests there is no losses but for 
>>> some of them I observe packet losses. The percentage of lost packets 
>>> is very variable. It happens even when the packet rate is very low.
>>> 
>>> Some examples of random failed tests :
>>> 
>>> # 1,000,000 packets sent (packets size = 124 bytes, rate = 76 Mbps) → 
>>> 10170 lost packets
>>> 
>>> # 3,000,000 packets sent (packets size = 450 bytes, rate = 460 Mbps) 
>>> → ALL packets lost
>>> 
>>> 
>>> I tested the following system modifications without success :
>>> 
>>> # BIOS parameters :
>>> 
>>>    Hyperthreading : enable (because the machine has only 2 cores)
>>>    Multi-⁠⁠⁠processor : enable
>>>    Virtualization Technology (VTx) : disable
>>>    Virtualization Technology for Directed I/⁠⁠⁠O (VTd) : disable
>>>    Allow PCIe/⁠⁠⁠PCI SERR# Interrupt (=PCIe System Errors) : disable
>>>    NUMA unavailable
>>> 
>>> # use of isolcpus in order to isolate the cores which are in charge 
>>> of transmission and reception
>>> 
>>> # hugepages size = 1048576 kB
>>> 
>>> # size of buffer descriptors : tried with Tx = 512 descriptors and Rx 
>>> = 128 descriptors and also with  Tx = 4096 descriptors and Rx = 4096  
>>> descriptors
>>> 
>>> # Tested with 2 different X540-⁠⁠T2 NICs units
>>> 
>>> # I tested all with a Dell FC430 which has a CPU Intel Xeon E5-2660 
>>> v3 @ 2.6GHz with 10 Cores and 2threads/Core (tested with and without 
>>> hyper-threading)
>>>    → same results and even worse
>>> 
>>> 
>>> Remark concerning the NIC stats :
>>>     I used the rte_eth_stats struct in order to get more information 
>>> about the losses and I observed that in some cases, when there is 
>>> packet losses,  ierrors value is > 0 and also ierrors + imissed + 
>>> ipackets < opackets. In other cases I get ierrors = 0 and  imissed + 
>>> ipackets = opackets which has more sense.
>>> 
>>> What could be the origin of that erroneous packets counting?
>>> 
>>> Do you have any explanation about that behaviour ?
>> 
>> Not knowing MoonGen at all other then a brief look at the source I may
>> not be much help, but I have a few ideas to help locate the problem.
>> 
>> Try using testpmd in tx-only mode or try Pktgen to see if you get the
>> same problem. I hope this would narrow down the problem to a specific
>> area. As we know DPDK works if correctly coded and testpmd/pktgen
>> work.
>> 
>>> 
>>> Thanks in advance.
>>> 
>>> David
>> 
>> Regards,
>> Keith


* Re: [dpdk-users] Packet losses using DPDK
  2017-05-12 16:18 ` Wiles, Keith
@ 2017-05-17  7:53   ` dfernandes
  2017-05-22  9:40     ` dfernandes
  0 siblings, 1 reply; 9+ messages in thread
From: dfernandes @ 2017-05-17  7:53 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

Thanks for your response !

I have installed Pktgen and I will perform some tests. So far it seems 
to work fine. I'll keep you informed. Thanks again.

David

Le 12.05.2017 18:18, Wiles, Keith a écrit :
>> On May 12, 2017, at 10:45 AM, dfernandes@toulouse.viveris.com wrote:
>> 
>> Hi !
>> 
>> I am working with MoonGen which is a fully scriptable packet generator 
>> build on DPDK.
>> (→ https://github.com/emmericp/MoonGen)
>> 
>> The system on which I perform tests has the following characteristics 
>> :
>> 
>> CPU : Intel Core i3-⁠⁠6100 (3.70 GHz, 2 cores, 2 threads/⁠⁠core)
>> NIC : X540-⁠⁠AT2 with 2x10Gbe ports
>> OS : Linux Ubuntu Server 16.04 (kernel 4.4)
>> 
>> I coded a MoonGen script which requests DPDK to transmit packets from 
>> one physical port and to receive them at the second physical port. The 
>> 2 physical ports are directly connected with an RJ-45 cat6 cable.
>> 
>> The issue is that I perform the same test with exactly the same script 
>> and the same parameters several times and the results show a random 
>> behavior. For most of the tests there is no losses but for some of 
>> them I observe packet losses. The percentage of lost packets is very 
>> variable. It happens even when the packet rate is very low.
>> 
>> Some examples of random failed tests :
>> 
>> # 1,000,000 packets sent (packets size = 124 bytes, rate = 76 Mbps) → 
>> 10170 lost packets
>> 
>> # 3,000,000 packets sent (packets size = 450 bytes, rate = 460 Mbps) → 
>> ALL packets lost
>> 
>> 
>> I tested the following system modifications without success :
>> 
>> # BIOS parameters :
>> 
>>    Hyperthreading : enable (because the machine has only 2 cores)
>>    Multi-⁠⁠⁠processor : enable
>>    Virtualization Technology (VTx) : disable
>>    Virtualization Technology for Directed I/⁠⁠⁠O (VTd) : disable
>>    Allow PCIe/⁠⁠⁠PCI SERR# Interrupt (=PCIe System Errors) : disable
>>    NUMA unavailable
>> 
>> # use of isolcpus in order to isolate the cores which are in charge of 
>> transmission and reception
>> 
>> # hugepages size = 1048576 kB
>> 
>> # size of buffer descriptors : tried with Tx = 512 descriptors and Rx 
>> = 128 descriptors and also with  Tx = 4096 descriptors and Rx = 4096  
>> descriptors
>> 
>> # Tested with 2 different X540-⁠⁠T2 NICs units
>> 
>> # I tested all with a Dell FC430 which has a CPU Intel Xeon E5-2660 v3 
>> @ 2.6GHz with 10 Cores and 2threads/Core (tested with and without 
>> hyper-threading)
>>    → same results and even worse
>> 
>> 
>> Remark concerning the NIC stats :
>>     I used the rte_eth_stats struct in order to get more information 
>> about the losses and I observed that in some cases, when there is 
>> packet losses,  ierrors value is > 0 and also ierrors + imissed + 
>> ipackets < opackets. In other cases I get ierrors = 0 and  imissed + 
>> ipackets = opackets which has more sense.
>> 
>> What could be the origin of that erroneous packets counting?
>> 
>> Do you have any explanation about that behaviour ?
> 
> Not knowing MoonGen at all other then a brief look at the source I may
> not be much help, but I have a few ideas to help locate the problem.
> 
> Try using testpmd in tx-only mode or try Pktgen to see if you get the
> same problem. I hope this would narrow down the problem to a specific
> area. As we know DPDK works if correctly coded and testpmd/pktgen
> work.
> 
>> 
>> Thanks in advance.
>> 
>> David
> 
> Regards,
> Keith


* Re: [dpdk-users] Packet losses using DPDK
  2017-05-15  8:25 ` Andriy Berestovskyy
@ 2017-05-15 13:49   ` dfernandes
  0 siblings, 0 replies; 9+ messages in thread
From: dfernandes @ 2017-05-15 13:49 UTC (permalink / raw)
  To: Andriy Berestovskyy; +Cc: users

Hi Andriy !

Thanks for your response.

Yes, I wait until the links are up.

David


Le 15.05.2017 10:25, Andriy Berestovskyy a écrit :
> Hey,
> It might be a silly guess, but do you wait for the links are up and
> ready to send/receive packets?
> 
> Andriy
> 
> On Fri, May 12, 2017 at 5:45 PM,  <dfernandes@toulouse.viveris.com> 
> wrote:
>> Hi !
>> 
>> I am working with MoonGen which is a fully scriptable packet generator 
>> build
>> on DPDK.
>> (→ https://github.com/emmericp/MoonGen)
>> 
>> The system on which I perform tests has the following characteristics 
>> :
>> 
>> CPU : Intel Core i3-⁠⁠6100 (3.70 GHz, 2 cores, 2 threads/⁠⁠core)
>> NIC : X540-⁠⁠AT2 with 2x10Gbe ports
>> OS : Linux Ubuntu Server 16.04 (kernel 4.4)
>> 
>> I coded a MoonGen script which requests DPDK to transmit packets from 
>> one
>> physical port and to receive them at the second physical port. The 2
>> physical ports are directly connected with an RJ-45 cat6 cable.
>> 
>> The issue is that I perform the same test with exactly the same script 
>> and
>> the same parameters several times and the results show a random 
>> behavior.
>> For most of the tests there is no losses but for some of them I 
>> observe
>> packet losses. The percentage of lost packets is very variable. It 
>> happens
>> even when the packet rate is very low.
>> 
>> Some examples of random failed tests :
>> 
>> # 1,000,000 packets sent (packets size = 124 bytes, rate = 76 Mbps) → 
>> 10170
>> lost packets
>> 
>> # 3,000,000 packets sent (packets size = 450 bytes, rate = 460 Mbps) → 
>> ALL
>> packets lost
>> 
>> 
>> I tested the following system modifications without success :
>> 
>> # BIOS parameters :
>> 
>>     Hyperthreading : enable (because the machine has only 2 cores)
>>     Multi-⁠⁠⁠processor : enable
>>     Virtualization Technology (VTx) : disable
>>     Virtualization Technology for Directed I/⁠⁠⁠O (VTd) : disable
>>     Allow PCIe/⁠⁠⁠PCI SERR# Interrupt (=PCIe System Errors) : disable
>>     NUMA unavailable
>> 
>> # use of isolcpus in order to isolate the cores which are in charge of
>> transmission and reception
>> 
>> # hugepages size = 1048576 kB
>> 
>> # size of buffer descriptors : tried with Tx = 512 descriptors and Rx 
>> = 128
>> descriptors and also with  Tx = 4096 descriptors and Rx = 4096  
>> descriptors
>> 
>> # Tested with 2 different X540-⁠⁠T2 NICs units
>> 
>> # I tested all with a Dell FC430 which has a CPU Intel Xeon E5-2660 v3 
>> @
>> 2.6GHz with 10 Cores and 2threads/Core (tested with and without
>> hyper-threading)
>>     → same results and even worse
>> 
>> 
>> Remark concerning the NIC stats :
>>      I used the rte_eth_stats struct in order to get more information 
>> about
>> the losses and I observed that in some cases, when there is packet 
>> losses,
>> ierrors value is > 0 and also ierrors + imissed + ipackets < opackets. 
>> In
>> other cases I get ierrors = 0 and  imissed + ipackets = opackets which 
>> has
>> more sense.
>> 
>> What could be the origin of that erroneous packets counting?
>> 
>> Do you have any explanation about that behaviour ?
>> 
>> Thanks in advance.
>> 
>> David


* Re: [dpdk-users] Packet losses using DPDK
  2017-05-12 15:45 dfernandes
  2017-05-12 16:18 ` Wiles, Keith
@ 2017-05-15  8:25 ` Andriy Berestovskyy
  2017-05-15 13:49   ` dfernandes
  1 sibling, 1 reply; 9+ messages in thread
From: Andriy Berestovskyy @ 2017-05-15  8:25 UTC (permalink / raw)
  To: dfernandes; +Cc: users

Hey,
It might be a silly guess, but do you wait for the links to come up and
be ready to send/receive packets before you start transmitting?
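
Something along these lines after rte_eth_dev_start() is what I mean (a
rough sketch, names are illustrative):

#include <rte_cycles.h>
#include <rte_ethdev.h>

static void wait_for_links(uint16_t nb_ports)
{
    for (uint16_t port = 0; port < nb_ports; port++) {
        struct rte_eth_link link = { 0 };

        /* Poll the link status for up to ~10 seconds per port. */
        for (int retry = 0; retry < 100; retry++) {
            rte_eth_link_get_nowait(port, &link);
            if (link.link_status == ETH_LINK_UP)
                break;
            rte_delay_ms(100);
        }
    }
}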

Andriy

On Fri, May 12, 2017 at 5:45 PM,  <dfernandes@toulouse.viveris.com> wrote:
> Hi !
>
> I am working with MoonGen which is a fully scriptable packet generator build
> on DPDK.
> (→ https://github.com/emmericp/MoonGen)
>
> The system on which I perform tests has the following characteristics :
>
> CPU : Intel Core i3-⁠⁠6100 (3.70 GHz, 2 cores, 2 threads/⁠⁠core)
> NIC : X540-⁠⁠AT2 with 2x10Gbe ports
> OS : Linux Ubuntu Server 16.04 (kernel 4.4)
>
> I coded a MoonGen script which requests DPDK to transmit packets from one
> physical port and to receive them at the second physical port. The 2
> physical ports are directly connected with an RJ-45 cat6 cable.
>
> The issue is that I perform the same test with exactly the same script and
> the same parameters several times and the results show a random behavior.
> For most of the tests there is no losses but for some of them I observe
> packet losses. The percentage of lost packets is very variable. It happens
> even when the packet rate is very low.
>
> Some examples of random failed tests :
>
> # 1,000,000 packets sent (packets size = 124 bytes, rate = 76 Mbps) → 10170
> lost packets
>
> # 3,000,000 packets sent (packets size = 450 bytes, rate = 460 Mbps) → ALL
> packets lost
>
>
> I tested the following system modifications without success :
>
> # BIOS parameters :
>
>     Hyperthreading : enable (because the machine has only 2 cores)
>     Multi-⁠⁠⁠processor : enable
>     Virtualization Technology (VTx) : disable
>     Virtualization Technology for Directed I/⁠⁠⁠O (VTd) : disable
>     Allow PCIe/⁠⁠⁠PCI SERR# Interrupt (=PCIe System Errors) : disable
>     NUMA unavailable
>
> # use of isolcpus in order to isolate the cores which are in charge of
> transmission and reception
>
> # hugepages size = 1048576 kB
>
> # size of buffer descriptors : tried with Tx = 512 descriptors and Rx = 128
> descriptors and also with  Tx = 4096 descriptors and Rx = 4096  descriptors
>
> # Tested with 2 different X540-⁠⁠T2 NICs units
>
> # I tested all with a Dell FC430 which has a CPU Intel Xeon E5-2660 v3 @
> 2.6GHz with 10 Cores and 2threads/Core (tested with and without
> hyper-threading)
>     → same results and even worse
>
>
> Remark concerning the NIC stats :
>      I used the rte_eth_stats struct in order to get more information about
> the losses and I observed that in some cases, when there is packet losses,
> ierrors value is > 0 and also ierrors + imissed + ipackets < opackets. In
> other cases I get ierrors = 0 and  imissed + ipackets = opackets which has
> more sense.
>
> What could be the origin of that erroneous packets counting?
>
> Do you have any explanation about that behaviour ?
>
> Thanks in advance.
>
> David



-- 
Andriy Berestovskyy


* Re: [dpdk-users] Packet losses using DPDK
  2017-05-12 15:45 dfernandes
@ 2017-05-12 16:18 ` Wiles, Keith
  2017-05-17  7:53   ` dfernandes
  2017-05-15  8:25 ` Andriy Berestovskyy
  1 sibling, 1 reply; 9+ messages in thread
From: Wiles, Keith @ 2017-05-12 16:18 UTC (permalink / raw)
  To: dfernandes; +Cc: users


> On May 12, 2017, at 10:45 AM, dfernandes@toulouse.viveris.com wrote:
> 
> Hi !
> 
> I am working with MoonGen which is a fully scriptable packet generator build on DPDK.
> (→ https://github.com/emmericp/MoonGen)
> 
> The system on which I perform tests has the following characteristics :
> 
> CPU : Intel Core i3-⁠⁠6100 (3.70 GHz, 2 cores, 2 threads/⁠⁠core)
> NIC : X540-⁠⁠AT2 with 2x10Gbe ports
> OS : Linux Ubuntu Server 16.04 (kernel 4.4)
> 
> I coded a MoonGen script which requests DPDK to transmit packets from one physical port and to receive them at the second physical port. The 2 physical ports are directly connected with an RJ-45 cat6 cable.
> 
> The issue is that I perform the same test with exactly the same script and the same parameters several times and the results show a random behavior. For most of the tests there is no losses but for some of them I observe packet losses. The percentage of lost packets is very variable. It happens even when the packet rate is very low.
> 
> Some examples of random failed tests :
> 
> # 1,000,000 packets sent (packets size = 124 bytes, rate = 76 Mbps) → 10170 lost packets
> 
> # 3,000,000 packets sent (packets size = 450 bytes, rate = 460 Mbps) → ALL packets lost
> 
> 
> I tested the following system modifications without success :
> 
> # BIOS parameters :
> 
>    Hyperthreading : enable (because the machine has only 2 cores)
>    Multi-⁠⁠⁠processor : enable
>    Virtualization Technology (VTx) : disable
>    Virtualization Technology for Directed I/⁠⁠⁠O (VTd) : disable
>    Allow PCIe/⁠⁠⁠PCI SERR# Interrupt (=PCIe System Errors) : disable
>    NUMA unavailable
> 
> # use of isolcpus in order to isolate the cores which are in charge of transmission and reception
> 
> # hugepages size = 1048576 kB
> 
> # size of buffer descriptors : tried with Tx = 512 descriptors and Rx = 128 descriptors and also with  Tx = 4096 descriptors and Rx = 4096  descriptors
> 
> # Tested with 2 different X540-⁠⁠T2 NICs units
> 
> # I tested all with a Dell FC430 which has a CPU Intel Xeon E5-2660 v3 @ 2.6GHz with 10 Cores and 2threads/Core (tested with and without hyper-threading)
>    → same results and even worse
> 
> 
> Remark concerning the NIC stats :
>     I used the rte_eth_stats struct in order to get more information about the losses and I observed that in some cases, when there is packet losses,  ierrors value is > 0 and also ierrors + imissed + ipackets < opackets. In other cases I get ierrors = 0 and  imissed + ipackets = opackets which has more sense.
> 
> What could be the origin of that erroneous packets counting?
> 
> Do you have any explanation about that behaviour ?

Not knowing MoonGen at all, other than a brief look at the source, I may not be much help, but I have a few ideas to help locate the problem.

Try using testpmd in tx-only mode, or try Pktgen, to see if you get the same problem. I hope this will narrow the problem down to a specific area. We know DPDK works when correctly coded, since testpmd and pktgen work.

> 
> Thanks in advance.
> 
> David

Regards,
Keith



* [dpdk-users] Packet losses using DPDK
@ 2017-05-12 15:45 dfernandes
  2017-05-12 16:18 ` Wiles, Keith
  2017-05-15  8:25 ` Andriy Berestovskyy
  0 siblings, 2 replies; 9+ messages in thread
From: dfernandes @ 2017-05-12 15:45 UTC (permalink / raw)
  To: users

Hi !

I am working with MoonGen, which is a fully scriptable packet generator 
built on DPDK.
(→ https://github.com/emmericp/MoonGen)

The system on which I perform tests has the following characteristics :

CPU : Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
NIC : X540-AT2 with 2x 10GbE ports
OS : Linux Ubuntu Server 16.04 (kernel 4.4)

I wrote a MoonGen script which asks DPDK to transmit packets from one 
physical port and to receive them on the second physical port. The two 
physical ports are directly connected with an RJ-45 Cat 6 cable.

The issue is that when I run the same test several times, with exactly 
the same script and the same parameters, the results are random. Most of 
the tests show no losses, but in some of them I observe packet losses. 
The percentage of lost packets varies widely, and losses happen even 
when the packet rate is very low.

Some examples of randomly failing tests :

# 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) → 
10,170 lost packets

# 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) → 
ALL packets lost


I tried the following system modifications, without success :

# BIOS parameters :

     Hyperthreading : enabled (because the machine has only 2 cores)
     Multi-processor : enabled
     Virtualization Technology (VT-x) : disabled
     Virtualization Technology for Directed I/O (VT-d) : disabled
     Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors) : disabled
     NUMA : unavailable

# use of isolcpus to isolate the cores in charge of transmission and 
reception

# hugepage size = 1048576 kB (1 GB hugepages)

# descriptor ring sizes : tried Tx = 512 / Rx = 128 descriptors, and 
also Tx = 4096 / Rx = 4096 descriptors

# Tested with 2 different X540-T2 NIC units

# I also ran everything on a Dell FC430, which has an Intel Xeon E5-2660 
v3 CPU @ 2.6 GHz (10 cores, 2 threads/core), tested with and without 
hyper-threading
     → same results, or even worse


Remark concerning the NIC stats :
      I used the rte_eth_stats struct to get more information about the 
losses. I observed that in some cases, when there are packet losses, the 
ierrors value is > 0 and also ierrors + imissed + ipackets < opackets. 
In other cases I get ierrors = 0 and imissed + ipackets = opackets, 
which makes more sense.

What could be the origin of that erroneous packet counting?

Do you have any explanation for that behaviour?

Thanks in advance.

David

