DPDK usage discussions
From: Andriy Berestovskyy <aber@semihalf.com>
To: dfernandes@toulouse.viveris.com
Cc: "Wiles, Keith" <keith.wiles@intel.com>, users <users@dpdk.org>
Subject: Re: [dpdk-users] Packet losses using DPDK
Date: Mon, 22 May 2017 14:10:20 +0200	[thread overview]
Message-ID: <CAOysbxqUe-fptEox+24DaWNuaiP4WmS0WF7ZKn1CkExCjxK13Q@mail.gmail.com> (raw)
In-Reply-To: <aa3d6e6f6ab49d7ef05472fd735216a7@toulouse.viveris.com>

Hi,
Please have a look at https://en.wikipedia.org/wiki/High_availability
I was trying to calculate your link availability, but my Ubuntu
calculator gives me 0 for 2 / 34 481 474 846 ;)
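(Doing it by hand: 2 / 34 481 474 846 ≈ 5.8e-11, so roughly
99.999999994% of the frames were delivered -- about ten nines of
availability.)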

Most probably you dropped a packet during the start/stop.
ierrors is what your NIC considers an error Ethernet frame
(bad checksum, runts, giants, etc.). Note that in your dump port 1
transmitted 34481474848 frames while port 0 received 34481474846, and
one of the two missing frames shows up in port 0's ierrors counter,
i.e. the NIC saw the frame but rejected it as malformed.
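
For reference, here is a minimal sketch in plain DPDK C (not MoonGen
Lua) of how those per-port counters can be read. It assumes EAL is
already initialized and the port configured and started elsewhere
(port ids are uint16_t in recent DPDK releases, uint8_t in older
ones):

#include <inttypes.h>
#include <stdio.h>

#include <rte_ethdev.h>

/* Read and print the basic per-port counters DPDK keeps.
 * Assumes rte_eal_init() has run and the port is started. */
static void
print_port_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0) {
		printf("port %u: cannot read stats\n", port_id);
		return;
	}

	printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
	       port_id, stats.ipackets, stats.opackets);
	/* ierrors: frames the NIC itself rejected as malformed
	 * (bad checksum, runt, giant, ...); they never reach the app. */
	printf("  ierrors=%" PRIu64 "\n", stats.ierrors);
	/* imissed: frames dropped in hardware because the RX queues
	 * were full, i.e. software did not drain them fast enough. */
	printf("  imissed=%" PRIu64 "\n", stats.imissed);
	/* rx_nombuf: RX mbuf allocation failures. */
	printf("  rx_nombuf=%" PRIu64 "\n", stats.rx_nombuf);
}

The portStats table in your MoonGen dump appears to be a Lua rendering
of this same rte_eth_stats struct (the field names match one to one).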

Regards,
Andriy

On Mon, May 22, 2017 at 11:40 AM,  <dfernandes@toulouse.viveris.com> wrote:
> Hi !
>
> I performed many tests using Pktgen and it seems to work much better.
> However, one of the tests showed that 2 packets were dropped. In this
> test I sent packets between the 2 physical ports in bidirectional mode
> for 24 hours. The packet size was 450 bytes and the rate on both ports
> was 1500 Mbps.
>
> The port stats I got are the following:
>
>
> ** Port 0 **  Tx: 34481474912. Rx: 34481474846. Dropped: 2
> ** Port 1 **  Tx: 34481474848. Rx: 34481474912. Dropped: 0
>
> DEBUG portStats = {
>   [1] = {
>     ["ipackets"] = 34481474912,
>     ["ierrors"] = 0,
>     ["rx_nombuf"] = 0,
>     ["ibytes"] = 15378737810752,
>     ["oerrors"] = 0,
>     ["opackets"] = 34481474848,
>     ["obytes"] = 15378737782208,
>   },
>   [0] = {
>     ["ipackets"] = 34481474846,
>     ["ierrors"] = 1,
>     ["rx_nombuf"] = 0,
>     ["ibytes"] = 15378737781316,
>     ["oerrors"] = 0,
>     ["opackets"] = 34481474912,
>     ["obytes"] = 15378737810752,
>   },
>   ["n"] = 2,
> }
>
> So 2 packets were dropped by port 0, and I see that the "ierrors"
> counter has a value of 1. Do you know what this counter represents?
> And how should it be interpreted?
> By the way, I also ran the same test with the packet size changed to
> 1518 bytes and the rate to 4500 Mbps (on each port), and 0 packets
> were dropped.
>
> David
>
>
>
>
> On 17.05.2017 09:53, dfernandes@toulouse.viveris.com wrote:
>>
>> Thanks for your response !
>>
>> I have installed Pktgen and I will perform some tests. So far it seems
>> to work fine. I'll keep you informed. Thanks again.
>>
>> David
>>
>> On 12.05.2017 18:18, Wiles, Keith wrote:
>>>>
>>>> On May 12, 2017, at 10:45 AM, dfernandes@toulouse.viveris.com wrote:
>>>>
>>>> Hi !
>>>>
>>>> I am working with MoonGen, which is a fully scriptable packet
>>>> generator built on DPDK.
>>>> (→ https://github.com/emmericp/MoonGen)
>>>>
>>>> The system on which I perform tests has the following characteristics:
>>>>
>>>> CPU: Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
>>>> NIC: X540-AT2 with 2x 10GbE ports
>>>> OS: Linux Ubuntu Server 16.04 (kernel 4.4)
>>>>
>>>> I coded a MoonGen script which asks DPDK to transmit packets from
>>>> one physical port and receive them on the second physical port. The 2
>>>> physical ports are directly connected with an RJ-45 Cat6 cable.
>>>>
>>>> The issue is that I run the same test, with exactly the same script
>>>> and the same parameters, several times and the results show random
>>>> behavior. Most of the tests show no losses, but in some of them I
>>>> observe packet losses. The percentage of lost packets is highly
>>>> variable. It happens even when the packet rate is very low.
>>>>
>>>> Some examples of randomly failing tests:
>>>>
>>>> # 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) →
>>>> 10170 lost packets
>>>>
>>>> # 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) →
>>>> ALL packets lost
>>>>
>>>>
>>>> I tested the following system modifications, without success:
>>>>
>>>> # BIOS parameters:
>>>>
>>>>    Hyperthreading: enabled (because the machine has only 2 cores)
>>>>    Multi-processor: enabled
>>>>    Virtualization Technology (VTx): disabled
>>>>    Virtualization Technology for Directed I/O (VTd): disabled
>>>>    Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors): disabled
>>>>    NUMA unavailable
>>>>
>>>> # use of isolcpus to isolate the cores in charge of transmission
>>>> and reception
>>>>
>>>> # hugepages size = 1048576 kB
>>>>
>>>> # size of buffer descriptors: tried with Tx = 512 and Rx = 128
>>>> descriptors, and also with Tx = 4096 and Rx = 4096 descriptors
>>>>
>>>> # Tested with 2 different X540-T2 NIC units
>>>>
>>>> # I also ran everything on a Dell FC430, which has an Intel Xeon
>>>> E5-2660 v3 CPU @ 2.6 GHz with 10 cores and 2 threads/core (tested
>>>> with and without hyper-threading)
>>>>    → same results, and even worse
>>>>
>>>>
>>>> Remark concerning the NIC stats:
>>>>    I used the rte_eth_stats struct to get more information about the
>>>> losses, and I observed that in some cases, when there are packet
>>>> losses, the ierrors value is > 0 and also ierrors + imissed +
>>>> ipackets < opackets. In other cases I get ierrors = 0 and imissed +
>>>> ipackets = opackets, which makes more sense.
>>>>
>>>> What could be the origin of that erroneous packet counting?
>>>>
>>>> Do you have any explanation for that behaviour?
>>>
>>>
>>> Not knowing MoonGen at all, other than from a brief look at the
>>> source, I may not be much help, but I have a few ideas to help locate
>>> the problem.
>>>
>>> Try using testpmd in tx-only mode, or try Pktgen, to see whether you
>>> get the same problem. That should narrow the problem down to a
>>> specific area: we know DPDK works when correctly coded, and
>>> testpmd/pktgen work.
>>>
>>>>
>>>> Thanks in advance.
>>>>
>>>> David
>>>
>>>
>>> Regards,
>>> Keith
>
>



-- 
Andriy Berestovskyy

Thread overview: 9+ messages
2017-05-12 15:45 dfernandes
2017-05-12 16:18 ` Wiles, Keith
2017-05-17  7:53   ` dfernandes
2017-05-22  9:40     ` dfernandes
2017-05-22 12:10       ` Andriy Berestovskyy [this message]
2017-05-22 14:12       ` Wiles, Keith
2017-05-15  8:25 ` Andriy Berestovskyy
2017-05-15 13:49   ` dfernandes
  -- strict thread matches above, loose matches on Subject: below --
2017-05-09  7:53 David Fernandes
