From: David Fernandes <dfernandes@toulouse.viveris.com>
To: users@dpdk.org
Subject: [dpdk-users] Packet losses using DPDK
Date: Tue, 9 May 2017 09:53:31 +0200
Message-ID: <759280c8-ff0d-a61b-e051-819f97be86d6@toulouse.viveris.com>
Hi!
I am working with MoonGen, which is a fully scriptable packet generator
built on DPDK (https://github.com/emmericp/MoonGen).
The system on which I run the tests has the following characteristics:
CPU: Intel Core i3-6100 (3.70 GHz, 2 cores, 2 threads/core)
NIC: Intel X540-AT2 with 2x 10GbE ports
OS: Ubuntu Server 16.04 (Linux kernel 4.4)
I wrote a MoonGen script that asks DPDK to transmit packets from one
physical port and to receive them on the second physical port. The two
ports are directly connected with an RJ-45 Cat 6 cable.
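To make it concrete, here is roughly what the test boils down to in
plain DPDK C (the actual script is MoonGen Lua; this is only a sketch
that assumes the ports and queues are already configured and started,
with port 0 transmitting and port 1 receiving, and placeholder burst
and packet sizes):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST    32
#define PKT_SIZE 124   /* example size from one of the failing runs */

/* Sketch only: rte_eal_init(), port configuration, queue setup and
 * rte_eth_dev_start() are assumed to have been done already. */
static void tx_rx_test(struct rte_mempool *pool, uint64_t n_pkts)
{
    struct rte_mbuf *tx_bufs[BURST], *rx_bufs[BURST];
    uint64_t sent = 0, received = 0;

    while (sent < n_pkts) {
        if (rte_pktmbuf_alloc_bulk(pool, tx_bufs, BURST) == 0) {
            for (int i = 0; i < BURST; i++) {
                tx_bufs[i]->data_len = PKT_SIZE;
                tx_bufs[i]->pkt_len  = PKT_SIZE;
                /* Ethernet/IP/UDP headers are filled in here */
            }
            uint16_t nb_tx = rte_eth_tx_burst(0, 0, tx_bufs, BURST);
            sent += nb_tx;
            /* free whatever the NIC did not accept */
            for (uint16_t i = nb_tx; i < BURST; i++)
                rte_pktmbuf_free(tx_bufs[i]);
        }
        /* drain the second port and count what came back over the cable */
        uint16_t nb_rx = rte_eth_rx_burst(1, 0, rx_bufs, BURST);
        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(rx_bufs[i]);
        received += nb_rx;
    }
    /* the real script keeps polling RX for a while after the last TX */
    printf("sent %" PRIu64 ", received %" PRIu64 "\n", sent, received);
}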
The issue is that when I run the same test several times, with exactly
the same script and the same parameters, the results are random. Most
runs show no losses, but some of them lose packets, and the percentage
of lost packets varies widely. This happens even at very low packet
rates.
Some examples of failed runs:
# 1,000,000 packets sent (packet size = 124 bytes, rate = 76 Mbps) →
10,170 lost packets
# 3,000,000 packets sent (packet size = 450 bytes, rate = 460 Mbps) →
ALL packets lost
I tried the following system modifications, without success:
# BIOS parameters:
Hyper-threading: enabled (because the machine has only 2 cores)
Multi-processor: enabled
Virtualization Technology (VT-x): disabled
Virtualization Technology for Directed I/O (VT-d): disabled
Allow PCIe/PCI SERR# Interrupt (= PCIe System Errors): disabled
NUMA: not available on this platform
# isolcpus to isolate the cores in charge of transmission and reception
# hugepage size = 1048576 kB (1 GB hugepages)
# descriptor ring sizes: tried Tx = 512 / Rx = 128 descriptors, and also
Tx = 4096 / Rx = 4096 descriptors (see the DPDK sketch after this list)
# tested with 2 different X540-T2 NIC units
# repeated everything on a Dell FC430 with an Intel Xeon E5-2660 v3 @
2.6 GHz (10 cores, 2 threads/core), tested with and without
hyper-threading
→ same results, or even worse
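Regarding the descriptor sizes above: as far as I understand, MoonGen
passes them down to the standard DPDK queue setup calls, so the values
I tried correspond to something like the following (port and queue ids
are placeholders, only the descriptor counts matter here):

#include <rte_ethdev.h>

/* Illustration of the tested ring sizes; ids are placeholders. */
static int setup_queues(uint8_t tx_port, uint8_t rx_port,
                        struct rte_mempool *pool)
{
    const uint16_t tx_descs = 512;   /* also tried 4096 */
    const uint16_t rx_descs = 128;   /* also tried 4096 */
    int ret;

    ret = rte_eth_tx_queue_setup(tx_port, 0, tx_descs,
                                 rte_eth_dev_socket_id(tx_port), NULL);
    if (ret < 0)
        return ret;

    return rte_eth_rx_queue_setup(rx_port, 0, rx_descs,
                                  rte_eth_dev_socket_id(rx_port),
                                  NULL, pool);
}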
Remark concerning the NIC stats:
I used the rte_eth_stats struct to get more information about the
losses. In some runs with packet loss, ierrors > 0 and also
ierrors + imissed + ipackets < opackets. In other runs, ierrors = 0 and
imissed + ipackets = opackets, which makes more sense.
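For reference, this is essentially how I read the counters after each
run (a minimal sketch; tx_port and rx_port are placeholders for the two
X540 ports):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Minimal sketch of the counter check described above. */
static void print_loss_counters(uint8_t tx_port, uint8_t rx_port)
{
    struct rte_eth_stats tx, rx;

    rte_eth_stats_get(tx_port, &tx);
    rte_eth_stats_get(rx_port, &rx);

    printf("opackets=%" PRIu64 " ipackets=%" PRIu64
           " imissed=%" PRIu64 " ierrors=%" PRIu64 "\n",
           tx.opackets, rx.ipackets, rx.imissed, rx.ierrors);

    /* Every transmitted packet should show up on the RX side as
     * received, missed (ring full) or errored; in some failing runs
     * this sum is smaller than opackets. */
    uint64_t accounted = rx.ipackets + rx.imissed + rx.ierrors;
    if (accounted < tx.opackets)
        printf("%" PRIu64 " packets unaccounted for\n",
               tx.opackets - accounted);
}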
What could be the origin of this erroneous packet counting?
Do you have any explanation for this behaviour?
Thanks in advance.
David
Thread overview: 9+ messages
2017-05-09 7:53 David Fernandes [this message]
2017-05-12 15:45 dfernandes
2017-05-12 16:18 ` Wiles, Keith
2017-05-17 7:53 ` dfernandes
2017-05-22 9:40 ` dfernandes
2017-05-22 12:10 ` Andriy Berestovskyy
2017-05-22 14:12 ` Wiles, Keith
2017-05-15 8:25 ` Andriy Berestovskyy
2017-05-15 13:49 ` dfernandes