DPDK usage discussions
From: Paul T <paultop6@outlook.com>
To: "Tomáš Jánský" <tomas.jansky@flowmon.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet
Date: Thu, 21 Mar 2019 14:31:16 +0000
Message-ID: <DB7PR06MB482725FDAEEF8AB3F61126B889420@DB7PR06MB4827.eurprd06.prod.outlook.com>
In-Reply-To: <CAPP7y6x3xRWPjrH=VhZmQ=q==R9FH1hv0j7Kuy0iUaj_xUHshw@mail.gmail.com>

1GB huge pages instead of 2MB would also be worth a try.
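A minimal sketch of how 1GB pages are typically enabled, assuming a GRUB-based boot and a CPU with the pdpe1gb flag; the page count of 4 is an arbitrary example:

    # kernel command line (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub)
    default_hugepagesz=1G hugepagesz=1G hugepages=4

    # after reboot, mount a hugetlbfs instance for DPDK to use
    mkdir -p /mnt/huge_1g
    mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge_1g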

________________________________
From: Tomáš Jánský <tomas.jansky@flowmon.com>
Sent: 21 March 2019 14:00
To: Paul T
Cc: users@dpdk.org
Subject: Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet

Hi Paul,

thank you for your suggestion.
I tried isolating the cores; however, the improvement was negligible.

Tomas

On Thu, Mar 21, 2019 at 12:50 PM Paul T <paultop6@outlook.com> wrote:
Hi Tomas,

I would isolate the CPUs on which the DPDK threads are running from the Linux scheduler. The low packet drop at 64B makes me think it's context switching happening on the cores because of the Linux scheduler.

Add the following to the kernel command-line parameters in your GRUB config (see the sketch below):
isolcpus=<cpus to isolate>, e.g. isolcpus=1,3,4 or isolcpus=1-4
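A minimal sketch of where that parameter goes, assuming a typical GRUB setup; file paths and the regeneration command vary by distribution, and the core list 2,4 is just an example matching the lcores used later in this thread:

    # /etc/default/grub -- keep your existing options, append isolcpus
    GRUB_CMDLINE_LINUX="... isolcpus=2,4"

    # then regenerate the GRUB config and reboot, e.g. on RHEL/CentOS:
    grub2-mkconfig -o /boot/grub2/grub.cfg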

Regards

Paul

Date: Thu, 21 Mar 2019 10:53:34 +0100
From: Tomáš Jánský <tomas.jansky@flowmon.com>
To: users@dpdk.org
Subject: [dpdk-users] X710 DA2 (2x10G) performance 64B packets
Message-ID: <CAPP7y6z13qFR-34+-Xn97ru5jOnaVAV7s=6WPgk_j=9CLMQrSQ@mail.gmail.com>

Hello DPDK users,

I am having an issue with the performance of the X710 DA2 (2x10G) NIC
when using the testpmd (and also l2fwd) application on both ports.

HW and SW parameters:
CPUs: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz x16
Hyperthreading disabled.
All used lcores and ports are on the same NUMA node (0).
Hugepages: 1024x 2MB on NUMA node 0 (allocation check sketched below).
RAM: 64 GB
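
A quick way to confirm that per-node allocation, assuming the standard Linux sysfs layout:

    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages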

DPDK version: 18.05.1
Module: igb_uio
GCC version: 4.8.5

When using the testpmd application on one port only:
./testpmd -b 0000:04:00.0 -n 4 --lcore=0@0,2@2 -- --socket-num=0 --nb-cores=1 --nb-ports=1 --numa --forward-mode=rxonly

14.63 Mpps (64B packet length) - 0.01% packets dropped

When using testpmd on both ports:
./testpmd -n 4 --lcore=0@0,2@2,4@4 -- --socket-num=0 --nb-cores=2 --nb-ports=2 --numa --forward-mode=rxonly

28.08 Mpps (64B packet length) - 3.47% packets dropped
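
For reference, drop figures like these can be read at testpmd's prompt, assuming testpmd is started with -i (interactive mode); RX-missed reports packets the NIC could not deliver to the host:

    testpmd> show port stats all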

Does anybody have an explanation for why I am experiencing this performance drop?
Any suggestion would be much appreciated.

Thank you
Tomas

Thread overview: 4+ messages
2019-03-21 11:50 Paul T
2019-03-21 14:00 ` Tomáš Jánský
2019-03-21 14:31   ` Paul T [this message]
2019-03-21 14:44     ` Tomáš Jánský
