DPDK usage discussions
* Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet
@ 2019-03-21 11:50 Paul T
  2019-03-21 14:00 ` Tomáš Jánský
  0 siblings, 1 reply; 4+ messages in thread
From: Paul T @ 2019-03-21 11:50 UTC (permalink / raw)
  To: tomas.jansky, users

Hi Tomas,

I would isolate the CPUs on which the DPDK threads are running from the Linux scheduler.  The packet drops at low packet sizes (64B) make me think it is context switching happening on the core because of the Linux scheduler.

Add the following to the kernel command-line parameters in your GRUB config:
isolcpus=<cpus to isolate>, e.g. 1,3,4 or 1-4
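
For example, a minimal sketch of what that could look like on a Debian/Ubuntu-style system (the core list 2,4 is only an illustration matching the worker lcores in the testpmd commands below; adjust the cores and the grub tooling to your setup and distro):

# in /etc/default/grub, append to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2,4"

# regenerate the grub config and reboot
sudo update-grub    # or: sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot

# after reboot, confirm the kernel picked it up
cat /proc/cmdline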

Regards

Paul

Message: 3
Date: Thu, 21 Mar 2019 10:53:34 +0100
From: Tomáš Jánský <tomas.jansky@flowmon.com>
To: users@dpdk.org
Subject: [dpdk-users] X710 DA2 (2x10G) performance 64B packets
Message-ID:
        <CAPP7y6z13qFR-34+-Xn97ru5jOnaVAV7s=6WPgk_j=9CLMQrSQ@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"

Hello DPDK users,

I am having an issue concerning the performance of the X710 DA2 (2x10G) NIC
when using the testpmd (and also l2fwd) application on both ports.

HW and SW parameters:
CPUs: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz x16
Disabled hyperthreading.
All used lcores and ports are on the same NUMA node (0).
Hugepages: 1024x 2MB on the NUMA node 0.
RAM: 64 GB

DPDK version: 18.05.1
Module: IGB UIO
GCC version: 4.8.5

When using testpmd application only on one port:
./testpmd -b 0000:04:00.0 -n 4 --lcore=0@0,2@2 -- --socket-num=0
--nb-cores=1 --nb-ports=1 --numa --forward-mode=rxonly

14.63 Mpps (64B packet length) - 0.01% packets dropped

When using testpmd on both ports:
./testpmd -n 4 --lcore=0@0,2@2,4@4 -- --socket-num=0 --nb-cores=2
--nb-ports=2 --numa --forward-mode=rxonly

28.08 Mpps (64B packet length) - 3.47% packets dropped
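
(For anyone reproducing this, a sketch of how the drop counters can be inspected with an interactive testpmd run; the same options as above plus -i, using standard testpmd console commands:)

./testpmd -n 4 --lcore=0@0,2@2,4@4 -- -i --socket-num=0 --nb-cores=2 \
  --nb-ports=2 --numa --forward-mode=rxonly
testpmd> start
testpmd> show port stats all     # RX-missed growing means drops at the NIC RX rings
testpmd> show port xstats all    # extended per-port/per-queue counters from the PMD
testpmd> stop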

Does anybody have an explanation for why I am experiencing this performance
drop?
Any suggestion would be much appreciated.

Thank you
Tomas

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet
  2019-03-21 11:50 [dpdk-users] X710 DA2 (2x10G) performance 64B packet Paul T
@ 2019-03-21 14:00 ` Tomáš Jánský
  2019-03-21 14:31   ` Paul T
  0 siblings, 1 reply; 4+ messages in thread
From: Tomáš Jánský @ 2019-03-21 14:00 UTC (permalink / raw)
  To: Paul T; +Cc: users

Hi Paul,

thank you for your suggestion.
I tried isolating the cores; however, the improvement was negligible.

Tomas

On Thu, Mar 21, 2019 at 12:50 PM Paul T <paultop6@outlook.com> wrote:

> Hi Tomas,
>
> I would isolate the CPUs on which the DPDK threads are running from the
> Linux scheduler.  The packet drops at low packet sizes (64B) make me think
> it is context switching happening on the core because of the Linux
> scheduler.
>
> Add the following to the kernel command-line parameters in your GRUB
> config:
> isolcpus=<cpus to isolate>, e.g. 1,3,4 or 1-4
>
> Regards
>
> Paul
>
> Message: 3
> Date: Thu, 21 Mar 2019 10:53:34 +0100
> From: Tomáš Jánský <tomas.jansky@flowmon.com>
> To: users@dpdk.org
> Subject: [dpdk-users] X710 DA2 (2x10G) performance 64B packets
> Message-ID:
>         <CAPP7y6z13qFR-34+-Xn97ru5jOnaVAV7s=6WPgk_j=
> 9CLMQrSQ@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello DPDK users,
>
> I am having an issue concerning the performance of the X710 DA2 (2x10G) NIC
> when using the testpmd (and also l2fwd) application on both ports.
>
> HW and SW parameters:
> CPUs: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz x16
> Disabled hyperthreading.
> All used lcores and ports are on the same NUMA node (0).
> Hugepages: 1024x 2MB on the NUMA node 0.
> RAM: 64 GB
>
> DPDK version: 18.05.1
> Module: IGB UIO
> GCC version: 4.8.5
>
> When using testpmd application only on one port:
> ./testpmd -b 0000:04:00.0 -n 4 --lcore=0@0,2@2 -- --socket-num=0
> --nb-cores=1 --nb-ports=1 --numa --forward-mode=rxonly
>
> 14.63 Mpps (64B packet length) - 0.01% packets dropped
>
> When using testpmd on both ports:
> ./testpmd -n 4 --lcore=0@0,2@2,4@4 -- --socket-num=0 --nb-cores=2
> --nb-ports=2 --numa --forward-mode=rxonly
>
> 28.08 Mpps (64B packet length) - 3.47% packets dropped
>
> Does anybody have an explanation for why I am experiencing this performance
> drop?
> Any suggestion would be much appreciated.
>
> Thank you
> Tomas
>

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet
  2019-03-21 14:00 ` Tomáš Jánský
@ 2019-03-21 14:31   ` Paul T
  2019-03-21 14:44     ` Tomáš Jánský
  0 siblings, 1 reply; 4+ messages in thread
From: Paul T @ 2019-03-21 14:31 UTC (permalink / raw)
  To: Tomáš Jánský; +Cc: users

1GB huge page chunks instead of 2MB would also be worth a try
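
A minimal sketch of what that could look like (the page count of 16 and the mount point are only examples, not taken from your setup):

# kernel command line: make 1G the default size and reserve 16 pages at boot
default_hugepagesz=1G hugepagesz=1G hugepages=16

# after reboot, mount hugetlbfs with the 1G page size for DPDK to use
mkdir -p /mnt/huge1G
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge1G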

________________________________
From: Tomáš Jánský <tomas.jansky@flowmon.com>
Sent: 21 March 2019 14:00
To: Paul T
Cc: users@dpdk.org
Subject: Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet

Hi Paul,

thank you for your suggestion.
I tried isolating the cores; however, the improvement was negligible.

Tomas

On Thu, Mar 21, 2019 at 12:50 PM Paul T <paultop6@outlook.com> wrote:
Hi Tomas,

I would isolate the CPUs on which the DPDK threads are running from the Linux scheduler.  The packet drops at low packet sizes (64B) make me think it is context switching happening on the core because of the Linux scheduler.

Add the following to the kernel command-line parameters in your GRUB config:
isolcpus=<cpus to isolate>, e.g. 1,3,4 or 1-4

Regards

Paul

Message: 3
Date: Thu, 21 Mar 2019 10:53:34 +0100
From: Tomáš Jánský <tomas.jansky@flowmon.com>
To: users@dpdk.org
Subject: [dpdk-users] X710 DA2 (2x10G) performance 64B packets
Message-ID:
        <CAPP7y6z13qFR-34+-Xn97ru5jOnaVAV7s=6WPgk_j=9CLMQrSQ@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"

Hello DPDK users,

I am having an issue concerning the performance of the X710 DA2 (2x10G) NIC
when using the testpmd (and also l2fwd) application on both ports.

HW and SW parameters:
CPUs: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz x16
Disabled hyperthreading.
All used lcores and ports are on the same NUMA node (0).
Hugepages: 1024x 2MB on the NUMA node 0.
RAM: 64 GB

DPDK version: 18.05.1
Module: IGB UIO
GCC version: 4.8.5

When using testpmd application only on one port:
./testpmd -b 0000:04:00.0 -n 4 --lcore=0@0,2@2 -- --socket-num=0
--nb-cores=1 --nb-ports=1 --numa --forward-mode=rxonly

14.63 Mpps (64B packet length) - 0.01% packets dropped

When using testpmd on both ports:
./testpmd -n 4 --lcore=0@0,2@2,4@4 -- --socket-num=0 --nb-cores=2
--nb-ports=2 --numa --forward-mode=rxonly

28.08 Mpps (64B packet length) - 3.47% packets dropped

Does anybody have an explanation for why I am experiencing this performance
drop?
Any suggestion would be much appreciated.

Thank you
Tomas

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet
  2019-03-21 14:31   ` Paul T
@ 2019-03-21 14:44     ` Tomáš Jánský
  0 siblings, 0 replies; 4+ messages in thread
From: Tomáš Jánský @ 2019-03-21 14:44 UTC (permalink / raw)
  To: Paul T; +Cc: users

Thanks Paul for another suggestion.

This is my boot setup now:
default_hugepagesz=1G hugepagesz=1G isolcpus=1-4 nohz_full=1-4 rcu_nocbs=1-4

and I am using 16x 1GB hugepages on the NUMA node.
But so far no improvement.
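
For reference, a sketch of how the per-node 1G hugepage allocation can be double-checked through sysfs (standard kernel paths, nothing DPDK-specific):

# 1G pages reserved and still free on NUMA node 0
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages

# overall hugepage summary
grep -i huge /proc/meminfo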

Tomas

On Thu, Mar 21, 2019 at 3:31 PM Paul T <paultop6@outlook.com> wrote:

> 1GB huge page chunks instead of 2MB would also be worth a try
>
> ------------------------------
> *From:* Tomáš Jánský <tomas.jansky@flowmon.com>
> *Sent:* 21 March 2019 14:00
> *To:* Paul T
> *Cc:* users@dpdk.org
> *Subject:* Re: [dpdk-users] X710 DA2 (2x10G) performance 64B packet
>
> Hi Paul,
>
> thank you for your suggestion.
> I tried isolating the cores; however, the improvement was negligible.
>
> Tomas
>
> On Thu, Mar 21, 2019 at 12:50 PM Paul T <paultop6@outlook.com> wrote:
>
> Hi Tomas,
>
> I would isolate the CPUs on which the DPDK threads are running from the
> Linux scheduler.  The packet drops at low packet sizes (64B) make me think
> it is context switching happening on the core because of the Linux
> scheduler.
>
> Add the following to the kernel command-line parameters in your GRUB
> config:
> isolcpus=<cpus to isolate>, e.g. 1,3,4 or 1-4
>
> Regards
>
> Paul
>
> Message: 3
> Date: Thu, 21 Mar 2019 10:53:34 +0100
> From: Tomáš Jánský <tomas.jansky@flowmon.com>
> To: users@dpdk.org
> Subject: [dpdk-users] X710 DA2 (2x10G) performance 64B packets
> Message-ID:
>         <CAPP7y6z13qFR-34+-Xn97ru5jOnaVAV7s=6WPgk_j=
> 9CLMQrSQ@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello DPDK users,
>
> I am having an issue concerning the performance of the X710 DA2 (2x10G) NIC
> when using the testpmd (and also l2fwd) application on both ports.
>
> HW and SW parameters:
> CPUs: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz x16
> Disabled hyperthreading.
> All used lcores and ports are on the same NUMA node (0).
> Hugepages: 1024x 2MB on the NUMA node 0.
> RAM: 64 GB
>
> DPDK version: 18.05.1
> Module: IGB UIO
> GCC version: 4.8.5
>
> When using testpmd application only on one port:
> ./testpmd -b 0000:04:00.0 -n 4 --lcore=0@0,2@2 -- --socket-num=0
> --nb-cores=1 --nb-ports=1 --numa --forward-mode=rxonly
>
> 14.63 Mpps (64B packet length) - 0.01% packets dropped
>
> When using testpmd on both ports:
> ./testpmd -n 4 --lcore=0@0,2@2,4@4 -- --socket-num=0 --nb-cores=2
> --nb-ports=2 --numa --forward-mode=rxonly
>
> 28.08 Mpps (64B packet length) - 3.47% packets dropped
>
> Does anybody have an explanation for why I am experiencing this performance
> drop?
> Any suggestion would be much appreciated.
>
> Thank you
> Tomas
>
>

^ permalink raw reply	[flat|nested] 4+ messages in thread


Thread overview: 4+ messages
2019-03-21 11:50 [dpdk-users] X710 DA2 (2x10G) performance 64B packet Paul T
2019-03-21 14:00 ` Tomáš Jánský
2019-03-21 14:31   ` Paul T
2019-03-21 14:44     ` Tomáš Jánský
