DPDK usage discussions
* [dpdk-users] Tx burst getting failed with Virtio driver
  2019-04-04  7:25 ` [dpdk-users] Tx burst getting failed with Virtio driver Sharon
@ 2019-04-04  7:31   ` Sharon
  2019-04-04 15:41     ` Stephen Hemminger
  0 siblings, 1 reply; 4+ messages in thread
From: Sharon @ 2019-04-04  7:31 UTC (permalink / raw)
  To: users

Hi,

With a DPDK-based application inside a GCP VM instance,
it is observed that while sending UDP packets of length 1300 bytes at
around a 6 Gbps rate, tx burst starts failing frequently.

On enabling virtio PMD logs, the following error is found:

PMD: virtio_xmit_pkts() tx: No free tx descriptors to transmit
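
For context, the transmit path does essentially the following (a minimal
sketch against the standard rte_eth_tx_burst() API, not the exact
application code; the function name and free-on-failure handling are
illustrative):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch of the transmit path. Note: port_id is uint8_t in DPDK 17.05;
 * later releases widened it to uint16_t. */
static void
send_burst(uint8_t port_id, uint16_t queue_id,
           struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        /* rte_eth_tx_burst() returns how many packets the PMD accepted;
         * with a full virtio tx ring it returns less than nb_pkts and
         * the PMD logs "tx: No free tx descriptors to transmit". */
        uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

        /* Unsent mbufs remain owned by the caller and must be freed
         * (or queued for a later retry) to avoid leaking the mempool. */
        for (uint16_t i = sent; i < nb_pkts; i++)
                rte_pktmbuf_free(pkts[i]);
}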

*VM configuration:*

Cores: Total 12 cores, 5 cores for the application
OS: CentOS 7.2.1511
Linux kernel: 3.10.0-327.13.1.el7.x86_64
DPDK version: 17.05

*Ring configuration in application:*

TX descriptor ring size: 4096
RX descriptor ring size: 1024
4 cores for Rx and Tx in a run-to-completion model.
Single port is used for Rx and Tx.
lcore-to-queue mapping is 1:1, i.e. 1 Rx queue and 1 Tx queue per lcore
(setup sketched below).
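
The queue setup follows the standard rte_eth_dev_configure() /
rte_eth_rx/tx_queue_setup() sequence with the ring sizes above (a sketch,
assuming a port_conf and mbuf_pool initialised elsewhere; error handling
shortened):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_QUEUES    4     /* 1 Rx + 1 Tx queue per lcore */
#define RX_RING_SIZE 1024  /* Rx descriptors per queue */
#define TX_RING_SIZE 4096  /* Tx descriptors per queue */

static int
setup_port(uint8_t port_id, const struct rte_eth_conf *port_conf,
           struct rte_mempool *mbuf_pool)
{
        if (rte_eth_dev_configure(port_id, NB_QUEUES, NB_QUEUES,
                                  port_conf) < 0)
                return -1;

        for (uint16_t q = 0; q < NB_QUEUES; q++) {
                if (rte_eth_rx_queue_setup(port_id, q, RX_RING_SIZE,
                                rte_eth_dev_socket_id(port_id),
                                NULL, mbuf_pool) < 0)
                        return -1;
                if (rte_eth_tx_queue_setup(port_id, q, TX_RING_SIZE,
                                rte_eth_dev_socket_id(port_id),
                                NULL) < 0)
                        return -1;
        }
        return rte_eth_dev_start(port_id);
}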

*ethtool -g eth0*
Ring parameters for eth0:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096

*lscpu*

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                12
On-line CPU(s) list:   0-11
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) CPU @ 2.00GHz
Stepping:              3
CPU MHz:               2000.168
BogoMIPS:              4000.33
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              56320K
NUMA node0 CPU(s):     0-11

*NIC stats are provided below:*

100330695 packets input, 141154589263 bytes
100310198 packets output, 136450396190 bytes
0 packets missed
0 erroneous packets received
0 multicast packets received
0 failed transmitted packets
0 No Mbufs


Stats reg 0 RX-packets: 23454090 RX-errors: 0 RX-bytes: 32948589715
Stats reg 1 RX-packets: 23954850 RX-errors: 0 RX-bytes: 33699067521
Stats reg 2 RX-packets: 26014919 RX-errors: 0 RX-bytes: 36614183404
Stats reg 3 RX-packets: 26906836 RX-errors: 0 RX-bytes: 37892748623


Stats reg 0 TX-packets: 23450529 TX-bytes: 31934107195
Stats reg 1 TX-packets: 23951340 TX-bytes: 32591951998
Stats reg 2 TX-packets: 26008018 TX-bytes: 35366748482
Stats reg 3 TX-packets: 26900311 TX-bytes: 36557588515

Increasing the number of cores for the application also does not help
in this case.

Kindly suggest what needs to be done to improve the performance and
avoid this issue.

Thanks & Regards,
Sharon T N



* Re: [dpdk-users] Tx burst getting failed with Virtio driver
  2019-04-04  7:31   ` Sharon
@ 2019-04-04 15:41     ` Stephen Hemminger
  2019-04-05  6:42       ` Sharon
  0 siblings, 1 reply; 4+ messages in thread
From: Stephen Hemminger @ 2019-04-04 15:41 UTC (permalink / raw)
  To: Sharon; +Cc: users

On Thu, 4 Apr 2019 13:01:16 +0530
Sharon <sharon.t@altencalsoftlabs.com> wrote:

> Hi,
> 
> With a DPDK-based application inside a GCP VM instance,
> it is observed that while sending UDP packets of length 1300 bytes at
> around a 6 Gbps rate, tx burst starts failing frequently.

Virtio (like all devices) can only transmit so fast.
If you transmit faster than the host can consume, the queue will get full;
this shows up in DPDK when all transmit descriptors are used.

6 Gbps is about 4.4 Mpps, and the upper bound on virtio is usually about
1 to 2 Mpps because of the overhead of host processing (vhost and the
Linux bridge).
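
One way for the application to cope with this is to retry the unsent
tail for a bounded number of attempts rather than drop it immediately
(an illustrative sketch, not code from this thread):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Retry the tail the host has not yet drained; drop whatever still
 * fails after a few attempts. Returns the number actually sent. */
static uint16_t
tx_with_retry(uint8_t port_id, uint16_t queue_id,
              struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        uint16_t sent = 0;

        for (int tries = 0; tries < 3 && sent < nb_pkts; tries++)
                sent += rte_eth_tx_burst(port_id, queue_id,
                                         pkts + sent, nb_pkts - sent);
        for (uint16_t i = sent; i < nb_pkts; i++)
                rte_pktmbuf_free(pkts[i]);
        return sent;
}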


* Re: [dpdk-users] Tx burst getting failed with Virtio driver
  2019-04-04 15:41     ` Stephen Hemminger
@ 2019-04-05  6:42       ` Sharon
  0 siblings, 0 replies; 4+ messages in thread
From: Sharon @ 2019-04-05  6:42 UTC (permalink / raw)
  To: Stephen Hemminger, users

Hi,

Regarding pps:

6 Gbps works out to less than 0.6 Mpps with a packet size of 1300 bytes.
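
The arithmetic, counting payload bits only and ignoring framing
overhead:

    6,000,000,000 bit/s / (1300 bytes * 8 bit/byte) ~= 576,923 pps
                                                    ~= 0.58 Mpps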

Also, in a GCP VM instance, virtio is able to process 10 Gbps easily
with 5-6 cores (without DPDK).

Kindly advise.

Thanks & Regards,
Sharon T N


On 04/04/19 9:11 PM, Stephen Hemminger wrote:
> On Thu, 4 Apr 2019 13:01:16 +0530
> Sharon <sharon.t@altencalsoftlabs.com> wrote:
>
>> Hi,
>>
>> With a DPDK-based application inside a GCP VM instance,
>> it is observed that while sending UDP packets of length 1300 bytes at
>> around a 6 Gbps rate, tx burst starts failing frequently.
> Virtio (like all devices) can only transmit so fast.
> If you transmit faster than the host can consume, the queue will get full;
> this shows up in DPDK when all transmit descriptors are used.
>
> 6 Gbps is about 4.4 Mpps, and the upper bound on virtio is usually about
> 1 to 2 Mpps because of the overhead of host processing (vhost and the
> Linux bridge).
>


* [dpdk-users] Tx burst getting failed with Virtio driver
       [not found] <9279c2f3-195c-0034-48e6-4246eee428a9@altencalsoftlabs.com>
@ 2019-04-04  7:25 ` Sharon
  2019-04-04  7:31   ` Sharon
  0 siblings, 1 reply; 4+ messages in thread
From: Sharon @ 2019-04-04  7:25 UTC (permalink / raw)
  To: users

Hi,


With a DPDK-based application inside a GCP VM instance,
it is observed that while sending UDP packets of length 1300 bytes at
around a 6 Gbps rate, tx burst starts failing frequently.

On enabling virtio PMD logs, the following error is found:

PMD: virtio_xmit_pkts() tx: No free tx descriptors to transmit

*VM configuration:*

Cores: Total 12 cores, 5 cores for the application
OS: CentOS 7.2.1511
Linux kernel: 3.10.0-327.13.1.el7.x86_64
DPDK version: 17.05

*Ring configuration in application:*

TX descriptor ring size: 4096
RX descriptor ring size: 1024
4 cores for Rx and Tx in a run-to-completion model.
Single port is used for Rx and Tx.
lcore-to-queue mapping is 1:1, i.e. 1 Rx queue and 1 Tx queue per lcore.

*ethtool -g eth0*
Ring parameters for eth0:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096

*lscpu*

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                12
On-line CPU(s) list:   0-11
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) CPU @ 2.00GHz
Stepping:              3
CPU MHz:               2000.168
BogoMIPS:              4000.33
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              56320K
NUMA node0 CPU(s):     0-11

*NIC stats are provided below:*

100330695 packets input, 141154589263 bytes
100310198 packets output, 136450396190 bytes
0 packets missed
0 erroneous packets received
0 multicast packets received
0 failed transmitted packets
0 No Mbufs


Stats reg 0 RX-packets: 23454090 RX-errors: 0 RX-bytes: 32948589715
Stats reg 1 RX-packets: 23954850 RX-errors: 0 RX-bytes: 33699067521
Stats reg 2 RX-packets: 26014919 RX-errors: 0 RX-bytes: 36614183404
Stats reg 3 RX-packets: 26906836 RX-errors: 0 RX-bytes: 37892748623


Stats reg 0 TX-packets: 23450529 TX-bytes: 31934107195
Stats reg 1 TX-packets: 23951340 TX-bytes: 32591951998
Stats reg 2 TX-packets: 26008018 TX-bytes: 35366748482
Stats reg 3 TX-packets: 26900311 TX-bytes: 36557588515

Increasing the number of cores for the application also does not help
in this case.

Kindly suggest what needs to be done to improve the performance and
avoid this issue.

Thanks & Regards,
Sharon T N


