DPDK usage discussions
* poor performance with DPDK on a gcloud VM
@ 2022-06-21 16:29 Sylvain Vasseur
  2022-06-21 16:52 ` Stephen Hemminger
  0 siblings, 1 reply; 2+ messages in thread
From: Sylvain Vasseur @ 2022-06-21 16:29 UTC (permalink / raw)
  To: users


Hello,

Has anyone managed to use DPDK on a GCP VM with good performance? I have
been trying this configuration lately, I am getting awful bandwidth
results, and I have no idea what I could be doing wrong.

I use the VirtIO interfaces on my VMs and was able to bind them with both
the vfio-pci and igb_uio drivers. But I can only get ~350 Mbps (yes,
bits!) with either one when trying to transmit data with a very basic
testpmd run:
dpdk-testpmd -a 0000:00:05.0 -- --forward-mode=txonly --stats-period 1
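
(In case the single default queue pair is the limit, a multi-queue variant
of the same run would look roughly like this; the core list, queue counts
and ring sizes below are only placeholder values, not settings validated on
GCP, and they assume the VirtIO device exposes several queue pairs:)

dpdk-testpmd -l 0-4 -a 0000:00:05.0 -- --forward-mode=txonly --stats-period 1 \
    --nb-cores=4 --txq=4 --rxq=4 --txd=1024 --rxd=1024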

Any non-DPDK use of the network interface gives me far better results
(GBps!).

Device:
00:05.0 Ethernet controller: Red Hat, Inc. Virtio network device

Bind status:
Network devices using DPDK-compatible driver
============================================
0000:00:05.0 'Virtio network device 1000' drv=igb_uio unused=vfio-pci
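
(For reference, the bind above was done with the standard dpdk-devbind.py
script, roughly along these lines; the vfio step is only relevant when
using vfio-pci on a VM without an IOMMU, and igb_uio has to be built
separately from the dpdk-kmods repository:)

modprobe uio && insmod igb_uio.ko        # igb_uio case
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode   # vfio-pci case, no IOMMU
dpdk-devbind.py --bind=igb_uio 0000:00:05.0
dpdk-devbind.py --status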

I would bet I am doing something wrong, either with the PMD choice or with
some setting, but I can't figure out what. Has anyone managed to get good
performance, or does anyone have experience using DPDK on a Google Cloud
VM?

Thanks in advance
Sylvain



* Re: poor performance with DPDK on a gcloud VM
  2022-06-21 16:29 poor performance with DPDK on a gcloud VM Sylvain Vasseur
@ 2022-06-21 16:52 ` Stephen Hemminger
  0 siblings, 0 replies; 2+ messages in thread
From: Stephen Hemminger @ 2022-06-21 16:52 UTC (permalink / raw)
  To: Sylvain Vasseur; +Cc: users

On Tue, 21 Jun 2022 17:29:38 +0100
Sylvain Vasseur <remmog@gmail.com> wrote:

> Hello,
> 
> Has anyone managed to use DPDK on a GCP VM with good performance? I have
> been trying this configuration lately, I am getting awful bandwidth
> results, and I have no idea what I could be doing wrong.
> 
> I use the VirtIO interfaces on my VMs and was able to bind them with both
> the vfio-pci and igb_uio drivers. But I can only get ~350 Mbps (yes,
> bits!) with either one when trying to transmit data with a very basic
> testpmd run:
> dpdk-testpmd -a 0000:00:05.0 -- --forward-mode=txonly --stats-period 1
> 
> Any non-DPDK use of the network interface gives me far better results
> (GBps!).
> 
> Device:
> 00:05.0 Ethernet controller: Red Hat, Inc. Virtio network device
> 
> Bind status:
> Network devices using DPDK-compatible driver
> ============================================
> 0000:00:05.0 'Virtio network device 1000' drv=igb_uio unused=vfio-pci
> 
> I would bet I am doing something wrong, either with the PMD choice or with
> some setting, but I can't figure out what. Has anyone managed to get good
> performance, or does anyone have experience using DPDK on a Google Cloud
> VM?
> 
> Thanks in advance
> Sylvain

Check the negotiation of virtio features and checksum offload bits.
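
One way to see both (a sketch, not verified on this exact setup; the log
glob syntax can vary between DPDK versions) is to enable virtio PMD debug
logging, which should print the negotiated feature bits during
initialization, and then dump the offload state from the interactive
testpmd prompt:

dpdk-testpmd -a 0000:00:05.0 --log-level='pmd.net.virtio.*:debug' -- -i
testpmd> show port info 0
testpmd> show port 0 tx_offload capabilities
testpmd> show port 0 tx_offload configuration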

