DPDK usage discussions
* dpdk-l3fwd RX offload capabilities issue inside VM
@ 2023-07-10  8:12 Yurii Skrypka
From: Yurii Skrypka @ 2023-07-10  8:12 UTC (permalink / raw)
  To: users; +Cc: Justas Poderys, Maksym Kovaliov


Hi, team



We have a problem running the dpdk-l3fwd application inside a VM.
The tool cannot be started inside the VM; it fails with the following error:

dpdk-l3fwd -l 1 -- -p 0x3 --config="(0,0,1),(1,0,1)" --parse-ptype
EAL: Detected CPU lcores: 9
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:04.0 (socket -1)
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
soft parse-ptype is enabled
Neither ACL, LPM, EM, or FIB selected, defaulting to LPM
L3FWD: Missing 1 or more rule files, using default instead
Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1... Port 0 modified RSS hash function based on hardware support, requested:0xa38c configured:0
Ethdev port_id=0 requested Rx offloads 0xe doesn't match Rx offloads capabilities 0x2201 in rte_eth_dev_configure()
EAL: Error - exiting with code: 1
  Cause: Cannot configure device: err=-22, port=0
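
For context on the message itself: rte_eth_dev_configure() rejects the configuration when the offloads requested in port_conf.rxmode.offloads are not a subset of the dev_info.rx_offload_capa reported by the driver. Below is a minimal sketch of how an application can clamp its requested RX offloads to what the port advertises before configuring it; this is our illustration against the DPDK 22.11 ethdev API, not the actual l3fwd code:

#include <rte_ethdev.h>

/* Illustration only: request checksum RX offloads (the 0xe value seen
 * above), but drop any bits the port does not advertise instead of
 * failing later in rte_eth_dev_configure(). */
static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf port_conf = {
                .rxmode = { .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM },
        };
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;

        /* Keep only the offloads this port actually supports. */
        port_conf.rxmode.offloads &= dev_info.rx_offload_capa;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}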



Could you please advise on what has been done wrong in the configuration?
Please find the details of the environment preparation below.



Additional note: on the same VM, with the same configuration, testpmd runs successfully and traffic is offloaded without any issues.
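
Each bit in the two masks from the error corresponds to one RTE_ETH_RX_OFFLOAD_* flag, and the ethdev API provides rte_eth_dev_rx_offload_name() to translate a single bit into its name. A small sketch (hypothetical helper, not part of l3fwd or testpmd) that prints the flags set in such a mask, assuming DPDK 22.11:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: list the RX offload flags contained in a mask,
 * e.g. the "requested 0xe" and "capabilities 0x2201" values from the
 * error above. */
static void
print_rx_offloads(const char *label, uint64_t mask)
{
        uint64_t bit;

        printf("%s (0x%" PRIx64 "):\n", label, mask);
        for (bit = 1; bit != 0; bit <<= 1)
                if (mask & bit)
                        printf("  %s\n", rte_eth_dev_rx_offload_name(bit));
}

Comparing the output for the requested mask and the capability mask shows exactly which requested offloads the virtio port does not report.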



Env details:

  1.  We use a VM image based on ‘RedHat 8.8’ with DPDK 22.11.2 LTS. We built DPDK inside the VM with the following commands:


meson --buildtype=debug -Dexamples=all -Dplatform=generic x86_64-native-linuxapp-gcc

ninja -C x86_64-native-linuxapp-gcc


  2.  Additionally, we have configured 1G huge pages.



  3.  We have used the following command to start the VM:



taskset -c 3,5,7,9,15,17,19,21,23 /usr/libexec/qemu-kvm -enable-kvm \
-cpu host -m 8192 \
-object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -smp 9 \
-chardev socket,id=char0,path=/usr/local/var/run/stdvio5,server=on \
-netdev type=vhost-user,id=mynet0,chardev=char0,vhostforce=on,queues=1 \
-device virtio-net-pci,packed=on,mq=on,vectors=4,rx_queue_size=1024,tx_queue_size=1024,netdev=mynet0,mac=52:54:00:00:0a:01,mrg_rxbuf=on \
-chardev socket,id=char1,path=/usr/local/var/run/stdvio6,server=on \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce=on,queues=1 \
-device virtio-net-pci,packed=on,mq=on,vectors=4,rx_queue_size=1024,tx_queue_size=1024,netdev=mynet1,mac=52:54:00:00:0a:02,mrg_rxbuf=on \
-net user,hostfwd=tcp::10021-:22 \
-net nic,macaddr=52:54:00:00:0a:01 \
-nographic /tmp/vm1.qcow2





  4.  Once the VM is started, the following configuration is applied inside the VM:


modprobe uio_pci_generic
sleep 3
/root/dpdk/usertools/dpdk-devbind.py --bind=uio_pci_generic 00:04.0
/root/dpdk/usertools/dpdk-devbind.py --bind=uio_pci_generic 00:05.0


----------


/root/dpdk/usertools/dpdk-hugepages.py --show

Node Pages Size Total
0    5     1Gb    5Gb

Hugepages mounted on /dev/hugepages


----------


lspci

00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
00:04.0 Ethernet controller: Red Hat, Inc. Virtio network device


----------



/root/dpdk/usertools/dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
0000:00:04.0 'Virtio network device 1000' drv=uio_pci_generic unused=
0000:00:05.0 'Virtio network device 1000' drv=uio_pci_generic unused=

Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller 100e' if=eth2 drv=e1000 unused=uio_pci_generic *Active*



----------



/root/dpdk/x86_64-native-linuxapp-gcc/examples/dpdk-ethtool

EAL: Detected CPU lcores: 9
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:04.0 (socket -1)
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:00:05.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Number of NICs: 2
Init port 0..
Init port 1..
EthApp> drvinfo
firmware version get error: (Operation not supported)
Port 0 driver: net_virtio (ver: DPDK 22.11.2)
firmware-version:
bus-info: 0000:00:04.0
firmware version get error: (Operation not supported)
Port 1 driver: net_virtio (ver: DPDK 22.11.2)
firmware-version:
bus-info: 0000:00:05.0


  5.  Host configuration:
     *  Red Hat Enterprise Linux 8.7 (Ootpa), Linux 4.18.0-425.19.2.el8_7.x86_64
     *  QEMU emulator version 6.2.0 (qemu-kvm-6.2.0-22.module+el8.7.0+18170+646069c1.2)
     *  OVS 2.17.2
     *  DPDK 21.11.1


Thank you.


With best regards,

Yurii


