* Guest OS dropping packets with RedHat loopback example and 65535 byte packets
From: Nicolson Ken (ニコルソン ケン) @ 2024-01-19 3:42 UTC
To: users
Hi all,
I'm using DPDK 22.11.2 on CentOS.
I've configured my Guest OS according to the Red Hat tutorial here:
https://www.redhat.com/en/blog/hands-vhost-user-warm-welcome-dpdk
Using the command line:
root@guest $ testpmd -l 0,1,2 --socket-mem 1024 -n 4 \
--proc-type auto --file-prefix pg -- \
--portmask=3 --forward-mode=macswap --port-topology=chained \
--disable-rss -i --rxq=1 --txq=1 \
--rxd=256 --txd=256 --nb-cores=2 --auto-start
On the Host, I have my own code that sends fixed-size mbufs. If I use 9000-byte packets (i.e. 5 mbufs in a chain) at 15 Gbps (about 200,000 packets per second), everything runs smoothly, but if I go up to 65535-byte packets (32 mbufs in a chain) at the same 15 Gbps (almost 29,000 packets per second) I start losing packets. A burst of 100 packets, for instance, will see all 100 received but only 7 forwarded.
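For reference, here is a minimal sketch of how my sender builds one chained packet. The pool name and the 2048-byte segment size are illustrative assumptions, not my exact code, but the chaining uses the standard rte_mbuf API:

/* Sketch: build one pkt_len-byte packet as a chain of SEG_SIZE-byte
 * mbufs (65535 bytes -> 32 segments at 2048 bytes each). */
#include <rte_common.h>
#include <rte_mbuf.h>

#define SEG_SIZE 2048u

static struct rte_mbuf *
build_chained_pkt(struct rte_mempool *pool, uint32_t pkt_len)
{
    struct rte_mbuf *head = rte_pktmbuf_alloc(pool);
    uint32_t seg;

    if (head == NULL)
        return NULL;
    seg = RTE_MIN(pkt_len, SEG_SIZE);
    head->data_len = seg;
    head->pkt_len = seg;
    pkt_len -= seg;

    while (pkt_len > 0) {
        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);

        if (m == NULL)
            goto fail;
        seg = RTE_MIN(pkt_len, SEG_SIZE);
        m->data_len = seg;
        m->pkt_len = seg;
        pkt_len -= seg;
        /* rte_pktmbuf_chain() updates head->nb_segs and
         * head->pkt_len for us */
        if (rte_pktmbuf_chain(head, m) != 0) {
            rte_pktmbuf_free(m);
            goto fail;
        }
    }
    return head;

fail:
    rte_pktmbuf_free(head); /* frees the whole chain */
    return NULL;
}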
If I use the following with the same 100-packet burst:
testpmd> set verbose 3
I see 14 pairs of Rx and Tx packets.
testpmd> set verbose 2
Gives 18 Rx packets and 14 Tx. "show port stats all" shows no Rx or Tx errors in either the Guest or the Host.
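In case there are more detailed counters I'm missing, these are the other stats commands I know of (standard testpmd CLI, nothing custom):
testpmd> show port xstats all
testpmd> show fwd stats all
testpmd> show port info all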
There's something very weird going on, but I haven't a clue where to start looking!
Has anyone got any hints for where to look or how to get more debug info? Common sense would suggest that larger packets at the same bit rate, i.e. far fewer packets per second, should be easier to handle, not harder.
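One back-of-the-envelope number, in case it means anything: with --rxd=256 and 32-mbuf chains, the Rx ring would only cover 256 / 32 = 8 full packets if every segment consumes one descriptor (I don't know whether that actually holds over vhost-user), which is suspiciously close to the 7 packets that get forwarded. If ring sizing is worth ruling out, I can rerun with larger rings, e.g.:
root@guest $ testpmd -l 0,1,2 --socket-mem 1024 -n 4 \
--proc-type auto --file-prefix pg -- \
--portmask=3 --forward-mode=macswap --port-topology=chained \
--disable-rss -i --rxq=1 --txq=1 \
--rxd=1024 --txd=1024 --nb-cores=2 --auto-start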
Thanks,
Ken