DPDK usage discussions
* Downstream PMD running on same core, leading to horrible performance
@ 2023-11-02  8:12 Nicolson Ken (ニコルソン ケン)
  0 siblings, 0 replies; only message in thread
From: Nicolson Ken (ニコルソン ケン) @ 2023-11-02  8:12 UTC (permalink / raw)
  To: users

Hi everyone,

I've got a stand-alone executable that auto-loads PMDs with the usual rte_eal_init() call, but the performance is horrible at higher speeds. It looks like my call to rte_eth_tx_burst() is being handled by a different thread that runs on the same core, so I end up losing lots of packets as the downstream RX buffer fills up faster than it can be emptied.
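
Is the first step simply to hand EAL an explicit core list, something like the sketch below? (The "-l 0-1" values are just a guess; I'd obviously pick whichever cores are actually free on the box.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    int main(int argc, char **argv)
    {
        (void)argc;
        /* Guess: ask EAL for two lcores explicitly; the real core numbers
         * behind "-l 0-1" depend on what is free on the machine. */
        char *eal_argv[] = { argv[0], "-l", "0-1" };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        if (rte_eal_init(eal_argc, eal_argv) < 0)
            rte_exit(EXIT_FAILURE, "Cannot init EAL\n");

        printf("lcores available: %u\n", rte_lcore_count());
        return 0;
    }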

rte_lcore_count() reports just one core, so is there some way to get either my main code or the downstream PMD (vhost, by the way) to run on a separate core?
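
Assuming EAL does end up with a second lcore, is rte_eal_remote_launch() the right way to push the TX path onto a worker core while the main lcore keeps feeding it? Something along these lines, carrying on from the init sketch above (tx_loop is just a stand-in for my real send function):

    /* Stand-in for my real TX function: pulls ROS2 messages, copies them
     * into mbufs and calls rte_eth_tx_burst() in a loop. */
    static int tx_loop(void *arg)
    {
        (void)arg;
        /* ... burst TX here ... */
        return 0;
    }

    /* In main(), after rte_eal_init(): pick the first worker lcore
     * (i.e. not the main one) and run tx_loop there. */
    unsigned int worker = rte_get_next_lcore(-1, 1, 0);
    if (worker == RTE_MAX_LCORE)
        rte_exit(EXIT_FAILURE, "No worker lcore available\n");
    rte_eal_remote_launch(tx_loop, NULL, worker);

    /* Main lcore carries on with its own work, then waits for the worker. */
    rte_eal_wait_lcore(worker);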

I'm sending iperf3-generated TCP traffic wrapped as ROS2 messages, so my code just copies the message bytes from ROS2 format into mbufs. But at about 10 Gbps with 1400-byte packets I'm losing roughly 10,000 packets per second through rte_eth_tx_burst() errors! If I comment out that function call, the code can keep up, so I don't believe the upstream side is the bottleneck.
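
Related question: right now anything rte_eth_tx_burst() doesn't accept just gets counted as an error and lost. Should I be re-offering the leftover mbufs instead, roughly like this? (Port 0 / queue 0, mbufs and nb_pkts are placeholders for my real values.)

    /* Keep offering the unsent tail back to the PMD instead of dropping it.
     * (This can spin if the vhost ring never drains, so a retry limit is
     * probably needed in real code.) */
    uint16_t nb_sent = 0;
    while (nb_sent < nb_pkts) {
        nb_sent += rte_eth_tx_burst(0 /* port */, 0 /* queue */,
                                    &mbufs[nb_sent], nb_pkts - nb_sent);
    }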

Any hints on how to fix this would be helpful.

Thanks,
Ken




