* [dpdk-dev] RTE pipeline table lookup miss with > 1 core
@ 2020-10-23 20:52 Alipour, Mehrdad
0 siblings, 0 replies; only message in thread
From: Alipour, Mehrdad @ 2020-10-23 20:52 UTC (permalink / raw)
To: dev
Hi,
I am testing a pipeline application with two or more cores using dpdk 19.05.
The application consists of:
Core1: forever get packets from an Ethernet IF (using rte_eth_rx_burst)
Inspect packet headers (EtherType, UDP dest_port, etc.) to determine the application (say app_1, app_2, etc.)
Forward the packet to the rte_ring of app_i (call it app_i_ring)
Core2: Specialized for app_1 processing; has an RX rte_ring (call it app_1_ring) and an app_1 pipeline consisting of a few Hash/Array tables
Core3: Specialized for app_2 processing; has an RX rte_ring (call it app_2_ring) and an app_2 pipeline consisting of a few Hash/Array tables
When I run this application with cores 1-3, it works fine without any table misses.
When I add a second app_1 or app_2 core (for instance, adding core4 running app_1), I see about a 0.05% miss rate on app_1's hash tables.
The only difference between the three-core and four-core configurations is that app_1 then has two cores simultaneously running its pipeline instance and doing lookups on the same set of tables.
Please note that I have logged the packets that missed, along with the lookup key carried in the metadata, and the keys are correct when the miss happens.
Is there any reason for these table misses? Am I missing something?
Thanks,
Mehrdad
malipour@ciena.com