* High packet capturing rate in DPDK enabled port
@ 2024-05-05 7:09 Fuji Nafiul
From: Fuji Nafiul @ 2024-05-05 7:09 UTC (permalink / raw)
To: users, dev
I have a DPDK-enabled port on a Linux server that handles around 5,000-50,000
concurrent calls, with packet sizes of 80 to 200 bytes. At peak I therefore
need a combined packet-capture and file-writing rate of about 1 GByte/s
(8 Gbit/s); at least 0.5 GByte/s is expected. The documentation for DPDK's
official capture tool, "dpdk-dumpcap", says it can sustain only about
10 MByte/s, which is far below what I need. I implemented simple packet-capture
and pcap-writing code that could dump traffic for around 5,000-7,000 concurrent
calls using 1 core and a single ring of size 4096. This was integrated directly
into the actual media code: I didn't use librte_pdump, but simply copied
packets to a separate rte_ring after receiving them with rte_eth_rx_burst()
and before sending them with rte_eth_tx_burst(). I know I can scale this out
with multiple cores and multiple rings, but is there an existing project that
already does this?

I found a third-party project named "dpdkcap" which claims to support up to
10 Gbit/s. Has anyone used it, and what is your review?

Or should I modify the "dpdk-dumpcap" project to add multi-core and multi-ring
support myself, to extend its capacity?

Thanks in advance
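[The "copy to a separate rte_ring between rx_burst and tx_burst" approach described above can be modeled without DPDK. The sketch below is a simplified, hypothetical single-producer/single-consumer ring illustrating only the drop-on-full policy that keeps the datapath from ever blocking on the capture writer; a real implementation would use rte_ring_enqueue()/rte_ring_dequeue() and rte_pktmbuf_clone() instead, and the names cap_ring_push/cap_ring_pop are invented for this example.]

```c
#include <stddef.h>

/* Hypothetical model of the capture ring: the datapath core pushes
 * packet copies in, a separate writer core drains them.  The key
 * design point is drop-on-full: the forwarding path must never stall
 * waiting for the capture writer. */
#define RING_SIZE 4096          /* same ring size the poster used */

struct cap_ring {
    void *slots[RING_SIZE];
    size_t head, tail;          /* head: next write, tail: next read */
};

/* returns 1 on success, 0 when the ring is full (capture copy dropped) */
static int cap_ring_push(struct cap_ring *r, void *pkt)
{
    size_t next = (r->head + 1) % RING_SIZE;
    if (next == r->tail)
        return 0;               /* full: drop rather than block tx */
    r->slots[r->head] = pkt;
    r->head = next;
    return 1;
}

static void *cap_ring_pop(struct cap_ring *r)
{
    if (r->tail == r->head)
        return NULL;            /* empty */
    void *pkt = r->slots[r->tail];
    r->tail = (r->tail + 1) % RING_SIZE;
    return pkt;
}
```

[Scaling out, as the poster suggests, would mean one such ring per capture core, with rx queues hashed across them.]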
* Re: High packet capturing rate in DPDK enabled port
From: Stephen Hemminger @ 2024-05-05 16:02 UTC (permalink / raw)
To: Fuji Nafiul; +Cc: users, dev
On Sun, 5 May 2024 13:09:42 +0600
Fuji Nafiul <nafiul.fuji@gmail.com> wrote:
> I have a DPDK-enabled port on a Linux server that handles around
> 5,000-50,000 concurrent calls, with packet sizes of 80 to 200 bytes. At
> peak I therefore need a combined packet-capture and file-writing rate of
> about 1 GByte/s (8 Gbit/s); at least 0.5 GByte/s is expected. The
> documentation for DPDK's official capture tool, "dpdk-dumpcap", says it
> can sustain only about 10 MByte/s, which is far below what I need. I
> implemented simple packet-capture and pcap-writing code that could dump
> traffic for around 5,000-7,000 concurrent calls using 1 core and a single
> ring of size 4096. This was integrated directly into the actual media
> code: I didn't use librte_pdump, but simply copied packets to a separate
> rte_ring after receiving them with rte_eth_rx_burst() and before sending
> them with rte_eth_tx_burst(). I know I can scale this out with multiple
> cores and multiple rings, but is there an existing project that already
> does this?
>
> I found a third-party project named "dpdkcap" which claims to support up
> to 10 Gbit/s. Has anyone used it, and what is your review?
>
> Or should I modify the "dpdk-dumpcap" project to add multi-core and
> multi-ring support myself, to extend its capacity?
> Thanks in advance
The limitation on high-speed packet capture is more about the speed of writing
to disk. Doing a single write per packet is part of the problem; getting higher
performance requires a faster SSD and using the io_uring API.
I do not believe that dpdkcap really supports writing at 10 Gbit/s, only
that it can capture from a 10 Gbit/s device.
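[A sketch of what avoiding one write per packet means in practice: accumulate pcap records in a memory buffer and flush with one write per batch. This uses plain buffered stdio for portability; the helper names and the 256 KB batch size are illustrative, and a real high-rate capturer could go further by submitting the batch writes asynchronously via io_uring (liburing).]

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal pcap batching sketch: records are appended to an in-memory
 * buffer and flushed with one fwrite() per batch instead of one write
 * per packet.  Structures follow the classic pcap file format:
 * a 24-byte global header, then a 16-byte header per packet. */
struct pcap_file_hdr {
    uint32_t magic;              /* 0xa1b2c3d4 */
    uint16_t major, minor;       /* version 2.4 */
    int32_t  thiszone;
    uint32_t sigfigs, snaplen, linktype;
};

struct pcap_rec_hdr {
    uint32_t ts_sec, ts_usec, incl_len, orig_len;
};

#define BATCH_BYTES (256 * 1024)          /* illustrative batch size */
static unsigned char batch[BATCH_BYTES];
static size_t batch_used;

static void flush_batch(FILE *f)
{
    if (batch_used) {
        fwrite(batch, 1, batch_used, f);  /* one write for the whole batch */
        batch_used = 0;
    }
}

static void append_pkt(FILE *f, const void *data, uint32_t len,
                       uint32_t sec, uint32_t usec)
{
    if (batch_used + sizeof(struct pcap_rec_hdr) + len > BATCH_BYTES)
        flush_batch(f);                   /* batch full: flush first */
    struct pcap_rec_hdr h = { sec, usec, len, len };
    memcpy(batch + batch_used, &h, sizeof h);
    batch_used += sizeof h;
    memcpy(batch + batch_used, data, len);
    batch_used += len;
}

static void write_file_hdr(FILE *f)
{
    struct pcap_file_hdr h = { 0xa1b2c3d4, 2, 4, 0, 0,
                               65535, 1 /* LINKTYPE_ETHERNET */ };
    fwrite(&h, sizeof h, 1, f);
}
```

[The batching amortizes per-write overhead across many packets; io_uring would additionally overlap the disk writes with capture instead of blocking on each flush.]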