Sorry, here is the right interface:
 
Network devices using DPDK-compatible driver
============================================
0000:09:00.0 'Ethernet Connection X553 10 GbE SFP+ 15c4' drv=igb_uio unused=ixgbe,vfio-pci
 
 
26.12.2022, 22:21, "Ruslan R. Laishev" <zator@yandex.ru>:
Here is what I do on the transmitter side; maybe you will find a minute to look at this pupil's code: https://pastebin.com/1WMyXtr5
 
I spent some time with testpmd; sorry, but there is no way to get rate information on the sending side. Maybe I'll add it into the code ...
Some statistics (rte_eth_stats):
 
Date;Time;Device;Port;Name;Area;In pkts;Out pkts;In bytes;Out bytes;In missed;In errors;Out errors;No mbufs;
(payload is 0 octets)
26-12-2022; 22:14:10; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 21122753; 0; 1563085750; 0; 0; 0; 0
26-12-2022; 22:14:20; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 21122392; 0; 1563057008; 0; 0; 0; 0
26-12-2022; 22:14:30; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 21121978; 0; 1563024500; 0; 0; 0; 0
26-12-2022; 22:14:40; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 21122012; 0; 1563028888; 0; 0; 0; 0
 
(payload is 1024 octets)
26-12-2022; 22:15:20; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 5246799; 0; 4648659464; 0; 0; 0; 0
26-12-2022; 22:15:30; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 5246456; 0; 4648360016; 0; 0; 0; 0
26-12-2022; 22:15:40; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 5246168; 0; 4648108408; 0; 0; 0; 0
26-12-2022; 22:15:50; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 5246143; 0; 4648084478; 0; 0; 0; 0
26-12-2022; 22:16:00; 0000:09:00.0; _PEA00:; WAN0; WAN; 0; 5246129; 0; 4648070294; 0; 0; 0; 0
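
Per the tables, the rates work out to about 2.11 Mpps / 1.25 Gbps with the 0-octet payload and about 0.52 Mpps / 3.72 Gbps with 1024 octets. For reference, a minimal sketch of deriving such rates from rte_eth_stats deltas (port_id and the 10-second interval are assumed here, not taken from the app):

    /* Sketch: derive TX pps/bps from rte_eth_stats deltas (assumed names). */
    #include <stdio.h>
    #include <unistd.h>
    #include <rte_ethdev.h>

    #define INTERVAL 10   /* seconds, matching the tables above */

    static void report_tx_rate(uint16_t port_id)
    {
        struct rte_eth_stats prev, cur;

        rte_eth_stats_get(port_id, &prev);
        for (;;) {
            sleep(INTERVAL);
            rte_eth_stats_get(port_id, &cur);
            printf("TX: %.2f Mpps, %.2f Gbps\n",
                   (cur.opackets - prev.opackets) / (INTERVAL * 1e6),
                   (cur.obytes - prev.obytes) * 8.0 / (INTERVAL * 1e9));
            prev = cur;
        }
    }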
 
A piece of the dpdk-devbind output:
0000:02:00.0 'I211 Gigabit Network Connection 1539' if=enp2s0 drv=igb unused=igb_uio,vfio-pci *Active*
 
26.12.2022, 16:22, "Ruslan R. Laishev" <zator@yandex.ru>:
Thanks for the answer.
 
Oops, sorry, some details:
- one core runs the generator routine
- one core runs a routine to save/display statistics
 
The generator core runs a routine like:
 
while (1) {
    get a buffer from the pool
    make the eth+ip+udp headers (static content)
    generate the payload: memset(packet.payload, 'A' + something, payload_size);
    generate a packet sequence number and CRC32C - add them to the payload part
    "send" the packet to tx_buffer

    if (tx_buffer.length == tx_buffer.size)
        flush()
}
 
"header; part of the packet : sizeof(eth+ip+udp) -
"payload" part - 20-1024 octets
 
RSS - that is on the receive side, yes?
 
testpmd - I have not tried it yet; I will.
 
 
26.12.2022, 16:07, "Dmitry Kozlyuk" <dmitry.kozliuk@gmail.com>:

Hi,

2022-12-26 15:20 (UTC+0300), Ruslan R. Laishev:

 I am studying programming with the DPDK SDK, so I wrote a small app to send/receive packets. Now I am testing it and see the following situation:
 iperf3 shows 9.4-9.7 Gbps on TCP
  
 my app can *send* only at 4+ Gbps (I see the counters in rte_eth_stats). I have tried to speed up my app by:
 - using more than one TX queue (the device claims to support 64) - see the setup sketch below
 - increasing the burst size from 32 up to 128
 - turning off all offloads related to checksumming
  
 No effect.
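
For reference, a sketch of what the multi-queue TX setup looks like (queue and descriptor counts are illustrative, not from the app; the RX side and error handling are trimmed). Note that a DPDK TX queue is not thread-safe, so each queue must be driven from its own lcore:

    #include <rte_ethdev.h>

    #define NB_TXQ  4        /* illustrative; one lcore per queue */
    #define TX_DESC 1024     /* illustrative descriptor count */

    static int setup_tx(uint16_t port_id)
    {
        struct rte_eth_conf conf = {0};
        int ret = rte_eth_dev_configure(port_id, 0 /* no RX queues */, NB_TXQ, &conf);
        if (ret != 0)
            return ret;

        for (uint16_t q = 0; q < NB_TXQ; q++) {
            ret = rte_eth_tx_queue_setup(port_id, q, TX_DESC,
                                         rte_eth_dev_socket_id(port_id), NULL);
            if (ret != 0)
                return ret;
        }
        return rte_eth_dev_start(port_id);
    }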


Please tell more about what your app does and how (w.r.t. DPDK usage).
Are you sure that all cores are loaded? E.g. if you send identical packets,
RSS can steer them all to a single queue and thus a single core.
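
For example, varying the UDP source port per packet is usually enough to spread the flows across RSS queues; a rough sketch (the offsets assume an untagged eth/ipv4/udp frame, with the UDP checksum left at 0, which IPv4 permits):

    #include <rte_mbuf.h>
    #include <rte_udp.h>
    #include <rte_byteorder.h>

    /* Sketch: rewrite the UDP source port so receive-side RSS
     * hashes successive packets to different queues. */
    static void vary_flow(struct rte_mbuf *m, uint32_t seq)
    {
        struct rte_udp_hdr *udp = rte_pktmbuf_mtod_offset(m,
                struct rte_udp_hdr *, 14 + 20);
        udp->src_port = rte_cpu_to_be_16(1024 + (seq & 0x3ff));
    }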

What performance do you see using testpmd with txonly/rxonly forward mode,
if applicable?

What is the packet performance, i.e. Mpps, not Gbps, and packet size?
Unless you do TCP payload processing (or compute large payload checksums),
packets per second usually matter rather than bits per second.
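
For example, a 1024-octet UDP payload makes a 14 + 20 + 8 + 1024 + 4 = 1070-octet frame, i.e. 1090 octets on the wire with preamble and inter-frame gap, so 10 Gbps corresponds to only about 1.15 Mpps; minimal 64-octet frames need 14.88 Mpps for the same line rate.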

 
 
--- 
Best regards,
Ruslan R. Laishev
OpenVMS bigot, natural born system/network progger, C contractor.
+79013163222
+79910009922