* Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay @ 2025-07-18 5:10 Gokul K R (MS/ETA7-ETAS) 2025-07-18 11:58 ` Ivan Malov 0 siblings, 1 reply; 8+ messages in thread From: Gokul K R (MS/ETA7-ETAS) @ 2025-07-18 5:10 UTC (permalink / raw) To: users; +Cc: dev [-- Attachment #1: Type: text/plain, Size: 1715 bytes --] Hi Team, I’m currently working with the dpdk-burst-replay tool and encountered an issue during execution. Below are the details: ________________________________ Observation: During replay, we received the following informational message: port 0 is not on the good numa id (-1) As per the DPDK mailing list discussions, this warning is typically benign—often seen on NICs like Intel I225/I210, which do not report NUMA affinity. Hence, we proceeded with execution. ________________________________ Command Used: sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay Original_MAC.pcap 0000:01:00.1 Execution Output: preloading Original_MAC.pcap file (of size: 143959 bytes) file read at 100.00% read 675 pkts (for a total of 143959 bytes). max packet length = 1518 bytes. -> Needed MBUF size: 1648 -> Needed number of MBUFs: 675 -> Needed Memory = 1.061 MB -> Needed Hugepages of 1 GB = 1 -> Needed CPUs: 1 -> Create mempool of 675 mbufs of 1648 octets. -> Will cache 675 pkts on 1 caches. ________________________________ Issue: Despite successful parsing of the pcap file and proper initialization, no frames were transmitted or received on either the sender or receiver sides. ________________________________ Environment Details: * NIC used: Intel I225/I210 * Hugepages configured: 1 GB * NUMA binding: --cpunodebind=0 --membind=0 * OS: [Your Linux distribution, e.g., Ubuntu 20.04] * DPDK version: [Mention if known] ________________________________ Could you please advise if any additional setup, configuration, or known limitations may be impacting the packet transmission? Thank you in advance for your support! Best regards, Gokul K R [-- Attachment #2: Type: text/html, Size: 11541 bytes --] ^ permalink raw reply [flat|nested] 8+ messages in thread
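As an illustrative aside (not code from the thread or from dpdk-burst-replay): a message of this form typically comes from comparing the port's reported socket with the NUMA node chosen for the run, where -1 only means the NIC did not report an affinity. A hypothetical sketch of such a check:

    /* Hypothetical sketch, not dpdk-burst-replay's actual code: a warning of
     * the form "port X is not on the good numa id (-1)" can result from
     * comparing the port's reported socket with the NUMA node chosen for the
     * run; rte_eth_dev_socket_id() returns -1 when no affinity is reported. */
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void warn_if_wrong_numa(uint16_t port_id, int wanted_numa)
    {
        int port_socket = rte_eth_dev_socket_id(port_id);

        if (port_socket != wanted_numa)
            printf("port %u is not on the good numa id (%d)\n",
                   port_id, port_socket);
    }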
* Re: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay 2025-07-18 5:10 Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay Gokul K R (MS/ETA7-ETAS) @ 2025-07-18 11:58 ` Ivan Malov 2025-07-23 5:35 ` Gokul K R (MS/ETA7-ETAS) 0 siblings, 1 reply; 8+ messages in thread From: Ivan Malov @ 2025-07-18 11:58 UTC (permalink / raw) To: Gokul K R (MS/ETA7-ETAS); +Cc: users, dev [-- Attachment #1: Type: text/plain, Size: 4583 bytes --] Hi, (please see below) On Fri, 18 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > > Hi Team, > > I’m currently working with the dpdk-burst-replay tool and encountered an issue during execution. Below are the details: > > > __________________________________________________________________________________________________________________________________________________________________________________________ > > > Observation: > During replay, we received the following informational message: > > port 0 is not on the good numa id (-1) Which API was used to check this? Was API [1] used? If not, what does it show in the absence of 'numactl' command? [1] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ad032e25f712e6ffeb0c19eab1ec1fd2e > > As per the DPDK mailing list discussions, this warning is typically benign—often seen on NICs like Intel I225/I210, which do not report NUMA affinity. Hence, we proceeded with execution. > > > __________________________________________________________________________________________________________________________________________________________________________________________ > > > Command Used: > > sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay Original_MAC.pcap 0000:01:00.1 > > Execution Output: > > preloading Original_MAC.pcap file (of size: 143959 bytes) > > file read at 100.00% > > read 675 pkts (for a total of 143959 bytes). max packet length = 1518 bytes. > > -> Needed MBUF size: 1648 > > -> Needed number of MBUFs: 675 > > -> Needed Memory = 1.061 MB > > -> Needed Hugepages of 1 GB = 1 > > -> Needed CPUs: 1 > > -> Create mempool of 675 mbufs of 1648 octets. > > -> Will cache 675 pkts on 1 caches. What does this 'cache' stand for? Does it refer to the mempool per-lcore cache? If so, please note that, according to API [2] documentation, cache size "must be lower or equal to RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5", where 'n' stands for the number of mbufs. Also, documentation says it is advised to choose cache_size to have "n modulo cache_size == 0". Does your code meet these requirements? By the looks of it, it doesn't (cache_size = n = 675). Consider to double-check. [2] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5 > > > __________________________________________________________________________________________________________________________________________________________________________________________ > > > Issue: > Despite successful parsing of the pcap file and proper initialization, no frames were transmitted or received on either the sender or receiver sides. Is this observation based solely on watching APIs [3] and [4] return 0 all the time? If yes, one can consider to introduce invocations of APIs [5], [6] and [7] in order to periodically poll and print statistics (may be with 1-second delay), which may, for example, shed light on mbuf allocation errors (extended stats). Do statistics display any interesting figures to be discussed? 
[3] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102 [4] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a83e56cabbd31637efd648e3fc010392b [5] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#adec226574c53ae413252c9b15f6f4bab [6] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a418ad970673eb171673185e36044fd79 [7] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a300d75b583c1f5acfe5b162a5d8c0ac1 > > > __________________________________________________________________________________________________________________________________________________________________________________________ > > > Environment Details: > > * NIC used: Intel I225/I210 > * Hugepages configured: 1 GB > * NUMA binding: --cpunodebind=0 --membind=0 > * OS: [Your Linux distribution, e.g., Ubuntu 20.04] > * DPDK version: [Mention if known] > > __________________________________________________________________________________________________________________________________________________________________________________________ > > > Could you please advise if any additional setup, configuration, or known limitations may be impacting the packet transmission? This may be a wild suggestion from my side, but it pays to check whether link status is "up" upon port start on both ends. One can use API [8] to do that. [8] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ac05878578e4fd9ef3551d2c1c175ebe7 Thank you. > > Thank you in advance for your support! > > > Best regards, > Gokul K R > > > > > ^ permalink raw reply [flat|nested] 8+ messages in thread
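As an illustrative aside, a minimal sketch of the link-status and statistics polling suggested above (APIs [5]-[8]); the port number, print format and endless 1-second loop are assumptions, and rte_eth_xstats_get()/rte_eth_xstats_get_names() can be added in the same loop for extended counters such as mbuf allocation failures:

    /* Minimal diagnostic sketch: report link status once after port start,
     * then print basic TX/RX counters every second. */
    #include <inttypes.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <rte_ethdev.h>

    static void poll_port_diagnostics(uint16_t port_id)
    {
        struct rte_eth_link link;
        struct rte_eth_stats st;

        if (rte_eth_link_get_nowait(port_id, &link) == 0)
            printf("port %u: link %s, %u Mbps\n", port_id,
                   link.link_status ? "up" : "down", link.link_speed);

        for (;;) {
            if (rte_eth_stats_get(port_id, &st) == 0)
                printf("tx=%" PRIu64 " rx=%" PRIu64 " oerrors=%" PRIu64
                       " rx_nombuf=%" PRIu64 "\n",
                       st.opackets, st.ipackets, st.oerrors, st.rx_nombuf);
            sleep(1); /* the 1-second delay suggested above */
        }
    }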
* RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay 2025-07-18 11:58 ` Ivan Malov @ 2025-07-23 5:35 ` Gokul K R (MS/ETA7-ETAS) 2025-07-23 5:58 ` Ivan Malov 0 siblings, 1 reply; 8+ messages in thread From: Gokul K R (MS/ETA7-ETAS) @ 2025-07-23 5:35 UTC (permalink / raw) To: Ivan Malov; +Cc: users, dev, Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6) Hi Ivan Malov 😊 Use Case: I'm currently using the DPDK-Burst-Replay tool to replay captured PCAP files at specific data rates (e.g., 150–200 Mbps). Response to your feedback: Point 1: "Port 0 is not on the good NUMA ID (-1)" I’m aware that this message is printed due to the NUMA ID being returned as -1. I've just started diving into the source code and found that the call to rte_eth_dev_socket_id() returns -1, which typically indicates an error. However, the current implementation does not output the rte_errno, which could help identify the root cause. I'm working on modifying the code to print the error code for better debugging. Point 2: "NIC Link is UP" Yes, on the NIC side, the link is up. I'm also able to transmit packets successfully using the testpmd application. Question: Could you please confirm if the current version of the DPDK-Burst-Replay tool supports replaying Ethernet frames larger than 64 bytes (e.g., up to 1500 bytes)? Or has the tool been enhanced to support this use case? Thanks for your time and support! Best regards, Gokul K.R -----Original Message----- From: Ivan Malov <ivan.malov@arknetworks.am> Sent: Friday, July 18, 2025 5:29 PM To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> Cc: users@dpdk.org; dev@dpdk.org Subject: Re: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay Hi, (please see below) On Fri, 18 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > > Hi Team, > > I’m currently working with the dpdk-burst-replay tool and encountered an issue during execution. Below are the details: > > > ______________________________________________________________________ > ______________________________________________________________________ > ______________________________________________ > > > Observation: > During replay, we received the following informational message: > > port 0 is not on the good numa id (-1) Which API was used to check this? Was API [1] used? If not, what does it show in the absence of 'numactl' command? [1] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ad032e25f712e6ffeb0c19eab1ec1fd2e > > As per the DPDK mailing list discussions, this warning is typically benign—often seen on NICs like Intel I225/I210, which do not report NUMA affinity. Hence, we proceeded with execution. > > > ______________________________________________________________________ > ______________________________________________________________________ > ______________________________________________ > > > Command Used: > > sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay > Original_MAC.pcap 0000:01:00.1 > > Execution Output: > > preloading Original_MAC.pcap file (of size: 143959 bytes) > > file read at 100.00% > > read 675 pkts (for a total of 143959 bytes). max packet length = 1518 bytes. > > -> Needed MBUF size: 1648 > > -> Needed number of MBUFs: 675 > > -> Needed Memory = 1.061 MB > > -> Needed Hugepages of 1 GB = 1 > > -> Needed CPUs: 1 > > -> Create mempool of 675 mbufs of 1648 octets. > > -> Will cache 675 pkts on 1 caches. What does this 'cache' stand for? Does it refer to the mempool per-lcore cache? 
If so, please note that, according to API [2] documentation, cache size "must be lower or equal to RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5", where 'n' stands for the number of mbufs. Also, documentation says it is advised to choose cache_size to have "n modulo cache_size == 0". Does your code meet these requirements? By the looks of it, it doesn't (cache_size = n = 675). Consider to double-check. [2] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5 > > > ______________________________________________________________________ > ______________________________________________________________________ > ______________________________________________ > > > Issue: > Despite successful parsing of the pcap file and proper initialization, no frames were transmitted or received on either the sender or receiver sides. Is this observation based solely on watching APIs [3] and [4] return 0 all the time? If yes, one can consider to introduce invocations of APIs [5], [6] and [7] in order to periodically poll and print statistics (may be with 1-second delay), which may, for example, shed light on mbuf allocation errors (extended stats). Do statistics display any interesting figures to be discussed? [3] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102 [4] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a83e56cabbd31637efd648e3fc010392b [5] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#adec226574c53ae413252c9b15f6f4bab [6] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a418ad970673eb171673185e36044fd79 [7] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a300d75b583c1f5acfe5b162a5d8c0ac1 > > > ______________________________________________________________________ > ______________________________________________________________________ > ______________________________________________ > > > Environment Details: > > * NIC used: Intel I225/I210 > * Hugepages configured: 1 GB > * NUMA binding: --cpunodebind=0 --membind=0 > * OS: [Your Linux distribution, e.g., Ubuntu 20.04] > * DPDK version: [Mention if known] > > ______________________________________________________________________ > ______________________________________________________________________ > ______________________________________________ > > > Could you please advise if any additional setup, configuration, or known limitations may be impacting the packet transmission? This may be a wild suggestion from my side, but it pays to check whether link status is "up" upon port start on both ends. One can use API [8] to do that. [8] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ac05878578e4fd9ef3551d2c1c175ebe7 Thank you. > > Thank you in advance for your support! > > > Best regards, > Gokul K R > > > > > ^ permalink raw reply [flat|nested] 8+ messages in thread
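For reference on the cache-size point quoted above: a pool whose cache size satisfies the documented constraints (cache_size lower or equal to RTE_MEMPOOL_CACHE_MAX_SIZE, lower or equal to n / 1.5, and with n modulo cache_size == 0) could be created as in the sketch below. The value 225 is merely one compliant choice for n = 675; it is not a fix taken from the tool.

    /* Sketch of a compliant mempool for the 675 x 1648-byte mbufs reported
     * by the tool: 225 <= RTE_MEMPOOL_CACHE_MAX_SIZE, 225 <= 675 / 1.5 and
     * 675 % 225 == 0, unlike cache_size = 675. */
    #include <rte_mbuf.h>

    static struct rte_mempool *create_replay_pool(int socket_id)
    {
        return rte_pktmbuf_pool_create("replay_pool",
                                       675,  /* n: number of mbufs      */
                                       225,  /* per-lcore cache size    */
                                       0,    /* private area size       */
                                       1648, /* data room size in bytes */
                                       socket_id);
    }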
* RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay 2025-07-23 5:35 ` Gokul K R (MS/ETA7-ETAS) @ 2025-07-23 5:58 ` Ivan Malov 2025-07-23 6:12 ` Ivan Malov 0 siblings, 1 reply; 8+ messages in thread From: Ivan Malov @ 2025-07-23 5:58 UTC (permalink / raw) To: Gokul K R (MS/ETA7-ETAS) Cc: users, dev, Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6) [-- Attachment #1: Type: text/plain, Size: 6774 bytes --] Hi, On Wed, 23 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > Hi Ivan Malov 😊 > > Use Case: > I'm currently using the DPDK-Burst-Replay tool to replay captured PCAP files at specific data rates (e.g., 150–200 Mbps). > > Response to your feedback: > Point 1: "Port 0 is not on the good NUMA ID (-1)" > I’m aware that this message is printed due to the NUMA ID being returned as -1. > I've just started diving into the source code and found that the call to rte_eth_dev_socket_id() returns -1, which typically indicates an error. No, -1 typically translates to 'SOCKET_ID_ANY'. In order to rule out 'EINVAL' in 'rte_errno', one should attempt to invoke 'rte_eth_dev_socket_id' within the loop of 'RTE_ETH_FOREACH_DEV' (see examples in DPDK) and print the socket ID. > However, the current implementation does not output the rte_errno, which could help identify the root cause. I'm working on modifying the code to print the error code for better debugging. > > Point 2: "NIC Link is UP" > Yes, on the NIC side, the link is up. I'm also able to transmit packets successfully using the testpmd application. > > Question: > Could you please confirm if the current version of the DPDK-Burst-Replay tool supports replaying Ethernet frames larger than 64 bytes (e.g., up to 1500 bytes)? Or has the tool been enhanced to support this use case? The tool seems like an external application. Try to look for any mentions of API 'rte_eth_dev_set_mtu' or just the term 'mtu' in that source code. If there are no such mentions, then default MTU applies, which depends on the driver. Have you tried querying statistics to find the cause of the drops? Thank you. > > Thanks for your time and support! > > Best regards, > Gokul K.R > > > > > > -----Original Message----- > From: Ivan Malov <ivan.malov@arknetworks.am> > Sent: Friday, July 18, 2025 5:29 PM > To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> > Cc: users@dpdk.org; dev@dpdk.org > Subject: Re: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay > > Hi, > > (please see below) > > On Fri, 18 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > >> >> Hi Team, >> >> I’m currently working with the dpdk-burst-replay tool and encountered an issue during execution. Below are the details: >> >> >> ______________________________________________________________________ >> ______________________________________________________________________ >> ______________________________________________ >> >> >> Observation: >> During replay, we received the following informational message: >> >> port 0 is not on the good numa id (-1) > > Which API was used to check this? Was API [1] used? If not, what does it show in the absence of 'numactl' command? > > [1] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ad032e25f712e6ffeb0c19eab1ec1fd2e > >> >> As per the DPDK mailing list discussions, this warning is typically benign—often seen on NICs like Intel I225/I210, which do not report NUMA affinity. Hence, we proceeded with execution. 
>> >> >> ______________________________________________________________________ >> ______________________________________________________________________ >> ______________________________________________ >> >> >> Command Used: >> >> sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay >> Original_MAC.pcap 0000:01:00.1 >> >> Execution Output: >> >> preloading Original_MAC.pcap file (of size: 143959 bytes) >> >> file read at 100.00% >> >> read 675 pkts (for a total of 143959 bytes). max packet length = 1518 bytes. >> >> -> Needed MBUF size: 1648 >> >> -> Needed number of MBUFs: 675 >> >> -> Needed Memory = 1.061 MB >> >> -> Needed Hugepages of 1 GB = 1 >> >> -> Needed CPUs: 1 >> >> -> Create mempool of 675 mbufs of 1648 octets. >> >> -> Will cache 675 pkts on 1 caches. > > What does this 'cache' stand for? Does it refer to the mempool per-lcore cache? > If so, please note that, according to API [2] documentation, cache size "must be lower or equal to RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5", where 'n' stands for the number of mbufs. Also, documentation says it is advised to choose cache_size to have "n modulo cache_size == 0". Does your code meet these requirements? > > By the looks of it, it doesn't (cache_size = n = 675). Consider to double-check. > > [2] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5 > >> >> >> ______________________________________________________________________ >> ______________________________________________________________________ >> ______________________________________________ >> >> >> Issue: >> Despite successful parsing of the pcap file and proper initialization, no frames were transmitted or received on either the sender or receiver sides. > > Is this observation based solely on watching APIs [3] and [4] return 0 all the time? If yes, one can consider to introduce invocations of APIs [5], [6] and [7] in order to periodically poll and print statistics (may be with 1-second delay), which may, for example, shed light on mbuf allocation errors (extended stats). > > Do statistics display any interesting figures to be discussed? > > [3] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102 > [4] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a83e56cabbd31637efd648e3fc010392b > > [5] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#adec226574c53ae413252c9b15f6f4bab > [6] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a418ad970673eb171673185e36044fd79 > [7] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a300d75b583c1f5acfe5b162a5d8c0ac1 > >> >> >> ______________________________________________________________________ >> ______________________________________________________________________ >> ______________________________________________ >> >> >> Environment Details: >> >> * NIC used: Intel I225/I210 >> * Hugepages configured: 1 GB >> * NUMA binding: --cpunodebind=0 --membind=0 >> * OS: [Your Linux distribution, e.g., Ubuntu 20.04] >> * DPDK version: [Mention if known] >> >> ______________________________________________________________________ >> ______________________________________________________________________ >> ______________________________________________ >> >> >> Could you please advise if any additional setup, configuration, or known limitations may be impacting the packet transmission? > > This may be a wild suggestion from my side, but it pays to check whether link status is "up" upon port start on both ends. One can use API [8] to do that. 
> > [8] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ac05878578e4fd9ef3551d2c1c175ebe7 > > Thank you. > > >> >> Thank you in advance for your support! >> >> >> Best regards, >> Gokul K R >> >> >> >> >> > ^ permalink raw reply [flat|nested] 8+ messages in thread
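As an illustrative aside, a minimal sketch of the per-port check suggested above: iterate with RTE_ETH_FOREACH_DEV, clear rte_errno before calling rte_eth_dev_socket_id(), and print the result. The MTU read is an extra illustration for the question about frames larger than 64 bytes, not part of the original suggestion.

    /* Sketch: print socket ID (-1 means SOCKET_ID_ANY unless rte_errno is
     * set to EINVAL) and the current MTU for every probed port. */
    #include <stdio.h>
    #include <rte_errno.h>
    #include <rte_ethdev.h>

    static void dump_port_socket_and_mtu(void)
    {
        uint16_t port_id;

        RTE_ETH_FOREACH_DEV(port_id) {
            uint16_t mtu = 0;
            int socket_id;

            rte_errno = 0;
            socket_id = rte_eth_dev_socket_id(port_id);
            (void)rte_eth_dev_get_mtu(port_id, &mtu);

            printf("port %u: socket %d (rte_errno %d), mtu %u\n",
                   port_id, socket_id, rte_errno, mtu);
        }
    }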
* RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay 2025-07-23 5:58 ` Ivan Malov @ 2025-07-23 6:12 ` Ivan Malov 2025-07-24 10:08 ` Gokul K R (MS/ETA7-ETAS) 0 siblings, 1 reply; 8+ messages in thread From: Ivan Malov @ 2025-07-23 6:12 UTC (permalink / raw) To: Gokul K R (MS/ETA7-ETAS) Cc: users, dev, Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6) [-- Attachment #1: Type: text/plain, Size: 7580 bytes --] On Wed, 23 Jul 2025, Ivan Malov wrote: > Hi, > > On Wed, 23 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > >> Hi Ivan Malov 😊 >> >> Use Case: >> I'm currently using the DPDK-Burst-Replay tool to replay captured PCAP >> files at specific data rates (e.g., 150–200 Mbps). >> >> Response to your feedback: >> Point 1: "Port 0 is not on the good NUMA ID (-1)" >> I’m aware that this message is printed due to the NUMA ID being returned as >> -1. >> I've just started diving into the source code and found that the call to >> rte_eth_dev_socket_id() returns -1, which typically indicates an error. > > No, -1 typically translates to 'SOCKET_ID_ANY'. In order to rule out 'EINVAL' > in 'rte_errno', one should attempt to invoke 'rte_eth_dev_socket_id' within > the > loop of 'RTE_ETH_FOREACH_DEV' (see examples in DPDK) and print the socket ID. > >> However, the current implementation does not output the rte_errno, which >> could help identify the root cause. I'm working on modifying the code to >> print the error code for better debugging. >> >> Point 2: "NIC Link is UP" >> Yes, on the NIC side, the link is up. I'm also able to transmit packets >> successfully using the testpmd application. >> >> Question: >> Could you please confirm if the current version of the DPDK-Burst-Replay >> tool supports replaying Ethernet frames larger than 64 bytes (e.g., up to >> 1500 bytes)? Or has the tool been enhanced to support this use case? > > The tool seems like an external application. Try to look for any mentions of > API 'rte_eth_dev_set_mtu' or just the term 'mtu' in that source code. If > there > are no such mentions, then default MTU applies, which depends on the driver. > > Have you tried querying statistics to find the cause of the drops? Also, given the fact that the application replays some pcap traffic, have you made sure the receiver (where you seek to watch the replayed traffic arrive) has got 'promiscuous' mode [1] enabled? IIRC, 'test-pmd' enables it by default. [1] https://doc.dpdk.org/api-25.07/rte__ethdev_8h.html#a5dd1dedaa45f05c72bcc35495e441e91 Thank you. > > Thank you. > >> >> Thanks for your time and support! >> >> Best regards, >> Gokul K.R >> >> >> >> >> >> -----Original Message----- >> From: Ivan Malov <ivan.malov@arknetworks.am> >> Sent: Friday, July 18, 2025 5:29 PM >> To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> >> Cc: users@dpdk.org; dev@dpdk.org >> Subject: Re: Issue with DPDK-Burst Replay – No Frame Transmission Observed >> Despite Successful Replay >> >> Hi, >> >> (please see below) >> >> On Fri, 18 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: >> >>> >>> Hi Team, >>> >>> I’m currently working with the dpdk-burst-replay tool and encountered an >>> issue during execution. 
Below are the details: >>> >>> >>> ______________________________________________________________________ >>> ______________________________________________________________________ >>> ______________________________________________ >>> >>> >>> Observation: >>> During replay, we received the following informational message: >>> >>> port 0 is not on the good numa id (-1) >> >> Which API was used to check this? Was API [1] used? If not, what does it >> show in the absence of 'numactl' command? >> >> [1] >> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ad032e25f712e6ffeb0c19eab1ec1fd2e >> >>> >>> As per the DPDK mailing list discussions, this warning is typically >>> benign—often seen on NICs like Intel I225/I210, which do not report NUMA >>> affinity. Hence, we proceeded with execution. >>> >>> >>> ______________________________________________________________________ >>> ______________________________________________________________________ >>> ______________________________________________ >>> >>> >>> Command Used: >>> >>> sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay >>> Original_MAC.pcap 0000:01:00.1 >>> >>> Execution Output: >>> >>> preloading Original_MAC.pcap file (of size: 143959 bytes) >>> >>> file read at 100.00% >>> >>> read 675 pkts (for a total of 143959 bytes). max packet length = 1518 >>> bytes. >>> >>> -> Needed MBUF size: 1648 >>> >>> -> Needed number of MBUFs: 675 >>> >>> -> Needed Memory = 1.061 MB >>> >>> -> Needed Hugepages of 1 GB = 1 >>> >>> -> Needed CPUs: 1 >>> >>> -> Create mempool of 675 mbufs of 1648 octets. >>> >>> -> Will cache 675 pkts on 1 caches. >> >> What does this 'cache' stand for? Does it refer to the mempool per-lcore >> cache? >> If so, please note that, according to API [2] documentation, cache size >> "must be lower or equal to RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5", where >> 'n' stands for the number of mbufs. Also, documentation says it is advised >> to choose cache_size to have "n modulo cache_size == 0". Does your code >> meet these requirements? >> >> By the looks of it, it doesn't (cache_size = n = 675). Consider to >> double-check. >> >> [2] >> https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5 >> >>> >>> >>> ______________________________________________________________________ >>> ______________________________________________________________________ >>> ______________________________________________ >>> >>> >>> Issue: >>> Despite successful parsing of the pcap file and proper initialization, no >>> frames were transmitted or received on either the sender or receiver >>> sides. >> >> Is this observation based solely on watching APIs [3] and [4] return 0 all >> the time? If yes, one can consider to introduce invocations of APIs [5], >> [6] and [7] in order to periodically poll and print statistics (may be with >> 1-second delay), which may, for example, shed light on mbuf allocation >> errors (extended stats). >> >> Do statistics display any interesting figures to be discussed? 
>> >> [3] >> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102 >> [4] >> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a83e56cabbd31637efd648e3fc010392b >> >> [5] >> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#adec226574c53ae413252c9b15f6f4bab >> [6] >> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a418ad970673eb171673185e36044fd79 >> [7] >> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a300d75b583c1f5acfe5b162a5d8c0ac1 >> >>> >>> >>> ______________________________________________________________________ >>> ______________________________________________________________________ >>> ______________________________________________ >>> >>> >>> Environment Details: >>> >>> * NIC used: Intel I225/I210 >>> * Hugepages configured: 1 GB >>> * NUMA binding: --cpunodebind=0 --membind=0 >>> * OS: [Your Linux distribution, e.g., Ubuntu 20.04] >>> * DPDK version: [Mention if known] >>> >>> ______________________________________________________________________ >>> ______________________________________________________________________ >>> ______________________________________________ >>> >>> >>> Could you please advise if any additional setup, configuration, or known >>> limitations may be impacting the packet transmission? >> >> This may be a wild suggestion from my side, but it pays to check whether >> link status is "up" upon port start on both ends. One can use API [8] to do >> that. >> >> [8] >> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ac05878578e4fd9ef3551d2c1c175ebe7 >> >> Thank you. >> >> >>> >>> Thank you in advance for your support! >>> >>> >>> Best regards, >>> Gokul K R >>> >>> >>> >>> >>> > ^ permalink raw reply [flat|nested] 8+ messages in thread
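As an illustrative aside, the receiver-side check mentioned above can be expressed with two ethdev calls; a minimal sketch assuming the receiver is itself a DPDK application and uses port 0:

    /* Sketch: enable promiscuous mode on the receiving port so frames whose
     * destination MAC does not match the NIC's address are still delivered. */
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void ensure_promiscuous(uint16_t port_id)
    {
        if (rte_eth_promiscuous_get(port_id) != 1 &&
            rte_eth_promiscuous_enable(port_id) != 0)
            printf("cannot enable promiscuous mode on port %u\n", port_id);
    }

As noted above, test-pmd enables promiscuous mode by default, so this mainly matters for custom receive applications.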
* RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay 2025-07-23 6:12 ` Ivan Malov @ 2025-07-24 10:08 ` Gokul K R (MS/ETA7-ETAS) 2025-07-24 10:24 ` Ivan Malov 0 siblings, 1 reply; 8+ messages in thread From: Gokul K R (MS/ETA7-ETAS) @ 2025-07-24 10:08 UTC (permalink / raw) To: Ivan Malov Cc: users, dev, Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6), Aloysius Nishanth Britto (MS/ETA7-ETAS) Hi Ivan, I hope you're doing well. Point #1: Since the DPDK-Burst Replay tool is not an official part of the DPDK project and is community-developed, could you please let us know whom we should contact for support or further information regarding this tool? Point #2: We have identified an issue while using dpdk-burst replay tool. During EAL mempool initialization, “--pci-whitelist” parameter is given as an argument for “rte_eal_init” function. However, when rte_eal_init is executed with these arguments, it fails with the following error: ./dpdk-replay: unrecognized option ‘--pci-whitelist’ Invalid command line argument We couldn’t find any official documentation on the --pci-whitelist parameter. Could you help clarify its purpose and whether this argument is essential for the tool to operate correctly? Point #3: Could you also suggest any official DPDK tool that supports replaying PCAP files? This would help us explore alternatives that are actively maintained by the DPDK community. Looking forward to your inputs. Best regards, Gokul K R -----Original Message----- From: Ivan Malov <ivan.malov@arknetworks.am> Sent: Wednesday, July 23, 2025 11:43 AM To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> Cc: users@dpdk.org; dev@dpdk.org; Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6) <NShanmugasundaram.Nivethitha@etas.com> Subject: RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay On Wed, 23 Jul 2025, Ivan Malov wrote: > Hi, > > On Wed, 23 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > >> Hi Ivan Malov 😊 >> >> Use Case: >> I'm currently using the DPDK-Burst-Replay tool to replay captured >> PCAP files at specific data rates (e.g., 150–200 Mbps). >> >> Response to your feedback: >> Point 1: "Port 0 is not on the good NUMA ID (-1)" >> I’m aware that this message is printed due to the NUMA ID being >> returned as -1. >> I've just started diving into the source code and found that the call >> to >> rte_eth_dev_socket_id() returns -1, which typically indicates an error. > > No, -1 typically translates to 'SOCKET_ID_ANY'. In order to rule out 'EINVAL' > in 'rte_errno', one should attempt to invoke 'rte_eth_dev_socket_id' > within the loop of 'RTE_ETH_FOREACH_DEV' (see examples in DPDK) and > print the socket ID. > >> However, the current implementation does not output the rte_errno, >> which could help identify the root cause. I'm working on modifying >> the code to print the error code for better debugging. >> >> Point 2: "NIC Link is UP" >> Yes, on the NIC side, the link is up. I'm also able to transmit >> packets successfully using the testpmd application. >> >> Question: >> Could you please confirm if the current version of the >> DPDK-Burst-Replay tool supports replaying Ethernet frames larger than >> 64 bytes (e.g., up to >> 1500 bytes)? Or has the tool been enhanced to support this use case? > > The tool seems like an external application. Try to look for any > mentions of API 'rte_eth_dev_set_mtu' or just the term 'mtu' in that > source code. 
If there are no such mentions, then default MTU applies, > which depends on the driver. > > Have you tried querying statistics to find the cause of the drops? Also, given the fact that the application replays some pcap traffic, have you made sure the receiver (where you seek to watch the replayed traffic arrive) has got 'promiscuous' mode [1] enabled? IIRC, 'test-pmd' enables it by default. [1] https://doc.dpdk.org/api-25.07/rte__ethdev_8h.html#a5dd1dedaa45f05c72bcc35495e441e91 Thank you. > > Thank you. > >> >> Thanks for your time and support! >> >> Best regards, >> Gokul K.R >> >> -----Original Message----- >> From: Ivan Malov <ivan.malov@arknetworks.am> >> Sent: Friday, July 18, 2025 5:29 PM >> To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> >> Cc: users@dpdk.org; dev@dpdk.org >> Subject: Re: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay >> >> Hi, >> >> (please see below) >> >> On Fri, 18 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: >> >>> >>> Hi Team, >>> >>> I’m currently working with the dpdk-burst-replay tool and encountered an issue during execution. Below are the details: >>> >>> ________________________________ >>> >>> Observation: >>> During replay, we received the following informational message: >>> >>> port 0 is not on the good numa id (-1) >> >> Which API was used to check this? Was API [1] used? If not, what does it show in the absence of 'numactl' command? >> >> [1] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ad032e25f712e6ffeb0c19eab1ec1fd2e >> >>> >>> As per the DPDK mailing list discussions, this warning is typically benign—often seen on NICs like Intel I225/I210, which do not report NUMA affinity. Hence, we proceeded with execution. >>> >>> ________________________________ >>> >>> Command Used: >>> >>> sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay Original_MAC.pcap 0000:01:00.1 >>> >>> Execution Output: >>> >>> preloading Original_MAC.pcap file (of size: 143959 bytes) >>> >>> file read at 100.00% >>> >>> read 675 pkts (for a total of 143959 bytes). max packet length = 1518 bytes. >>> >>> -> Needed MBUF size: 1648 >>> >>> -> Needed number of MBUFs: 675 >>> >>> -> Needed Memory = 1.061 MB >>> >>> -> Needed Hugepages of 1 GB = 1 >>> >>> -> Needed CPUs: 1 >>> >>> -> Create mempool of 675 mbufs of 1648 octets. >>> >>> -> Will cache 675 pkts on 1 caches. >> >> What does this 'cache' stand for? Does it refer to the mempool per-lcore cache? >> If so, please note that, according to API [2] documentation, cache size "must be lower or equal to RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5", where 'n' stands for the number of mbufs. Also, documentation says it is advised to choose cache_size to have "n modulo cache_size == 0". Does your code meet these requirements? >> >> By the looks of it, it doesn't (cache_size = n = 675). Consider to double-check. >> >> [2] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5 >> >>> >>> ________________________________ >>> >>> Issue: >>> Despite successful parsing of the pcap file and proper initialization, no frames were transmitted or received on either the sender or receiver sides. >> >> Is this observation based solely on watching APIs [3] and [4] return 0 all the time? If yes, one can consider to introduce invocations of APIs [5], [6] and [7] in order to periodically poll and print statistics (may be with 1-second delay), which may, for example, shed light on mbuf allocation errors (extended stats). >> >> Do statistics display any interesting figures to be discussed? >> >> [3] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102 >> [4] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a83e56cabbd31637efd648e3fc010392b >> >> [5] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#adec226574c53ae413252c9b15f6f4bab >> [6] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a418ad970673eb171673185e36044fd79 >> [7] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a300d75b583c1f5acfe5b162a5d8c0ac1 >> >>> >>> ________________________________ >>> >>> Environment Details: >>> >>> * NIC used: Intel I225/I210 >>> * Hugepages configured: 1 GB >>> * NUMA binding: --cpunodebind=0 --membind=0 >>> * OS: [Your Linux distribution, e.g., Ubuntu 20.04] >>> * DPDK version: [Mention if known] >>> >>> ________________________________ >>> >>> Could you please advise if any additional setup, configuration, or known limitations may be impacting the packet transmission? >> >> This may be a wild suggestion from my side, but it pays to check whether link status is "up" upon port start on both ends. One can use API [8] to do that. >> >> [8] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ac05878578e4fd9ef3551d2c1c175ebe7 >> >> Thank you. >> >>> >>> Thank you in advance for your support! >>> >>> Best regards, >>> Gokul K R >>> > ^ permalink raw reply [flat|nested] 8+ messages in thread
* RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay 2025-07-24 10:08 ` Gokul K R (MS/ETA7-ETAS) @ 2025-07-24 10:24 ` Ivan Malov [not found] ` <PAVPR10MB73559ECA8CBD947C9A80D111A45EA@PAVPR10MB7355.EURPRD10.PROD.OUTLOOK.COM> 0 siblings, 1 reply; 8+ messages in thread From: Ivan Malov @ 2025-07-24 10:24 UTC (permalink / raw) To: Gokul K R (MS/ETA7-ETAS) Cc: users, dev, Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6), Aloysius Nishanth Britto (MS/ETA7-ETAS) [-- Attachment #1: Type: text/plain, Size: 13164 bytes --] Hi, On Thu, 24 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > Hi Ivan, > > I hope you're doing well. > > Point #1: > Since the DPDK-Burst Replay tool is not an official part of the DPDK project and is community-developed, could you please let us know whom we should contact for support or further information regarding this tool? For the very same reason (not an in-house tool), I have no way of knowing. Perhaps take a look at repository owner's profile on GitHub or something. > > Point #2: > We have identified an issue while using dpdk-burst replay tool. > During EAL mempool initialization, “--pci-whitelist” parameter is given as an argument for “rte_eal_init” function. However, when rte_eal_init is executed with these arguments, it fails with the following error: > > ./dpdk-replay: unrecognized option ‘--pci-whitelist’ > Invalid command line argument > > We couldn’t find any official documentation on the --pci-whitelist parameter. Could you help clarify its purpose and whether this argument is essential for the tool to operate correctly? 1) In 2020, commit db27370b5720 ("eal: replace blacklist/whitelist options") renamed this argument. Now it is '-a' ('--allow') and '-b' ('--block') [1]. 2) The argument is application-agnostic and is designed to give a finer control on which devices get picked by DPDK during EAL initialisation. 3) Whether the argument is needed for the use of this particular application is not apparent to me. One should refer to the application's documentation. > > Point #3: > Could you also suggest any official DPDK tool that supports replaying PCAP files? This would help us explore alternatives that are actively maintained by the DPDK community. Please take a look at DPDK pcap PMD [2]. It may meet your needs. [1] https://doc.dpdk.org/guides-25.07/linux_gsg/linux_eal_parameters.html#device-related-options [2] https://doc.dpdk.org/guides-25.07/nics/pcap_ring.html Thank you. > > Looking forward to your inputs. > > Best regards, > Gokul K R > -----Original Message----- > From: Ivan Malov <ivan.malov@arknetworks.am> > Sent: Wednesday, July 23, 2025 11:43 AM > To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> > Cc: users@dpdk.org; dev@dpdk.org; Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6) <NShanmugasundaram.Nivethitha@etas.com> > Subject: RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay > > On Wed, 23 Jul 2025, Ivan Malov wrote: > >> Hi, >> >> On Wed, 23 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: >> >>> Hi Ivan Malov 😊 >>> >>> Use Case: >>> I'm currently using the DPDK-Burst-Replay tool to replay captured >>> PCAP files at specific data rates (e.g., 150–200 Mbps). >>> >>> Response to your feedback: >>> Point 1: "Port 0 is not on the good NUMA ID (-1)" >>> I’m aware that this message is printed due to the NUMA ID being >>> returned as -1. 
>>> I've just started diving into the source code and found that the call to rte_eth_dev_socket_id() returns -1, which typically indicates an error. >> >> No, -1 typically translates to 'SOCKET_ID_ANY'. In order to rule out 'EINVAL' in 'rte_errno', one should attempt to invoke 'rte_eth_dev_socket_id' within the loop of 'RTE_ETH_FOREACH_DEV' (see examples in DPDK) and print the socket ID. >> >>> However, the current implementation does not output the rte_errno, which could help identify the root cause. I'm working on modifying the code to print the error code for better debugging. >>> >>> Point 2: "NIC Link is UP" >>> Yes, on the NIC side, the link is up. I'm also able to transmit packets successfully using the testpmd application. >>> >>> Question: >>> Could you please confirm if the current version of the DPDK-Burst-Replay tool supports replaying Ethernet frames larger than 64 bytes (e.g., up to 1500 bytes)? Or has the tool been enhanced to support this use case? >> >> The tool seems like an external application. Try to look for any mentions of API 'rte_eth_dev_set_mtu' or just the term 'mtu' in that source code. If there are no such mentions, then default MTU applies, which depends on the driver. >> >> Have you tried querying statistics to find the cause of the drops? > > Also, given the fact that the application replays some pcap traffic, have you made sure the receiver (where you seek to watch the replayed traffic arrive) has got 'promiscuous' mode [1] enabled? IIRC, 'test-pmd' enables it by default. > > [1] https://doc.dpdk.org/api-25.07/rte__ethdev_8h.html#a5dd1dedaa45f05c72bcc35495e441e91 > > Thank you. > >> >> Thank you. >> >>> >>> Thanks for your time and support! >>> >>> Best regards, >>> Gokul K.R >>> >>> -----Original Message----- >>> From: Ivan Malov <ivan.malov@arknetworks.am> >>> Sent: Friday, July 18, 2025 5:29 PM >>> To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> >>> Cc: users@dpdk.org; dev@dpdk.org >>> Subject: Re: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay >>> >>> Hi, >>> >>> (please see below) >>> >>> On Fri, 18 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: >>> >>>> >>>> Hi Team, >>>> >>>> I’m currently working with the dpdk-burst-replay tool and encountered an issue during execution. Below are the details: >>>> >>>> ________________________________ >>>> >>>> Observation: >>>> During replay, we received the following informational message: >>>> >>>> port 0 is not on the good numa id (-1) >>> >>> Which API was used to check this? Was API [1] used? If not, what does it show in the absence of 'numactl' command? >>> >>> [1] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ad032e25f712e6ffeb0c19eab1ec1fd2e >>> >>>> >>>> As per the DPDK mailing list discussions, this warning is typically benign—often seen on NICs like Intel I225/I210, which do not report NUMA affinity. Hence, we proceeded with execution. >>>> >>>> ________________________________ >>>> >>>> Command Used: >>>> >>>> sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay Original_MAC.pcap 0000:01:00.1 >>>> >>>> Execution Output: >>>> >>>> preloading Original_MAC.pcap file (of size: 143959 bytes) >>>> >>>> file read at 100.00% >>>> >>>> read 675 pkts (for a total of 143959 bytes). max packet length = 1518 bytes. >>>> >>>> -> Needed MBUF size: 1648 >>>> >>>> -> Needed number of MBUFs: 675 >>>> >>>> -> Needed Memory = 1.061 MB >>>> >>>> -> Needed Hugepages of 1 GB = 1 >>>> >>>> -> Needed CPUs: 1 >>>> >>>> -> Create mempool of 675 mbufs of 1648 octets. >>>> >>>> -> Will cache 675 pkts on 1 caches. >>> >>> What does this 'cache' stand for? Does it refer to the mempool per-lcore cache? >>> If so, please note that, according to API [2] documentation, cache size "must be lower or equal to RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5", where 'n' stands for the number of mbufs. Also, documentation says it is advised to choose cache_size to have "n modulo cache_size == 0". Does your code meet these requirements? >>> >>> By the looks of it, it doesn't (cache_size = n = 675). Consider to double-check. >>> >>> [2] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5 >>> >>>> >>>> ________________________________ >>>> >>>> Issue: >>>> Despite successful parsing of the pcap file and proper initialization, no frames were transmitted or received on either the sender or receiver sides. >>> >>> Is this observation based solely on watching APIs [3] and [4] return 0 all the time? If yes, one can consider to introduce invocations of APIs [5], [6] and [7] in order to periodically poll and print statistics (may be with 1-second delay), which may, for example, shed light on mbuf allocation errors (extended stats). >>> >>> Do statistics display any interesting figures to be discussed? >>> >>> [3] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102 >>> [4] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a83e56cabbd31637efd648e3fc010392b >>> >>> [5] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#adec226574c53ae413252c9b15f6f4bab >>> [6] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a418ad970673eb171673185e36044fd79 >>> [7] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a300d75b583c1f5acfe5b162a5d8c0ac1 >>> >>>> >>>> ________________________________ >>>> >>>> Environment Details: >>>> >>>> * NIC used: Intel I225/I210 >>>> * Hugepages configured: 1 GB >>>> * NUMA binding: --cpunodebind=0 --membind=0 >>>> * OS: [Your Linux distribution, e.g., Ubuntu 20.04] >>>> * DPDK version: [Mention if known] >>>> >>>> ________________________________ >>>> >>>> Could you please advise if any additional setup, configuration, or known limitations may be impacting the packet transmission? >>> >>> This may be a wild suggestion from my side, but it pays to check whether link status is "up" upon port start on both ends. One can use API [8] to do that. >>> >>> [8] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ac05878578e4fd9ef3551d2c1c175ebe7 >>> >>> Thank you. >>> >>>> >>>> Thank you in advance for your support! >>>> >>>> Best regards, >>>> Gokul K R >>>> >> > ^ permalink raw reply [flat|nested] 8+ messages in thread
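As an illustrative aside on points 1)-3) above, a sketch of an EAL argument vector using the renamed '-a' (allow) option in place of '--pci-whitelist'; the PCI address and pcap file names are placeholders, and the commented-out '--vdev' pair shows the pcap PMD device string format from the guide referenced in [2], not anything taken from dpdk-burst-replay.

    /* Sketch: pass the post-20.11 '-a' (allow) option to rte_eal_init()
     * instead of the no longer recognised '--pci-whitelist'. The commented
     * '--vdev' pair shows the pcap PMD device string from the pcap_ring guide. */
    #include <rte_eal.h>

    static int init_eal_with_allow_list(void)
    {
        char *argv[] = {
            "dpdk-replay",
            "-a", "0000:01:00.1",  /* formerly: --pci-whitelist 0000:01:00.1 */
            /* "--vdev", "net_pcap0,rx_pcap=in.pcap,tx_pcap=out.pcap", */
        };

        return rte_eal_init((int)(sizeof(argv) / sizeof(argv[0])), argv);
    }

One common way to replay a pcap with in-tree tooling is to create such a net_pcap vdev and let test-pmd forward from it to the physical port.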
[parent not found: <PAVPR10MB73559ECA8CBD947C9A80D111A45EA@PAVPR10MB7355.EURPRD10.PROD.OUTLOOK.COM>]
* RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay [not found] ` <PAVPR10MB73559ECA8CBD947C9A80D111A45EA@PAVPR10MB7355.EURPRD10.PROD.OUTLOOK.COM> @ 2025-07-24 16:35 ` Ivan Malov 0 siblings, 0 replies; 8+ messages in thread From: Ivan Malov @ 2025-07-24 16:35 UTC (permalink / raw) To: Gokul K R (MS/ETA7-ETAS) Cc: users, dev, Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6), Aloysius Nishanth Britto (MS/ETA7-ETAS) [-- Attachment #1: Type: text/plain, Size: 16956 bytes --] On Thu, 24 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > Hi, > > In the meantime, I was able to successfully replay my PCAP file using the dpdk-burst-replay tool from your repository (https://github.com/FraudBuster/dpdk-burst-replay). Below is the output from the tool: > > sudo ./src/dpdk-replay wendutput.pcap 0000:01:00.0 > preloading wendutput.pcap file (of size: 5900 bytes) > file read at 100.00% > read 100 pkts (for a total of 5900 bytes). max packet length = 43 bytes. > -> Needed MBUF size: 174 > -> Needed number of MBUFS: 100 > -> Needed Memory = 16.992 Mo > -> Needed Hugepages of 1 Go = 1 > -> Needed CPUs: 1 > -> Create mempool of 100 mbufs of 174 octs. > -> Will cache 100 pkts on 1 caches. > file read at 100.00% > -> NIC port 0 ready. > > RESULTS : > [thread 00]: 1.703287 Gbit/s, 3874767.513949 pps on 0.000026 sec (0 pkts dropped) > TOTAL : 1.703 Gbit/s. 3874767.514 pps. > Total dropped: 0/100 packets (0.000000%) > Cannot close started device (port 0) > I have a question: > > #1: The tool shows that packets are transmitted, but they are not received on the other end. The same setup works fine with the testpmd application. Since the transmit-side NIC is bound to DPDK, I cannot use tools like Wireshark or tcpdump to verify transmission. Is there any recommended way to confirm whether packets are actually being sent from the DPDK-bound NIC? Once again, are you sure destination MAC addresses in the replayed packets match that of the receiver NIC? Have you enabled promiscuous mode on the receiver? Thank you. > > In parallel, as per your suggestion, I’m also exploring the DPDK PCAP PMD to replay my PCAP file. > > > Mit freundlichen Grüßen / Best regards > > K R Gokul > > Compact Devices, ESxx, Product and System Acceptance Tests (MS/ETA7-ETAS) > Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | http://www.bosch.com/ > Mobile +91-9003755978 | KR.Gokul@in.bosch.com > > Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000; > Chairman of the Supervisory Board: Prof. Dr. Stefan Asenkerschbaumer; > Managing Directors: Dr. Stefan Hartung, Dr. Christian Fischer, Dr. Markus Forschner, > Stefan Grosch, Dr. Markus Heyn, Dr. Frank Meyer, Katja von Raven, Dr. Tanja Rückert > > -----Original Message----- > From: Ivan Malov <ivan.malov@arknetworks.am> > Sent: Thursday, July 24, 2025 3:55 PM > To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> > Cc: users@dpdk.org; dev@dpdk.org; Nivethitha N Shanmugasundaram (ETAS-ICA/XPC-Fe6) <NShanmugasundaram.Nivethitha@etas.com>; Aloysius Nishanth Britto (MS/ETA7-ETAS) <Britto.AloysiusNishanth@in.bosch.com> > Subject: RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay > > Hi, > > On Thu, 24 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: > >> Hi Ivan, >> >> I hope you're doing well. 
>> >> Point #1: >> Since the DPDK-Burst Replay tool is not an official part of the DPDK project and is community-developed, could you please let us know whom we should contact for support or further information regarding this tool? > > For the very same reason (not an in-house tool), I have no way of knowing. > Perhaps take a look at repository owner's profile on GitHub or something. > >> >> Point #2: >> We have identified an issue while using dpdk-burst replay tool. >> During EAL mempool initialization, “--pci-whitelist” parameter is given as an argument for “rte_eal_init” function. However, when rte_eal_init is executed with these arguments, it fails with the following error: >> >> ./dpdk-replay: unrecognized option ‘--pci-whitelist’ >> Invalid command line argument >> >> We couldn’t find any official documentation on the --pci-whitelist parameter. Could you help clarify its purpose and whether this argument is essential for the tool to operate correctly? > > 1) In 2020, commit db27370b5720 ("eal: replace blacklist/whitelist options") > renamed this argument. Now it is '-a' ('--allow') and '-b' ('--block') [1]. > 2) The argument is application-agnostic and is designed to give a finer control > on which devices get picked by DPDK during EAL initialisation. > 3) Whether the argument is needed for the use of this particular application > is not apparent to me. One should refer to the application's documentation. > >> >> Point #3: >> Could you also suggest any official DPDK tool that supports replaying PCAP files? This would help us explore alternatives that are actively maintained by the DPDK community. > > Please take a look at DPDK pcap PMD [2]. It may meet your needs. > > [1] https://doc.dpdk.org/guides-25.07/linux_gsg/linux_eal_parameters.html#device-related-options > [2] https://doc.dpdk.org/guides-25.07/nics/pcap_ring.html > > Thank you. > >> >> Looking forward to your inputs. >> >> Best regards, >> Gokul K R >> -----Original Message----- >> From: Ivan Malov <ivan.malov@arknetworks.am> >> Sent: Wednesday, July 23, 2025 11:43 AM >> To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com> >> Cc: users@dpdk.org; dev@dpdk.org; Nivethitha N Shanmugasundaram >> (ETAS-ICA/XPC-Fe6) <NShanmugasundaram.Nivethitha@etas.com> >> Subject: RE: Issue with DPDK-Burst Replay – No Frame Transmission >> Observed Despite Successful Replay >> >> On Wed, 23 Jul 2025, Ivan Malov wrote: >> >>> Hi, >>> >>> On Wed, 23 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote: >>> >>>> Hi Ivan Malov 😊 >>>> >>>> Use Case: >>>> I'm currently using the DPDK-Burst-Replay tool to replay captured >>>> PCAP files at specific data rates (e.g., 150–200 Mbps). >>>> >>>> Response to your feedback: >>>> Point 1: "Port 0 is not on the good NUMA ID (-1)" >>>> I’m aware that this message is printed due to the NUMA ID being >>>> returned as -1. >>>> I've just started diving into the source code and found that the >>>> call to >>>> rte_eth_dev_socket_id() returns -1, which typically indicates an error. >>> >>> No, -1 typically translates to 'SOCKET_ID_ANY'. In order to rule out 'EINVAL' >>> in 'rte_errno', one should attempt to invoke 'rte_eth_dev_socket_id' >>> within the loop of 'RTE_ETH_FOREACH_DEV' (see examples in DPDK) and >>> print the socket ID. >>> >>>> However, the current implementation does not output the rte_errno, >>>> which could help identify the root cause. I'm working on modifying >>>> the code to print the error code for better debugging. >>>> >>>> Point 2: "NIC Link is UP" >>>> Yes, on the NIC side, the link is up. 
I'm also able to transmit
>>>> packets successfully using the testpmd application.
>>>>
>>>> Question:
>>>> Could you please confirm if the current version of the
>>>> DPDK-Burst-Replay tool supports replaying Ethernet frames larger than
>>>> 64 bytes (e.g., up to 1500 bytes)? Or has the tool been enhanced to
>>>> support this use case?
>>>
>>> The tool seems like an external application. Try to look for any
>>> mentions of API 'rte_eth_dev_set_mtu' or just the term 'mtu' in that
>>> source code. If there are no such mentions, then default MTU applies,
>>> which depends on the driver.
>>>
>>> Have you tried querying statistics to find the cause of the drops?
>>
>> Also, given the fact that the application replays some pcap traffic, have
>> you made sure the receiver (where you seek to watch the replayed traffic
>> arrive) has got 'promiscuous' mode [1] enabled? IIRC, 'test-pmd' enables
>> it by default.
>>
>> [1] https://doc.dpdk.org/api-25.07/rte__ethdev_8h.html#a5dd1dedaa45f05c72bcc35495e441e91
>>
>> Thank you.
>>
>>> Thank you.
>>>
>>>> Thanks for your time and support!
>>>>
>>>> Best regards,
>>>> Gokul K.R
>>>>
>>>> -----Original Message-----
>>>> From: Ivan Malov <ivan.malov@arknetworks.am>
>>>> Sent: Friday, July 18, 2025 5:29 PM
>>>> To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul@in.bosch.com>
>>>> Cc: users@dpdk.org; dev@dpdk.org
>>>> Subject: Re: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay
>>>>
>>>> Hi,
>>>>
>>>> (please see below)
>>>>
>>>> On Fri, 18 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> I’m currently working with the dpdk-burst-replay tool and
>>>>> encountered an issue during execution. Below are the details:
>>>>>
>>>>> ________________________________
>>>>>
>>>>> Observation:
>>>>> During replay, we received the following informational message:
>>>>>
>>>>> port 0 is not on the good numa id (-1)
>>>>
>>>> Which API was used to check this? Was API [1] used? If not, what
>>>> does it show in the absence of 'numactl' command?
>>>>
>>>> [1] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ad032e25f712e6ffeb0c19eab1ec1fd2e
>>>>
>>>>> As per the DPDK mailing list discussions, this warning is typically
>>>>> benign—often seen on NICs like Intel I225/I210, which do not report
>>>>> NUMA affinity. Hence, we proceeded with execution.
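To illustrate the earlier suggestion of querying the NUMA placement per port inside RTE_ETH_FOREACH_DEV and distinguishing SOCKET_ID_ANY from a genuine error via rte_errno, here is a small sketch. It assumes EAL is already initialised and is not code from the replay tool.

#include <errno.h>
#include <stdio.h>

#include <rte_errno.h>
#include <rte_ethdev.h>

/* Report each port's NUMA node: -1 may simply mean SOCKET_ID_ANY
 * (no affinity reported by the device), while rte_errno == EINVAL
 * would indicate an invalid port id. */
static void
print_port_numa_nodes(void)
{
	uint16_t port_id;

	RTE_ETH_FOREACH_DEV(port_id) {
		int socket;

		rte_errno = 0;
		socket = rte_eth_dev_socket_id(port_id);

		if (socket < 0 && rte_errno == EINVAL)
			printf("port %u: invalid port id\n", port_id);
		else if (socket < 0)
			printf("port %u: no NUMA affinity reported "
			       "(SOCKET_ID_ANY)\n", port_id);
		else
			printf("port %u: NUMA node %d\n", port_id, socket);
	}
}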
>>>>> ________________________________
>>>>>
>>>>> Command Used:
>>>>>
>>>>> sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay Original_MAC.pcap 0000:01:00.1
>>>>>
>>>>> Execution Output:
>>>>>
>>>>> preloading Original_MAC.pcap file (of size: 143959 bytes)
>>>>> file read at 100.00%
>>>>> read 675 pkts (for a total of 143959 bytes). max packet length = 1518 bytes.
>>>>> -> Needed MBUF size: 1648
>>>>> -> Needed number of MBUFs: 675
>>>>> -> Needed Memory = 1.061 MB
>>>>> -> Needed Hugepages of 1 GB = 1
>>>>> -> Needed CPUs: 1
>>>>> -> Create mempool of 675 mbufs of 1648 octets.
>>>>> -> Will cache 675 pkts on 1 caches.
>>>>
>>>> What does this 'cache' stand for? Does it refer to the mempool
>>>> per-lcore cache? If so, please note that, according to API [2]
>>>> documentation, cache size "must be lower or equal to
>>>> RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5", where 'n' stands for the
>>>> number of mbufs. Also, documentation says it is advised to choose
>>>> cache_size to have "n modulo cache_size == 0". Does your code meet
>>>> these requirements?
>>>>
>>>> By the looks of it, it doesn't (cache_size = n = 675). Consider to
>>>> double-check.
>>>>
>>>> [2] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5
>>>>
>>>>> ________________________________
>>>>>
>>>>> Issue:
>>>>> Despite successful parsing of the pcap file and proper
>>>>> initialization, no frames were transmitted or received on either
>>>>> the sender or receiver sides.
>>>>
>>>> Is this observation based solely on watching APIs [3] and [4] return
>>>> 0 all the time? If yes, one can consider to introduce invocations of
>>>> APIs [5], [6] and [7] in order to periodically poll and print
>>>> statistics (may be with 1-second delay), which may, for example,
>>>> shed light on mbuf allocation errors (extended stats).
>>>>
>>>> Do statistics display any interesting figures to be discussed?
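A hedged sketch of the periodic statistics polling suggested just above (APIs [5], [6] and [7]): it dumps the basic counters every second and lists any non-zero extended counters, which can reveal mbuf allocation failures. The port number passed by the caller and the fixed array bound are illustrative assumptions.

#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

#include <rte_ethdev.h>

#define MAX_XSTATS 256 /* illustrative upper bound */

static void
poll_port_stats(uint16_t port_id, int seconds)
{
	struct rte_eth_xstat_name names[MAX_XSTATS];
	struct rte_eth_xstat xstats[MAX_XSTATS];
	struct rte_eth_stats st;
	int n_names, n_vals, i, t;

	n_names = rte_eth_xstats_get_names(port_id, names, MAX_XSTATS);
	if (n_names < 0)
		n_names = 0;

	for (t = 0; t < seconds; t++) {
		sleep(1);

		if (rte_eth_stats_get(port_id, &st) == 0)
			printf("tx=%" PRIu64 " rx=%" PRIu64
			       " oerrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
			       st.opackets, st.ipackets, st.oerrors,
			       st.rx_nombuf);

		n_vals = rte_eth_xstats_get(port_id, xstats, MAX_XSTATS);
		for (i = 0; i < n_vals; i++)
			if (xstats[i].value != 0 &&
			    xstats[i].id < (uint64_t)n_names)
				printf("  %s = %" PRIu64 "\n",
				       names[xstats[i].id].name,
				       xstats[i].value);
	}
}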
>>>> [3] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102
>>>> [4] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a83e56cabbd31637efd648e3fc010392b
>>>> [5] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#adec226574c53ae413252c9b15f6f4bab
>>>> [6] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a418ad970673eb171673185e36044fd79
>>>> [7] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a300d75b583c1f5acfe5b162a5d8c0ac1
>>>>
>>>>> ________________________________
>>>>>
>>>>> Environment Details:
>>>>>
>>>>> * NIC used: Intel I225/I210
>>>>> * Hugepages configured: 1 GB
>>>>> * NUMA binding: --cpunodebind=0 --membind=0
>>>>> * OS: [Your Linux distribution, e.g., Ubuntu 20.04]
>>>>> * DPDK version: [Mention if known]
>>>>>
>>>>> ________________________________
>>>>>
>>>>> Could you please advise if any additional setup, configuration, or
>>>>> known limitations may be impacting the packet transmission?
>>>> This may be a wild suggestion from my side, but it pays to check
>>>> whether link status is "up" upon port start on both ends. One can
>>>> use API [8] to do that.
>>>>
>>>> [8] https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ac05878578e4fd9ef3551d2c1c175ebe7
>>>>
>>>> Thank you.
>>>>
>>>>> Thank you in advance for your support!
>>>>>
>>>>> Best regards,
>>>>> Gokul K R
>>>
>>
> ^ permalink raw reply	[flat|nested] 8+ messages in thread
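To complement the link-status suggestion referenced as [8] above, a minimal sketch of checking the link on a port; it assumes the port is already configured and started, and error handling is trimmed to the essentials.

#include <stdio.h>

#include <rte_ethdev.h>

/* Return 1 if the port reports link up, 0 otherwise. */
static int
port_link_is_up(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0) {
		printf("port %u: failed to read link status\n", port_id);
		return 0;
	}

	printf("port %u: link %s, %u Mbps\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed);
	return link.link_status == RTE_ETH_LINK_UP;
}

Running this on both the sender and the receiver right after port start shows whether the physical link is up on each side before any replay is attempted.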
end of thread, other threads:[~2025-07-24 16:35 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-18  5:10 Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay Gokul K R (MS/ETA7-ETAS)
2025-07-18 11:58 ` Ivan Malov
2025-07-23  5:35   ` Gokul K R (MS/ETA7-ETAS)
2025-07-23  5:58     ` Ivan Malov
2025-07-23  6:12       ` Ivan Malov
2025-07-24 10:08         ` Gokul K R (MS/ETA7-ETAS)
2025-07-24 10:24           ` Ivan Malov
     [not found] ` <PAVPR10MB73559ECA8CBD947C9A80D111A45EA@PAVPR10MB7355.EURPRD10.PROD.OUTLOOK.COM>
2025-07-24 16:35   ` Ivan Malov
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).