* unable to capture packets
@ 2024-10-07 13:52 Lokesh Chakka
2024-10-07 15:21 ` Bing Zhao
0 siblings, 1 reply; 9+ messages in thread
From: Lokesh Chakka @ 2024-10-07 13:52 UTC (permalink / raw)
To: users
[-- Attachment #1: Type: text/plain, Size: 705 bytes --]
Hello,
I'm trying to capture packets using the following piece of code:
==========================================================
struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
rxq_conf.offloads = port_conf.rxmode.offloads;
rte_eth_rx_queue_setup( 0, 0, num_of_pkts_per_queue, (unsigned int)sock,
&rxq_conf, mem_pool );
rte_eth_dev_start( 0 );
while( 1 )
{
num_of_pkts_rcvd = rte_eth_rx_burst( 0, 0, mbuf, num_of_pkts_per_queue );
fprintf( stderr, "num_of_pkts_rcvd: %u\n", num_of_pkts_rcvd );
}
==========================================================
It's always printing num_of_pkts_rcvd as 0.
Can someone help me understand what the issue is?
Thanks & Regards
--
Lokesh Chakka.
[-- Attachment #2: Type: text/html, Size: 1174 bytes --]
^ permalink raw reply [flat|nested] 9+ messages in thread
* RE: unable to capture packets
2024-10-07 13:52 unable to capture packets Lokesh Chakka
@ 2024-10-07 15:21 ` Bing Zhao
2024-10-07 15:34 ` Stephen Hemminger
2024-10-07 15:36 ` Lokesh Chakka
0 siblings, 2 replies; 9+ messages in thread
From: Bing Zhao @ 2024-10-07 15:21 UTC (permalink / raw)
To: Lokesh Chakka, users
[-- Attachment #1: Type: text/plain, Size: 1096 bytes --]
Which NIC are you using?
Have you tried dpdk-testpmd or l2fwd on your setup to check if the packet can be sent and received correctly?
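For example, a minimal testpmd sanity check could look like the following (the PCI address, core list, and driver name are placeholders for your setup, not values from this thread):

```shell
# Bind the port to a DPDK-capable driver first (address is an example):
sudo dpdk-devbind.py --bind=vfio-pci 0000:3b:00.0

# Interactive testpmd on two cores with one RX/TX queue; at the prompt,
# 'start' begins forwarding and 'show port stats 0' reports RX/TX counters:
sudo dpdk-testpmd -l 0-1 -n 4 -- -i --rxq=1 --txq=1
```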
BR. Bing
From: Lokesh Chakka <lvenkatakumarchakka@gmail.com>
Sent: Monday, October 7, 2024 9:52 PM
To: users <users@dpdk.org>
Subject: unable to capture packets
External email: Use caution opening links or attachments
hello,
I'm trying to capture packets using the following piece of code :
==========================================================
struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
rxq_conf.offloads = port_conf.rxmode.offloads;
rte_eth_rx_queue_setup( 0, 0, num_of_pkts_per_queue, (unsigned int)sock, &rxq_conf, mem_pool );
rte_eth_dev_start( 0 );
while( 1 )
{
num_of_pkts_rcvd = rte_eth_rx_burst( 0, 0, mbuf, num_of_pkts_per_queue );
fprintf( stderr, "num_of_pkts_rcvd: %u\n", num_of_pkts_rcvd );
}
==========================================================
It's always printing num_of_pkts_rcvd as 0.
Can someone help me understand what the issue is ....
Thanks & Regards
--
Lokesh Chakka.
[-- Attachment #2: Type: text/html, Size: 4912 bytes --]
* Re: unable to capture packets
2024-10-07 15:21 ` Bing Zhao
@ 2024-10-07 15:34 ` Stephen Hemminger
2024-10-07 15:36 ` Lokesh Chakka
1 sibling, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2024-10-07 15:34 UTC (permalink / raw)
To: Bing Zhao; +Cc: Lokesh Chakka, users
On Mon, 7 Oct 2024 15:21:58 +0000
Bing Zhao <bingz@nvidia.com> wrote:
> Which NIC are you using?
> Have you tried dpdk-testpmd or l2fwd on your setup to check if the packet can be sent and received correctly?
>
> BR. Bing
>
> From: Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> Sent: Monday, October 7, 2024 9:52 PM
> To: users <users@dpdk.org>
> Subject: unable to capture packets
>
> External email: Use caution opening links or attachments
>
> hello,
>
> I'm trying to capture packets using the following piece of code :
>
> ==========================================================
> struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
> rxq_conf.offloads = port_conf.rxmode.offloads;
> rte_eth_rx_queue_setup( 0, 0, num_of_pkts_per_queue, (unsigned int)sock, &rxq_conf, mem_pool );
> rte_eth_dev_start( 0 );
> while( 1 )
> {
> num_of_pkts_rcvd = rte_eth_rx_burst( 0, 0, mbuf, num_of_pkts_per_queue );
> fprintf( stderr, "num_of_pkts_rcvd: %u\n", num_of_pkts_rcvd );
> }
> ==========================================================
> It's always printing num_of_pkts_rcvd as 0.
>
> Can someone help me understand what the issue is ....
>
> Thanks & Regards
> --
> Lokesh Chakka.
Also, what is the startup log from the EAL and PMD?
You can also enable debug logging.
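For instance, log verbosity can be raised per component from the EAL command line (the binary name and port argument below are placeholders; check the EAL parameters documentation for your release for the exact log-type names):

```shell
# Debug-level logs from all PMDs and from the ethdev layer:
sudo ./your_app -l 0 --log-level='pmd.*:debug' --log-level='lib.ethdev:debug' -- 0
```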
* Re: unable to capture packets
2024-10-07 15:21 ` Bing Zhao
2024-10-07 15:34 ` Stephen Hemminger
@ 2024-10-07 15:36 ` Lokesh Chakka
2024-10-07 16:02 ` Pathak, Pravin
1 sibling, 1 reply; 9+ messages in thread
From: Lokesh Chakka @ 2024-10-07 15:36 UTC (permalink / raw)
To: Bing Zhao; +Cc: users
[-- Attachment #1: Type: text/plain, Size: 1315 bytes --]
I've tried TX; it's working fine.
I'm sure the problem is only with my code.
On Mon, 7 Oct, 2024, 20:52 Bing Zhao, <bingz@nvidia.com> wrote:
> Which NIC are you using?
>
> Have you tried dpdk-testpmd or l2fwd on your setup to check if the packet
> can be sent and received correctly?
>
>
>
> BR. Bing
>
>
>
> *From:* Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> *Sent:* Monday, October 7, 2024 9:52 PM
> *To:* users <users@dpdk.org>
> *Subject:* unable to capture packets
>
>
>
> *External email: Use caution opening links or attachments*
>
>
>
> hello,
>
>
>
> I'm trying to capture packets using the following piece of code :
>
>
>
> ==========================================================
>
> struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
> rxq_conf.offloads = port_conf.rxmode.offloads;
> rte_eth_rx_queue_setup( 0, 0, num_of_pkts_per_queue, (unsigned int)sock,
> &rxq_conf, mem_pool );
> rte_eth_dev_start( 0 );
> while( 1 )
> {
> num_of_pkts_rcvd = rte_eth_rx_burst( 0, 0, mbuf, num_of_pkts_per_queue );
> fprintf( stderr, "num_of_pkts_rcvd: %u\n", num_of_pkts_rcvd );
> }
>
> ==========================================================
>
> It's always printing num_of_pkts_rcvd as 0.
>
>
>
> Can someone help me understand what the issue is ....
>
>
> Thanks & Regards
> --
> Lokesh Chakka.
>
[-- Attachment #2: Type: text/html, Size: 4019 bytes --]
* RE: unable to capture packets
2024-10-07 15:36 ` Lokesh Chakka
@ 2024-10-07 16:02 ` Pathak, Pravin
2024-10-07 22:10 ` Lokesh Chakka
0 siblings, 1 reply; 9+ messages in thread
From: Pathak, Pravin @ 2024-10-07 16:02 UTC (permalink / raw)
To: Lokesh Chakka, Bing Zhao; +Cc: users
[-- Attachment #1: Type: text/plain, Size: 1588 bytes --]
I hope num_of_pkts_per_queue is not accidentally zero.
Pravin
From: Lokesh Chakka <lvenkatakumarchakka@gmail.com>
Sent: Monday, October 7, 2024 11:36 AM
To: Bing Zhao <bingz@nvidia.com>
Cc: users <users@dpdk.org>
Subject: Re: unable to capture packets
I've tried TX. It's working fine.
I'm sure problem is only with my code.
On Mon, 7 Oct, 2024, 20:52 Bing Zhao, <bingz@nvidia.com<mailto:bingz@nvidia.com>> wrote:
Which NIC are you using?
Have you tried dpdk-testpmd or l2fwd on your setup to check if the packet can be sent and received correctly?
BR. Bing
From: Lokesh Chakka <lvenkatakumarchakka@gmail.com<mailto:lvenkatakumarchakka@gmail.com>>
Sent: Monday, October 7, 2024 9:52 PM
To: users <users@dpdk.org<mailto:users@dpdk.org>>
Subject: unable to capture packets
External email: Use caution opening links or attachments
hello,
I'm trying to capture packets using the following piece of code :
==========================================================
struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
rxq_conf.offloads = port_conf.rxmode.offloads;
rte_eth_rx_queue_setup( 0, 0, num_of_pkts_per_queue, (unsigned int)sock, &rxq_conf, mem_pool );
rte_eth_dev_start( 0 );
while( 1 )
{
num_of_pkts_rcvd = rte_eth_rx_burst( 0, 0, mbuf, num_of_pkts_per_queue );
fprintf( stderr, "num_of_pkts_rcvd: %u\n", num_of_pkts_rcvd );
}
==========================================================
It's always printing num_of_pkts_rcvd as 0.
Can someone help me understand what the issue is ....
Thanks & Regards
--
Lokesh Chakka.
[-- Attachment #2: Type: text/html, Size: 7673 bytes --]
* Re: unable to capture packets
2024-10-07 16:02 ` Pathak, Pravin
@ 2024-10-07 22:10 ` Lokesh Chakka
2024-10-08 0:23 ` Stephen Hemminger
0 siblings, 1 reply; 9+ messages in thread
From: Lokesh Chakka @ 2024-10-07 22:10 UTC (permalink / raw)
To: Pathak, Pravin; +Cc: Bing Zhao, users
[-- Attachment #1.1: Type: text/plain, Size: 1882 bytes --]
Please find the full-fledged code attached.
Thanks & Regards
--
Lokesh Chakka.
On Mon, Oct 7, 2024 at 9:32 PM Pathak, Pravin <pravin.pathak@intel.com>
wrote:
> I hope accidentally num_of_pkts_per_queue is not zero.
>
> Pravin
>
>
>
> *From:* Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> *Sent:* Monday, October 7, 2024 11:36 AM
> *To:* Bing Zhao <bingz@nvidia.com>
> *Cc:* users <users@dpdk.org>
> *Subject:* Re: unable to capture packets
>
>
>
> I've tried TX. It's working fine.
>
> I'm sure problem is only with my code.
>
>
>
> On Mon, 7 Oct, 2024, 20:52 Bing Zhao, <bingz@nvidia.com> wrote:
>
> Which NIC are you using?
>
> Have you tried dpdk-testpmd or l2fwd on your setup to check if the packet
> can be sent and received correctly?
>
>
>
> BR. Bing
>
>
>
> *From:* Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> *Sent:* Monday, October 7, 2024 9:52 PM
> *To:* users <users@dpdk.org>
> *Subject:* unable to capture packets
>
>
>
> *External email: Use caution opening links or attachments*
>
>
>
> hello,
>
>
>
> I'm trying to capture packets using the following piece of code :
>
>
>
> ==========================================================
>
> struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
> rxq_conf.offloads = port_conf.rxmode.offloads;
> rte_eth_rx_queue_setup( 0, 0, num_of_pkts_per_queue, (unsigned int)sock,
> &rxq_conf, mem_pool );
> rte_eth_dev_start( 0 );
> while( 1 )
> {
> num_of_pkts_rcvd = rte_eth_rx_burst( 0, 0, mbuf, num_of_pkts_per_queue );
> fprintf( stderr, "num_of_pkts_rcvd: %u\n", num_of_pkts_rcvd );
> }
>
> ==========================================================
>
> It's always printing num_of_pkts_rcvd as 0.
>
>
>
> Can someone help me understand what the issue is ....
>
>
> Thanks & Regards
> --
> Lokesh Chakka.
>
>
[-- Attachment #1.2: Type: text/html, Size: 6438 bytes --]
[-- Attachment #2: pmd.c --]
[-- Type: text/x-csrc, Size: 4481 bytes --]
//pmd 0 s a
#include <rte_ethdev.h>
#include <rte_malloc.h>
#include <pthread.h>
#include <signal.h>

_Bool received_sigint = false;

struct rte_eth_conf port_conf =
{
    .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, },
    .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, },
};

static void SIGINT_signal_handler( const int signal )
{
    fprintf( stderr, "\b\bReceived Interrupt Signal SIGINT (%d). Exiting...\n", signal );
    received_sigint = true;
}

int main( int argc, char **argv )
{
    uint16_t pkt_count, num_of_pkts_rcvd;
    struct rte_eth_dev_info dev_info;
    if( signal( SIGINT, SIGINT_signal_handler ) == SIG_ERR )
    {
        fprintf( stderr, "%s %d SIGINT signal handling failed\n", __func__, __LINE__ );
        exit( 1 );
    }
    const int ret = rte_eal_init( argc, argv );
    if( ret < 0 )
        rte_exit( EXIT_FAILURE, "Error with EAL initialization\n" );
    argc -= ret;
    argv += ret;
    const int port_id = atoi( argv[1] );
    if( rte_eth_dev_info_get( port_id, &dev_info ) != 0 )
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_info_get\n", __func__, __LINE__ );
    uint16_t fetched_mtu = 0;
    if( rte_eth_dev_get_mtu( port_id, &fetched_mtu ) != 0 )
    {
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_stop port id: %u errno: %u Error: %s\n", __func__, __LINE__, port_id, rte_errno, rte_strerror( rte_errno ) );
    }
    port_conf.rxmode.mtu = dev_info.max_mtu = fetched_mtu;
    const int sock = rte_eth_dev_socket_id( port_id );
    if( sock == -1 )
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_socket_id port id: %u\n", __func__, __LINE__, port_id );
    char mem_pool_name[20];
    sprintf( mem_pool_name, "pool_%u_r", port_id );
    const uint32_t num_of_pkts_per_queue = 4096;
    struct rte_mbuf *mbuf[num_of_pkts_per_queue];
    char *packet_buffer[num_of_pkts_per_queue];
    fprintf( stderr, "%s %d port id: %d num_of_pkts_per_queue: %u\n", __func__, __LINE__, port_id, num_of_pkts_per_queue );
    struct rte_mempool *mem_pool = rte_pktmbuf_pool_create( mem_pool_name, num_of_pkts_per_queue, RTE_MEMPOOL_CACHE_MAX_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, sock );
    if( mem_pool == NULL )
    {
        fprintf( stderr, "%d %s\n", rte_errno, rte_strerror(rte_errno) );
        rte_exit( EXIT_FAILURE, "%s %d rte_pktmbuf_pool_create port id: %u\n", __func__, __LINE__, port_id );
    }
    if( rte_eth_dev_configure( port_id, 1, 0, &port_conf ) != 0 )
    {
        fprintf( stderr, "%d %s\n", rte_errno, rte_strerror(rte_errno) );
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_configure port id: %u\n", __func__, __LINE__, port_id );
    }
    struct rte_eth_rxconf rxq_conf;
    rxq_conf = dev_info.default_rxconf;
    rxq_conf.offloads = port_conf.rxmode.offloads;
    if( rte_eth_rx_queue_setup( port_id, 0, num_of_pkts_per_queue, (unsigned int)sock, &rxq_conf, mem_pool ) < 0 )
    {
        fprintf( stderr, "%d %s\n", rte_errno, rte_strerror(rte_errno) );
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_rx_queue_setup port id: %u\n", __func__, __LINE__, port_id );
    }
    if( rte_eth_dev_start( port_id ) < 0 )
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_start port id: %u\n", __func__, __LINE__, port_id );
    const time_t begin_time = time( NULL );
    uint64_t pkts_sent_till_now = 0;
    for( int count=0; count<2; count++ )
    {
        num_of_pkts_rcvd = rte_eth_rx_burst( port_id, 0, mbuf, num_of_pkts_per_queue );
        fprintf( stderr, "port: %u rte_eth_rx_burst %u num_of_pkts_rcvd\n", port_id, num_of_pkts_rcvd );
        pkts_sent_till_now += num_of_pkts_rcvd;
        for( pkt_count=0; pkt_count<num_of_pkts_rcvd; pkt_count++ )
        {
            if( mbuf[pkt_count]->pkt_len != mbuf[pkt_count]->data_len )
                rte_exit( EXIT_FAILURE, "%s %d mbuf[pkt_count]->pkt_len(%u) != mbuf[pkt_count]->data_len(%u) port id: %u\n", __func__, __LINE__, mbuf[pkt_count]->pkt_len, mbuf[pkt_count]->data_len, port_id );
            if( mbuf[pkt_count]->pkt_len > 40 )
                mbuf[pkt_count]->pkt_len = 40;
            fprintf( stderr, "port: %u pkt count: %u\t", port_id, pkt_count );
            for( uint8_t i=0; i<mbuf[pkt_count]->pkt_len; i++ )
                fprintf( stderr, "%02X ", packet_buffer[pkt_count][i] );
            fprintf( stderr, "\n" );
        }
    }
    const time_t end_time = time( NULL );
    const time_t elapsed_time = end_time - begin_time;
    const uint64_t bw = ( 2048*8*pkts_sent_till_now )/ elapsed_time;
    fprintf( stderr, "%s %d time : %ld total pkts sent: %lu bandwidth: %lu\n", __func__, __LINE__, elapsed_time, pkts_sent_till_now, bw/1024/1024/1000 );
    rte_pktmbuf_free_bulk( mbuf, num_of_pkts_per_queue );
    rte_mempool_free( mem_pool );
    if( rte_eth_dev_stop( port_id ) < 0 )
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_stop port id: %u\n", __func__, __LINE__, port_id );
    return 0;
}
* Re: unable to capture packets
2024-10-07 22:10 ` Lokesh Chakka
@ 2024-10-08 0:23 ` Stephen Hemminger
2024-10-09 12:15 ` Lokesh Chakka
0 siblings, 1 reply; 9+ messages in thread
From: Stephen Hemminger @ 2024-10-08 0:23 UTC (permalink / raw)
To: Lokesh Chakka; +Cc: Pathak, Pravin, Bing Zhao, users
On Tue, 8 Oct 2024 03:40:52 +0530
Lokesh Chakka <lvenkatakumarchakka@gmail.com> wrote:
> please find the full fledged code as attachment.
>
>
> Thanks & Regards
> --
> Lokesh Chakka.
>
>
> On Mon, Oct 7, 2024 at 9:32 PM Pathak, Pravin <pravin.pathak@intel.com>
> wrote:
>
> > I hope accidentally num_of_pkts_per_queue is not zero.
> >
> > Pravin
> >
> >
> >
> > *From:* Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> > *Sent:* Monday, October 7, 2024 11:36 AM
> > *To:* Bing Zhao <bingz@nvidia.com>
> > *Cc:* users <users@dpdk.org>
> > *Subject:* Re: unable to capture packets
> >
> >
> >
> > I've tried TX. It's working fine.
> >
> > I'm sure problem is only with my code.
> >
> >
> >
> > On Mon, 7 Oct, 2024, 20:52 Bing Zhao, <bingz@nvidia.com> wrote:
> >
> > Which NIC are you using?
> >
> > Have you tried dpdk-testpmd or l2fwd on your setup to check if the packet
> > can be sent and received correctly?
> >
> >
> >
> > BR. Bing
> >
> >
> >
> > *From:* Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> > *Sent:* Monday, October 7, 2024 9:52 PM
> > *To:* users <users@dpdk.org>
> > *Subject:* unable to capture packets
> >
> >
> >
> > *External email: Use caution opening links or attachments*
> >
> >
> >
> > hello,
> >
> >
> >
> > I'm trying to capture packets using the following piece of code :
> >
> >
> >
> > ==========================================================
> >
> > struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
> > rxq_conf.offloads = port_conf.rxmode.offloads;
> > rte_eth_rx_queue_setup( 0, 0, num_of_pkts_per_queue, (unsigned int)sock,
> > &rxq_conf, mem_pool );
> > rte_eth_dev_start( 0 );
> > while( 1 )
> > {
> > num_of_pkts_rcvd = rte_eth_rx_burst( 0, 0, mbuf, num_of_pkts_per_queue );
> > fprintf( stderr, "num_of_pkts_rcvd: %u\n", num_of_pkts_rcvd );
> > }
> >
> > ==========================================================
> >
> > It's always printing num_of_pkts_rcvd as 0.
> >
> >
> >
> > Can someone help me understand what the issue is ....
> >
> >
> > Thanks & Regards
> > --
> > Lokesh Chakka.
> >
> >
Save yourself some pain, and make sure to initialize all structures, like:
struct rte_eth_rxconf rxq_conf = { };
An RX queue depth of 4K is excessive; all the packets in the mempool will be
tied up in the device. If you want to keep a pool size of 4K, try dropping
the RX descriptor count to something much smaller, like 128.
After you receive a burst of packets, you need to return them to the pool by freeing them.
* Re: unable to capture packets
2024-10-08 0:23 ` Stephen Hemminger
@ 2024-10-09 12:15 ` Lokesh Chakka
2024-10-09 13:27 ` Van Haaren, Harry
0 siblings, 1 reply; 9+ messages in thread
From: Lokesh Chakka @ 2024-10-09 12:15 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Pathak, Pravin, Bing Zhao, users
[-- Attachment #1.1: Type: text/plain, Size: 4046 bytes --]
hi,
I made some modifications as per your suggestions, but the problem is the
same: I am still not able to capture any packets.
I replaced 4096 with 512, but then rte_pktmbuf_pool_create gives an error,
so for the time being I've left it at 4K.
I feel it should not be a problem.
Please find the revised code attached.
Output is as follows:
==============================================================================================
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
ice_dev_init(): Failed to read device serial number
ice_load_pkg_type(): Active package is: 1.3.39.0, ICE OS Default Package
(double VLAN mode)
main 53 port id: 0 num_of_pkts_per_queue: 4096
ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
Received Interrupt Signal SIGINT (2). Exiting...
main 98 time : 4 total pkts rcvd: 0 bandwidth: 0
==============================================================================================
Thanks & Regards
--
Lokesh Chakka.
On Tue, Oct 8, 2024 at 5:53 AM Stephen Hemminger <stephen@networkplumber.org>
wrote:
> On Tue, 8 Oct 2024 03:40:52 +0530
> Lokesh Chakka <lvenkatakumarchakka@gmail.com> wrote:
>
> > please find the full fledged code as attachment.
> >
> >
> > Thanks & Regards
> > --
> > Lokesh Chakka.
> >
> >
> > On Mon, Oct 7, 2024 at 9:32 PM Pathak, Pravin <pravin.pathak@intel.com>
> > wrote:
> >
> > > I hope accidentally num_of_pkts_per_queue is not zero.
> > >
> > > Pravin
> > >
> > >
> > >
> > > *From:* Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> > > *Sent:* Monday, October 7, 2024 11:36 AM
> > > *To:* Bing Zhao <bingz@nvidia.com>
> > > *Cc:* users <users@dpdk.org>
> > > *Subject:* Re: unable to capture packets
> > >
> > >
> > >
> > > I've tried TX. It's working fine.
> > >
> > > I'm sure problem is only with my code.
> > >
> > >
> > >
> > > On Mon, 7 Oct, 2024, 20:52 Bing Zhao, <bingz@nvidia.com> wrote:
> > >
> > > Which NIC are you using?
> > >
> > > Have you tried dpdk-testpmd or l2fwd on your setup to check if the
> packet
> > > can be sent and received correctly?
> > >
> > >
> > >
> > > BR. Bing
> > >
> > >
> > >
> > > *From:* Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> > > *Sent:* Monday, October 7, 2024 9:52 PM
> > > *To:* users <users@dpdk.org>
> > > *Subject:* unable to capture packets
> > >
> > >
> > >
> > > *External email: Use caution opening links or attachments*
> > >
> > >
> > >
> > > hello,
> > >
> > >
> > >
> > > I'm trying to capture packets using the following piece of code :
> > >
> > >
> > >
> > > ==========================================================
> > >
> > > struct rte_eth_rxconf rxq_conf = dev_info.default_rxconf;
> > > rxq_conf.offloads = port_conf.rxmode.offloads;
> > > rte_eth_rx_queue_setup( 0, 0, num_of_pkts_per_queue, (unsigned
> int)sock,
> > > &rxq_conf, mem_pool );
> > > rte_eth_dev_start( 0 );
> > > while( 1 )
> > > {
> > > num_of_pkts_rcvd = rte_eth_rx_burst( 0, 0, mbuf, num_of_pkts_per_queue
> );
> > > fprintf( stderr, "num_of_pkts_rcvd: %u\n", num_of_pkts_rcvd );
> > > }
> > >
> > > ==========================================================
> > >
> > > It's always printing num_of_pkts_rcvd as 0.
> > >
> > >
> > >
> > > Can someone help me understand what the issue is ....
> > >
> > >
> > > Thanks & Regards
> > > --
> > > Lokesh Chakka.
> > >
> > >
>
>
> Save yourself some pain, and make sure to initialize all structures like:
> struct rte_eth_rxconf rxq_conf = { };
>
> A rx queue depth of 4K is excessive; all the packets in mempool will be
> tied up in the device. If you want to keep a pool size of 4K, try dropping
> the rx descriptors to something much smaller like 128
>
> After you receive a burst of packet you need to return them to the pool by
> freeing them.
>
[-- Attachment #1.2: Type: text/html, Size: 6067 bytes --]
[-- Attachment #2: pmd.c --]
[-- Type: text/x-csrc, Size: 4272 bytes --]
//pmd 0 s a
#include <rte_ethdev.h>
#include <rte_malloc.h>
#include <pthread.h>
#include <signal.h>

_Bool received_sigint = false;

struct rte_eth_conf port_conf =
{
    .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE, },
    .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE, },
};

static void SIGINT_signal_handler( const int signal )
{
    fprintf( stderr, "\b\bReceived Interrupt Signal SIGINT (%d). Exiting...\n", signal );
    received_sigint = true;
}

int main( int argc, char **argv )
{
    uint16_t pkt_count, num_of_pkts_rcvd;
    struct rte_eth_dev_info dev_info;
    if( signal( SIGINT, SIGINT_signal_handler ) == SIG_ERR )
    {
        fprintf( stderr, "%s %d SIGINT signal handling failed\n", __func__, __LINE__ );
        exit( 1 );
    }
    const int ret = rte_eal_init( argc, argv );
    if( ret < 0 )
        rte_exit( EXIT_FAILURE, "Error with EAL initialization\n" );
    argc -= ret;
    argv += ret;
    const int port_id = atoi( argv[1] );
    if( rte_eth_dev_info_get( port_id, &dev_info ) != 0 )
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_info_get\n", __func__, __LINE__ );
    uint16_t fetched_mtu = 0;
    if( rte_eth_dev_get_mtu( port_id, &fetched_mtu ) != 0 )
    {
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_stop port id: %u errno: %u Error: %s\n", __func__, __LINE__, port_id, rte_errno, rte_strerror( rte_errno ) );
    }
    port_conf.rxmode.mtu = dev_info.max_mtu = fetched_mtu;
    const int sock = rte_eth_dev_socket_id( port_id );
    if( sock == -1 )
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_socket_id port id: %u\n", __func__, __LINE__, port_id );
    char mem_pool_name[20];
    sprintf( mem_pool_name, "pool_%u_r", port_id );
    const uint32_t num_of_pkts_per_queue = 4096;
    struct rte_mbuf *mbuf[num_of_pkts_per_queue];
    //char *packet_buffer[num_of_pkts_per_queue];
    fprintf( stderr, "%s %d port id: %d num_of_pkts_per_queue: %u\n", __func__, __LINE__, port_id, num_of_pkts_per_queue );
    struct rte_mempool *mem_pool = rte_pktmbuf_pool_create( mem_pool_name, num_of_pkts_per_queue, RTE_MEMPOOL_CACHE_MAX_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, sock );
    if( mem_pool == NULL )
    {
        fprintf( stderr, "%d %s\n", rte_errno, rte_strerror(rte_errno) );
        rte_exit( EXIT_FAILURE, "%s %d rte_pktmbuf_pool_create port id: %u\n", __func__, __LINE__, port_id );
    }
    if( rte_eth_dev_configure( port_id, 1, 0, &port_conf ) != 0 )
    {
        fprintf( stderr, "%d %s\n", rte_errno, rte_strerror(rte_errno) );
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_configure port id: %u\n", __func__, __LINE__, port_id );
    }
    struct rte_eth_rxconf rxq_conf = {};
    rxq_conf = dev_info.default_rxconf;
    rxq_conf.offloads = port_conf.rxmode.offloads;
    if( rte_eth_rx_queue_setup( port_id, 0, num_of_pkts_per_queue, (unsigned int)sock, &rxq_conf, mem_pool ) < 0 )
    {
        fprintf( stderr, "%d %s\n", rte_errno, rte_strerror(rte_errno) );
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_rx_queue_setup port id: %u\n", __func__, __LINE__, port_id );
    }
    if( rte_eth_dev_start( port_id ) < 0 )
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_start port id: %u\n", __func__, __LINE__, port_id );
    const time_t begin_time = time( NULL );
    uint64_t pkts_rcvd_till_now = 0;
    while( received_sigint == false )
    {
        num_of_pkts_rcvd = rte_eth_rx_burst( port_id, 0, mbuf, num_of_pkts_per_queue );
        if( num_of_pkts_rcvd == 0 )
            continue;
        fprintf( stderr, "port: %u rte_eth_rx_burst %u num_of_pkts_rcvd\n", port_id, num_of_pkts_rcvd );
        pkts_rcvd_till_now += num_of_pkts_rcvd;
        for( pkt_count=0; pkt_count<num_of_pkts_rcvd; pkt_count++ )
        {
            uint8_t *packet_data = rte_pktmbuf_mtod( mbuf[pkt_count], uint8_t * );
            uint16_t pkt_len = rte_pktmbuf_pkt_len( mbuf[pkt_count] );
            for( int z = 0; z < pkt_len; z++ )
            {
                fprintf( stderr, "%02x ", packet_data[z] );
            }
            fprintf( stderr, "\n" );
            rte_pktmbuf_free( mbuf[pkt_count] );
        }
    }
    const time_t end_time = time( NULL );
    const time_t elapsed_time = end_time - begin_time;
    const uint64_t bw = ( 2048*8*pkts_rcvd_till_now )/ elapsed_time;
    fprintf( stderr, "%s %d time : %ld total pkts rcvd: %lu bandwidth: %lu\n", __func__, __LINE__, elapsed_time, pkts_rcvd_till_now, bw/1024/1024/1000 );
    //rte_pktmbuf_free_bulk( mbuf, num_of_pkts_per_queue );
    rte_mempool_free( mem_pool );
    if( rte_eth_dev_stop( port_id ) < 0 )
        rte_exit( EXIT_FAILURE, "%s %d rte_eth_dev_stop port id: %u\n", __func__, __LINE__, port_id );
    return 0;
}
* Re: unable to capture packets
2024-10-09 12:15 ` Lokesh Chakka
@ 2024-10-09 13:27 ` Van Haaren, Harry
0 siblings, 0 replies; 9+ messages in thread
From: Van Haaren, Harry @ 2024-10-09 13:27 UTC (permalink / raw)
To: Lokesh Chakka, Stephen Hemminger; +Cc: Pathak, Pravin, Bing Zhao, users
[-- Attachment #1: Type: text/plain, Size: 2782 bytes --]
> From: Lokesh Chakka <lvenkatakumarchakka@gmail.com>
> Sent: Wednesday, October 9, 2024 1:15 PM
> To: Stephen Hemminger <stephen@networkplumber.org>
> Cc: Pathak, Pravin <pravin.pathak@intel.com>; Bing Zhao <bingz@nvidia.com>; users <users@dpdk.org>
> Subject: Re: unable to capture packets
>
> hi,
Hi Chakka,
Please don't "top post" on mailing lists; a reply "inline" with context is a lot easier to follow for all current (and future!) readers.
> did certain modifications as per your suggestions. still the same problem. not able to capture any packets....!!!
Have you tried to run the DPDK example applications? Specifically, skeleton/basicfwd.c has a "known good" setup routine
and forwards packets on a single core. It may be a good place to compare your setup code against, as it is known to work.
> I replaced 4096 with 512. rte_pktmbuf_pool_create is giving an error. for the time being i've left it as 4K only.
> I feel it should not be a problem.
There is a problem somewhere, and it has not yet been root-caused in your code. This is the time to re-check
code that "seemed OK" before, because something is not behaving as you expect.
> PFA for the revised code.
Attaching code to mailing lists is not an easy way to review; perhaps create a git repo and push the code there?
Then provide a link, and future patches could easily show what improvements/fixes occur. (If you email another
version of the "pmd.c" file, readers cannot know what changes were made, which leads to a lot of duplicated review effort.)
> Output is as follows :
>
> ==============================================================================================
> EAL: Detected CPU lcores: 40
> EAL: Detected NUMA nodes: 1
> EAL: Detected shared linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> EAL: Using IOMMU type 1 (Type 1)
> ice_dev_init(): Failed to read device serial number
>
> ice_load_pkg_type(): Active package is: 1.3.39.0, ICE OS Default Package (double VLAN mode)
> main 53 port id: 0 num_of_pkts_per_queue: 4096
> ice_set_rx_function(): Using AVX2 Vector Rx (port 0).
> Received Interrupt Signal SIGINT (2). Exiting...
> main 98 time : 4 total pkts rcvd: 0 bandwidth: 0
> ==============================================================================================
<snip> previous conversation, as top-posted answers, and discussion was hard to follow.
It seems "num_of_pkts_per_queue" is used everywhere (mempool size, mbuf batch array size, rx_burst size, etc)
This is almost certainly impacting things somehow. Review skeleton/basicfwd.c for better sizes/values.
Hope that helps! Regards, -Harry
[-- Attachment #2: Type: text/html, Size: 13098 bytes --]
end of thread, other threads:[~2024-10-09 13:27 UTC | newest]
Thread overview: 9+ messages
2024-10-07 13:52 unable to capture packets Lokesh Chakka
2024-10-07 15:21 ` Bing Zhao
2024-10-07 15:34 ` Stephen Hemminger
2024-10-07 15:36 ` Lokesh Chakka
2024-10-07 16:02 ` Pathak, Pravin
2024-10-07 22:10 ` Lokesh Chakka
2024-10-08 0:23 ` Stephen Hemminger
2024-10-09 12:15 ` Lokesh Chakka
2024-10-09 13:27 ` Van Haaren, Harry