DPDK usage discussions
* RE: [dpdk-users] Accessing packet data from different lcores
@ 2022-01-12  0:51 Ramin.Taraz
  2022-02-23 15:15 ` Ramin.Taraz
  0 siblings, 1 reply; 5+ messages in thread
From: Ramin.Taraz @ 2022-01-12  0:51 UTC (permalink / raw)
  To: Ramin.Taraz; +Cc: users

Hi All,
Any chance I can get some comments on this question?
Do I need to provide any more info?


* RE: [dpdk-users] Accessing packet data from different lcores
  2022-01-12  0:51 [dpdk-users] Accessing packet data from different lcores Ramin.Taraz
@ 2022-02-23 15:15 ` Ramin.Taraz
  2022-02-23 15:58   ` Stephen Hemminger
  0 siblings, 1 reply; 5+ messages in thread
From: Ramin.Taraz @ 2022-02-23 15:15 UTC (permalink / raw)
  To: users

Back in December of 2021, I posted a question about accessing the buf_addr pointer from different cores, which, oddly, wasn't working properly.

The original question is pasted below.

I was specifically using the packet_ordering example, as described below, running it with the --disable-reorder option.

The problem of buf_addr having the wrong value on different cores turned out to be the result of a bug in the packet_ordering example.

The example lets the user choose whether or not to stamp packets with a sequence number, so the performance difference can be compared.

When the user asks not to stamp with a sequence number, rx_thread still doesn't disable the stamping:

rx_thread()
{
    ...
    /* mark sequence number */
    for (i = 0; i < nb_rx_pkts;)
        *rte_reorder_seqn(pkts[i++]) = seqn++;
    ...
}

When reordering is disabled, rte_reorder_create() isn't called, so the variable rte_reorder_seqn_dynfield_offset stays at its default value of -1.  The write *rte_reorder_seqn(pkts[i++]) = seqn++; then goes through that bogus -1 offset and corrupts buf_addr.
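
For context, rte_reorder_seqn() resolves the sequence-number dynfield as an offset from the start of the mbuf, roughly like this (a simplified sketch of the mechanism, not the actual library source):

/*
 * Simplified sketch only; see the real rte_reorder_seqn() in the
 * reorder library.  With dynfield_offset == -1 the returned pointer
 * sits one byte before the mbuf, so the 32-bit store overlaps the
 * first bytes of mbuf->buf_addr.
 */
static inline uint32_t *
seqn_field(struct rte_mbuf *mbuf, int dynfield_offset)
{
    return (uint32_t *)((uintptr_t)mbuf + dynfield_offset);
}

On a little-endian machine a small sequence number written through that pointer zeroes the low bytes of buf_addr, which matches the 100e30080 -> 100000000 change shown in the original question below.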


The proper fix is to skip stamping the packets with a sequence number in dpdk-21.11/examples/packet_ordering/main.c:rx_thread() when the --disable-reorder flag is set.
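
Something along these lines (just a sketch; I'm assuming the example's --disable-reorder option is visible in rx_thread as a flag named disable_reorder):

/*
 * Sketch of the fix (disable_reorder is an assumed flag holding the
 * --disable-reorder option): only stamp sequence numbers when the
 * reorder logic that registers the dynfield is actually in use.
 */
if (!disable_reorder) {
    /* mark sequence number */
    for (i = 0; i < nb_rx_pkts;)
        *rte_reorder_seqn(pkts[i++]) = seqn++;
}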


--------------------------

I have been playing with DPDK 21.11 for a week or two and have run into something that has me scratching my head a bit.

I'm looking at the packet_ordering example.  I'm running this sample with three cores: 1 RX, 1 worker, and 1 TX.

dpdk-packet_ordering  -l 0-3  --  -p 3 --disable-reorder

In this example:
- The RX thread reads packets from the receive queue and puts them in the rx_to_workers ring.
- The worker thread reads from the rx_to_workers ring, changes the port number, and enqueues to the workers_to_tx ring.
- The TX thread reads from the workers_to_tx ring and calls rte_eth_tx_buffer().


What I'd like to do is access the packet content in the worker thread and print out a few bytes from it.
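
Roughly what I'm doing in the worker looks like this (a sketch with illustrative names, not the exact example code):

/*
 * Sketch of the worker-side access (ring_in and MAX_PKTS_BURST are
 * illustrative names): dequeue a burst and peek at the packet data.
 */
struct rte_mbuf *burst[MAX_PKTS_BURST];
unsigned int nb, i;

nb = rte_ring_dequeue_burst(ring_in, (void *)burst, MAX_PKTS_BURST, NULL);
for (i = 0; i < nb; i++) {
    const uint8_t *data = rte_pktmbuf_mtod(burst[i], const uint8_t *);
    printf("mbuf=%p buf_addr=%p byte0=%02x\n",
           (void *)burst[i], burst[i]->buf_addr, data[0]);
}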

What I'm finding is that the mbuf_addr value (the mbuf's buf_addr field), for the same mbuf, read from the RX thread is different from the value read in the worker or TX thread.

For example, when printing out the address of the mbuf and its mbuf_addr value in the three threads, I get:

rx_thread
mbuf      = 100e30000
mbuf_addr = 100e30080

worker_thread
mbuf      = 100e30000
mbuf_addr = 100000000

tx_thread
mbuf      = 100e30000
mbuf_addr = 100000000


So although the mbuf is the same, the mbuf_addr is different.  The packet content is (obviously?) different if read in the RX thread vs. in the worker or TX thread.

Is this what is supposed to happen?

Basically: how do I get access to the actual Ethernet packet, for reading or modifying, on different lcores, in this case the worker thread?


* Re: [dpdk-users] Accessing packet data from different lcores
  2022-02-23 15:15 ` Ramin.Taraz
@ 2022-02-23 15:58   ` Stephen Hemminger
  2022-02-23 16:10     ` Ramin.Taraz
  0 siblings, 1 reply; 5+ messages in thread
From: Stephen Hemminger @ 2022-02-23 15:58 UTC (permalink / raw)
  To: Ramin.Taraz; +Cc: users

On Wed, 23 Feb 2022 15:15:43 +0000
"Ramin.Taraz@gd-ms.com" <Ramin.Taraz@gd-ms.com> wrote:

> [...]
> 
> So although the mbuf is the same, the mbuf_addr is different.  The packet content is (obviously?) different if read in the RX thread vs. in the worker or TX thread.
> 
> Is this what is supposed to happen?
> 
> Basically: how do I get access to the actual Ethernet packet, for reading or modifying, on different lcores, in this case the worker thread?

If this field is going to be referenced by other cores, the access needs to be done
inside a lock or with atomic builtin primitives.


* RE: [dpdk-users] Accessing packet data from different lcores
  2022-02-23 15:58   ` Stephen Hemminger
@ 2022-02-23 16:10     ` Ramin.Taraz
  2022-02-23 17:03       ` Stephen Hemminger
  0 siblings, 1 reply; 5+ messages in thread
From: Ramin.Taraz @ 2022-02-23 16:10 UTC (permalink / raw)
  To: users



-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org> 
If this field is going to be referenced by other cores, the access needs to be done inside a lock or with atomic builtin primitives.


Mbufs are passed from core to core via rings, so in practice only one core accesses a given mbuf at any time.
Why should it be locked against multiple access?


* Re: [dpdk-users] Accessing packet data from different lcores
  2022-02-23 16:10     ` Ramin.Taraz
@ 2022-02-23 17:03       ` Stephen Hemminger
  0 siblings, 0 replies; 5+ messages in thread
From: Stephen Hemminger @ 2022-02-23 17:03 UTC (permalink / raw)
  To: Ramin.Taraz; +Cc: users

On Wed, 23 Feb 2022 16:10:00 +0000
"Ramin.Taraz@gd-ms.com" <Ramin.Taraz@gd-ms.com> wrote:

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org> 
> If this field is going to be referenced by other cores, the access needs to be done inside a lock or with atomic builtin primitives.
> 
> 
> Mbufs are passed from core to core via rings, so in practice only one core accesses a given mbuf at any time.
> Why should it be locked against multiple access?

More details, please: which CPU type, and what type of ring?

If you are using DPDK rings, they use atomic operations which are equivalent to
locks.  

You should experiment with adding a thread fence (__atomic_thread_fence) to make
sure this is not your problem.
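
For example (just a sketch; placing the fence right after the worker's ring dequeue is an assumption):

/* Sketch: acquire fence after dequeuing, before reading the mbuf,
 * to rule out an ordering/visibility problem. */
nb = rte_ring_dequeue_burst(ring_in, (void *)burst, MAX_PKTS_BURST, NULL);
__atomic_thread_fence(__ATOMIC_ACQUIRE);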


