DPDK usage discussions
* [dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst
       [not found] <277260559.2895913.1521977412628.ref@mail.yahoo.com>
@ 2018-03-25 11:30 ` MAC Lee
  2018-03-25 12:14   ` Filip Janiszewski
  0 siblings, 1 reply; 3+ messages in thread
From: MAC Lee @ 2018-03-25 11:30 UTC (permalink / raw)
  To: users, Filip Janiszewski

Hi Filip,
    Which DPDK version are you using? You can take a look at the DPDK source code; the rxdrop counter may not be implemented in DPDK, so you always get 0 in rxdrop.
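
For what it's worth, a rough (untested) sketch of how to dump both the generic stats and the xstats of a port, so you can see which drop counters the PMD actually exposes (dump_rx_drop_counters is just an illustrative name; imissed is where RX-queue-full drops normally show up):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Untested sketch: print the generic stats and the full xstats list for
 * a port. imissed counts packets dropped by the HW because the RX queues
 * were full; the xstats names show which counters the PMD really fills. */
static void dump_rx_drop_counters(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) == 0)
        printf("imissed=%" PRIu64 " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
               stats.imissed, stats.ierrors, stats.rx_nombuf);

    int n = rte_eth_xstats_get(port_id, NULL, 0); /* query the count first */
    if (n <= 0)
        return;

    struct rte_eth_xstat *xstats = malloc(sizeof(*xstats) * n);
    struct rte_eth_xstat_name *names = malloc(sizeof(*names) * n);

    if (xstats != NULL && names != NULL &&
        rte_eth_xstats_get(port_id, xstats, n) == n &&
        rte_eth_xstats_get_names(port_id, names, n) == n) {
        for (int i = 0; i < n; i++)
            printf("%s = %" PRIu64 "\n", names[i].name, xstats[i].value);
    }

    free(xstats);
    free(names);
}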

Thanks,
Marco
--------------------------------------------
On 25/3/18 (Sun), Filip Janiszewski <contact@filipjaniszewski.com> wrote:

 Subject: [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
 To: users@dpdk.org
 Date: Sunday, 25 March 2018, 6:33 PM
 
 Hi Everybody,
 
 I have a weird drop problem, and the best way to understand my question
 is to have a look at this simple snippet (stripped of everything that
 isn't relevant):
 
 while( 1 )
 {
     if( config->running == false ) {
         break;
     }
     num_of_pkt = rte_eth_rx_burst( config->port_id,
                                    config->queue_idx,
                                    buffers,
                                    MAX_BURST_DEQ_SIZE);
     if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
         rx_ring_full = true; //probably not the best name
     }

     if( likely( num_of_pkt > 0 ) )
     {
         pk_captured += num_of_pkt;

         num_of_enq_pkt = rte_ring_sp_enqueue_bulk(config->incoming_pkts_ring,
                                                   (void*)buffers,
                                                   num_of_pkt,
                                                   &rx_ring_free_space);
         //if num_of_enq_pkt == 0 free the mbufs..
     }
 }
 
 This loop is retrieving packets from the device and pushing them into a
 queue for further processing by another lcore.

 When I run a test with a Mellanox card sending 20M (20878300) packets
 at 2.5M p/s, the loop seems to miss some packets and pk_captured
 always ends up around 19M or so.

 rx_ring_full is never true, which means that num_of_pkt is always <
 MAX_BURST_DEQ_SIZE, so according to the documentation I should not
 have drops at the HW level. Also, num_of_enq_pkt is never 0, which
 means that all the packets are enqueued.

 Now, if I remove the rte_ring_sp_enqueue_bulk call from that snippet
 (and make sure to release all the mbufs), then pk_captured is always
 exactly equal to the number of packets I've sent to the NIC.

 So it seems (though I can't quite accept this idea) that
 rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to
 rte_eth_rx_burst and the next some packets are dropped due to a full
 ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst)
 always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there
 were always sufficient room for the packets?

 Is anybody able to help me understand what's happening here?

 Note, MAX_BURST_DEQ_SIZE is 512.
 
 Thanks
 


* Re: [dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst
  2018-03-25 11:30 ` [dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst MAC Lee
@ 2018-03-25 12:14   ` Filip Janiszewski
  0 siblings, 0 replies; 3+ messages in thread
From: Filip Janiszewski @ 2018-03-25 12:14 UTC (permalink / raw)
  To: MAC Lee, users

Thanks Marco,

I'm running DPDK 18.02. I can accept that the counter may not be
implemented yet, but then why does rte_eth_rx_burst never return nb_pkts?
According to:
http://dpdk.org/doc/api/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102

"The rte_eth_rx_burst() function returns the number of packets actually
retrieved, which is the number of rte_mbuf data structures effectively
supplied into the rx_pkts array. A return value equal to nb_pkts
indicates that the RX queue contained at least rx_pkts packets, and this
is likely to signify that other received packets remain in the input queue."

So in case of drops I would expect the RX queue to be full and
rte_eth_rx_burst to return nb_pkts, but this never happens, and it seems
that there's plenty of space in the ring. Is that correct?
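
One thing I could try, to double check how much room the HW ring really has, is to poll the RX descriptor occupancy right after each burst. A rough, untested sketch (log_rx_queue_usage is just an illustrative name, and not every PMD supports the query):

#include <stdio.h>
#include <rte_ethdev.h>

/* Untested sketch: report how many descriptors of the HW RX ring are
 * currently in use. A negative return means the PMD does not support
 * the query (-ENOTSUP) or the port/queue is invalid. */
static void log_rx_queue_usage(uint16_t port_id, uint16_t queue_id)
{
    int used = rte_eth_rx_queue_count(port_id, queue_id);

    if (used < 0)
        printf("rx_queue_count not available (err %d)\n", used);
    else
        printf("RX queue %u: %d descriptors in use\n",
               (unsigned int)queue_id, used);
}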

Thanks

On 03/25/2018 01:30 PM, MAC Lee wrote:
> Hi Filip,
>     Which DPDK version are you using? You can take a look at the DPDK source code; the rxdrop counter may not be implemented in DPDK, so you always get 0 in rxdrop.
> 
> Thanks,
> Marco
> --------------------------------------------
> On 25/3/18 (Sun), Filip Janiszewski <contact@filipjaniszewski.com> wrote:
> 
>  Subject: [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
>  To: users@dpdk.org
>  Date: Sunday, 25 March 2018, 6:33 PM
>  
>  Hi Everybody,
>  
>  I have a weird drop problem, and the best way to understand my question
>  is to have a look at this simple snippet (stripped of everything that
>  isn't relevant):
>  
>  while( 1 )
>  {
>      if( config->running == false ) {
>          break;
>      }
>      num_of_pkt = rte_eth_rx_burst( config->port_id,
>                                     config->queue_idx,
>                                     buffers,
>                                     MAX_BURST_DEQ_SIZE);
>      if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
>          rx_ring_full = true; //probably not the best name
>      }
> 
>      if( likely( num_of_pkt > 0 ) )
>      {
>          pk_captured += num_of_pkt;
> 
>          num_of_enq_pkt = rte_ring_sp_enqueue_bulk(config->incoming_pkts_ring,
>                                                    (void*)buffers,
>                                                    num_of_pkt,
>                                                    &rx_ring_free_space);
>          //if num_of_enq_pkt == 0 free the mbufs..
>      }
>  }
>  
>  This loop is retrieving packets from the device and pushing them into a
>  queue for further processing by another lcore.
>
>  When I run a test with a Mellanox card sending 20M (20878300) packets
>  at 2.5M p/s, the loop seems to miss some packets and pk_captured
>  always ends up around 19M or so.
>
>  rx_ring_full is never true, which means that num_of_pkt is always <
>  MAX_BURST_DEQ_SIZE, so according to the documentation I should not
>  have drops at the HW level. Also, num_of_enq_pkt is never 0, which
>  means that all the packets are enqueued.
>
>  Now, if I remove the rte_ring_sp_enqueue_bulk call from that snippet
>  (and make sure to release all the mbufs), then pk_captured is always
>  exactly equal to the number of packets I've sent to the NIC.
>
>  So it seems (though I can't quite accept this idea) that
>  rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to
>  rte_eth_rx_burst and the next some packets are dropped due to a full
>  ring on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst)
>  always smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there
>  were always sufficient room for the packets?
>
>  Is anybody able to help me understand what's happening here?
>
>  Note, MAX_BURST_DEQ_SIZE is 512.
>  
>  Thanks
>  
> 

-- 
BR, Filip
+48 666 369 823


* [dpdk-users] Packets drop while fetching with rte_eth_rx_burst
@ 2018-03-25 10:33 Filip Janiszewski
  0 siblings, 0 replies; 3+ messages in thread
From: Filip Janiszewski @ 2018-03-25 10:33 UTC (permalink / raw)
  To: users

Hi Everybody,

I have a weird drop problem, and the best way to understand my question
is to have a look at this simple snippet (stripped of everything that
isn't relevant):

while( 1 )
{
    if( config->running == false ) {
        break;
    }
    num_of_pkt = rte_eth_rx_burst( config->port_id,
                                   config->queue_idx,
                                   buffers,
                                   MAX_BURST_DEQ_SIZE);
    if( unlikely( num_of_pkt == MAX_BURST_DEQ_SIZE ) ) {
        rx_ring_full = true; //probably not the best name
    }

    if( likely( num_of_pkt > 0 ) )
    {
        pk_captured += num_of_pkt;

        num_of_enq_pkt = rte_ring_sp_enqueue_bulk(config->incoming_pkts_ring,
                                                  (void*)buffers,
                                                  num_of_pkt,
                                                  &rx_ring_free_space);
        //if num_of_enq_pkt == 0 free the mbufs..
    }
}

This loop is retrieving packets from the device and pushing them into a
queue for further processing by another lcore.

When I run a test with a Mellanox card sending 20M (20878300) packets at
2.5M p/s, the loop seems to miss some packets and pk_captured always ends
up around 19M or so.

rx_ring_full is never true, which means that num_of_pkt is always <
MAX_BURST_DEQ_SIZE, so according to the documentation I should not have
drops at the HW level. Also, num_of_enq_pkt is never 0, which means that
all the packets are enqueued.

Now, if I remove the rte_ring_sp_enqueue_bulk call from that snippet
(and make sure to release all the mbufs), then pk_captured is always
exactly equal to the number of packets I've sent to the NIC.
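
That release step is nothing fancy, essentially just freeing every mbuf of the burst; a minimal sketch (free_burst is just an illustrative name):

#include <rte_mbuf.h>

/* Drop a burst of mbufs that will not be processed further, e.g. when
 * the enqueue into the ring fails or when the enqueue call is removed
 * for testing. */
static void free_burst(struct rte_mbuf **bufs, uint16_t n)
{
    for (uint16_t i = 0; i < n; i++)
        rte_pktmbuf_free(bufs[i]);
}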

So it seems (though I can't quite accept this idea) that
rte_ring_sp_enqueue_bulk is somehow too slow, and between one call to
rte_eth_rx_burst and the next some packets are dropped due to a full ring
on the NIC. But then why is num_of_pkt (from rte_eth_rx_burst) always
smaller than MAX_BURST_DEQ_SIZE (much smaller), as if there were always
sufficient room for the packets?
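
One rough way to test that "too slow" idea would be to time the enqueue call itself with the TSC helpers. An untested sketch (timed_enqueue is a made-up wrapper and the cycle threshold is arbitrary):

#include <inttypes.h>
#include <stdio.h>
#include <rte_cycles.h>
#include <rte_ring.h>

/* Wrap the single-producer bulk enqueue and report how many TSC cycles
 * it took; only the slow calls are printed so the measurement does not
 * flood the datapath with I/O. */
static unsigned int timed_enqueue(struct rte_ring *r, void **objs,
                                  unsigned int n, unsigned int *free_space)
{
    uint64_t start = rte_rdtsc();
    unsigned int enq = rte_ring_sp_enqueue_bulk(r, objs, n, free_space);
    uint64_t cycles = rte_rdtsc() - start;

    if (cycles > 10000) /* arbitrary threshold, tune for your setup */
        printf("slow enqueue: %" PRIu64 " cycles for %u pkts\n", cycles, n);

    return enq;
}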

Is anybody able to help me understand what's happening here?

Note, MAX_BURST_DEQ_SIZE is 512.

Thanks


end of thread, other threads:[~2018-03-25 12:15 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <277260559.2895913.1521977412628.ref@mail.yahoo.com>
2018-03-25 11:30 ` [dpdk-users] Re: Packets drop while fetching with rte_eth_rx_burst MAC Lee
2018-03-25 12:14   ` Filip Janiszewski
2018-03-25 10:33 [dpdk-users] " Filip Janiszewski
