DPDK patches and discussions
* [dpdk-dev] Packet drops at lower tc transmit-rates.
@ 2016-04-07 19:24 Sridhar.V.Iyer
  2016-04-11 21:00 ` Dumitrescu, Cristian
  0 siblings, 1 reply; 4+ messages in thread
From: Sridhar.V.Iyer @ 2016-04-07 19:24 UTC (permalink / raw)
  To: dev

Hi all,

We are using DPDK 1.7 in our application.
We are running into an issue where a low transmit rate configured on a traffic class of a subport causes all packets of that traffic class to be dropped.
Here are a few parameters to set the context:

Packet length          = 728 bytes
Port rate              = 1 Gbps     = 125000000 bytes/s
Subport tc_period      = 10 ms
Configured TC0 rate    = 500 kbps   = 62500 bytes/s

This means that, for the given subport, tc0_credits_per_period = 10 * 62500 / 1000 = 625 bytes (computed in rte_sched_subport_config()).
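
For illustration, here is a minimal, self-contained sketch of that conversion (the helper name tc_period_to_credits is made up for this example; the equivalent arithmetic is done in rte_sched.c when the subport is configured):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper mirroring the ms-to-bytes conversion used when
 * tc_credits_per_period is derived from tc_rate and tc_period. */
static uint64_t
tc_period_to_credits(uint32_t tc_period_ms, uint64_t tc_rate_bytes_per_s)
{
	return (tc_period_ms * tc_rate_bytes_per_s) / 1000;
}

int main(void)
{
	uint32_t tc_period_ms = 10;       /* subport tc_period */
	uint64_t tc0_rate = 500000 / 8;   /* 500 kbps = 62500 bytes/s */
	uint32_t pkt_len = 728;           /* packet length in bytes */

	uint64_t tc0_credits = tc_period_to_credits(tc_period_ms, tc0_rate);

	printf("tc0_credits_per_period = %" PRIu64 " bytes\n", tc0_credits);
	printf("728-byte packet fits in one period: %s\n",
	       pkt_len <= tc0_credits ? "yes" : "no");   /* prints "no" */
	return 0;
}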

Now, there is no token bucket at the subport TC level, so there are no credits to accrue; tc0_credits is simply initialized to 625.
- This means that we'll never have "enough_credits" in grinder_credits_check() to schedule a 728-byte packet.
   - grinder_schedule() will then return 0.
     - grinder_handle() will return 0.
       - which implies that rte_sched_port_dequeue() will never dequeue any packet from this TC.
- After port->time exceeds subport->tc_time, tc0_credits is just re-initialized back to 625 again, and the cycle repeats (see the sketch below).
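
To make the chain above concrete, here is a simplified, self-contained model of the subport TC credit handling (the struct and function names are approximations for illustration only, not the actual rte_sched.c code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified model of per-TC credit state at the subport level. */
struct subport_tc {
	uint64_t tc_time;               /* time of next credit refill */
	uint32_t tc_period;             /* refill period, in time units */
	uint32_t tc_credits;            /* current credits, in bytes */
	uint32_t tc_credits_per_period; /* refill value, e.g. 625 */
};

/* Modeled on the check done before scheduling a packet of pkt_len bytes. */
static bool
credits_check(struct subport_tc *tc, uint32_t pkt_len)
{
	bool enough_credits = pkt_len <= tc->tc_credits;

	if (!enough_credits)
		return false;   /* scheduling fails, nothing is dequeued */

	tc->tc_credits -= pkt_len;
	return true;
}

/* Once port time passes tc_time, the credits are re-initialized
 * (not accumulated), so they can never exceed tc_credits_per_period. */
static void
credits_update(struct subport_tc *tc, uint64_t port_time)
{
	if (port_time >= tc->tc_time) {
		tc->tc_credits = tc->tc_credits_per_period;  /* back to 625 */
		tc->tc_time = port_time + tc->tc_period;
	}
}

int main(void)
{
	struct subport_tc tc0 = {
		.tc_time = 0, .tc_period = 10,
		.tc_credits = 625, .tc_credits_per_period = 625,
	};
	uint64_t port_time = 0;

	/* The 728-byte packet never passes the check, no matter how many
	 * refill periods elapse, because credits are reset, not accumulated. */
	for (int i = 0; i < 3; i++) {
		credits_update(&tc0, port_time);
		printf("period %d: credits=%u, check(728)=%d\n",
		       i, tc0.tc_credits, (int)credits_check(&tc0, 728));
		port_time += tc0.tc_period;
	}
	return 0;
}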

Is this a bug in the logic?
What are some of the viable workarounds?

Is this issue addressed in later releases?

Regards,
Sridhar V Iyer


^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2016-04-13  9:31 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-07 19:24 [dpdk-dev] Packet drops at lower tc transmit-rates Sridhar.V.Iyer
2016-04-11 21:00 ` Dumitrescu, Cristian
2016-04-12 23:38   ` Sridhar.V.Iyer
2016-04-13  9:31     ` Dumitrescu, Cristian
