DPDK patches and discussions
 help / color / mirror / Atom feed
* [dpdk-dev] Packet drops at lower tc transmit-rates.
@ 2016-04-07 19:24 Sridhar.V.Iyer
  2016-04-11 21:00 ` Dumitrescu, Cristian
  0 siblings, 1 reply; 4+ messages in thread
From: Sridhar.V.Iyer @ 2016-04-07 19:24 UTC (permalink / raw)
  To: dev

Hi all,

We are using DPDK 1.7 in our application.
We are running into an issue where a low transmit-rate configured on a traffic class of a subport causes complete packet drops.
Here are a few parameters to set the context:

Packet length        = 728 bytes
Port rate            = 1 Gbps     = 125000000 bytes/s
Subport tc_period    = 10 ms
Configured TC0 rate  = 500 kbps   = 62500 bytes/s

This means that for the given subport tc0_credits_per_period = 10 * 62500 / 1000 = 625 (from rte_sched_subport_config)

Now, there is no token bucket at the subport-tc level, so there are no credits to accrue. The tc0_credits are just initialized to 625.
- This means that we’ll never have “enough_credits” in grinder_credits_check to process a 728 byte packet.
   - grinder_schedule will then return 0.
     - grinder_handle will return 0.
       - which implies that rte_sched_port_dequeue will never dequeue any packet.
- After port->time exceeds the subport->tc_time, tc0_credit will be re-initialized back to 625 again.

Is this a bug in the logic?
What are some of the viable workarounds?

Is this issue taken care of in the later releases?

Regards,
Sridhar V Iyer

 <http://www.versa-networks.com/>

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-dev] Packet drops at lower tc transmit-rates.
  2016-04-07 19:24 [dpdk-dev] Packet drops at lower tc transmit-rates Sridhar.V.Iyer
@ 2016-04-11 21:00 ` Dumitrescu, Cristian
  2016-04-12 23:38   ` Sridhar.V.Iyer
  0 siblings, 1 reply; 4+ messages in thread
From: Dumitrescu, Cristian @ 2016-04-11 21:00 UTC (permalink / raw)
  To: Sridhar.V.Iyer, dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Sridhar.V.Iyer
> Sent: Thursday, April 7, 2016 8:24 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Packet drops at lower tc transmit-rates.
> 
> Hi all,
> 
> We are using DPDK 1.7 in our application.
> We are running into an issue where a lower transmit-rate configured at the
> traffic class of a subport is causing complete packet drops.
> Here are few parameters to clear up some context:
> 
> Packet length        = 728 bytes
> Port rate            = 1 Gbps     = 125000000 bytes/s
> Subport tc_period    = 10 ms
> Configured TC0 rate  = 500 kbps   = 62500 bytes/s
> 
> This means that for the given subport tc0_credits_per_period = 10 * 62500 /
> 1000 = 625 (from rte_sched_subport_config)
> 
> Now, there is no token bucket at the subport-tc level, so there are no credits
> to accrue. The tc0_credits are just initialized to 625.
> - This means that we’ll never have “enough_credits” in
> grinder_credits_check to process a 728 byte packet.
>    - grinder_schedule will then return 0.
>      - grinder_handle will return 0.
>        - which implies that the rte_sched_port_dequeue will never dequeue
> any packet.
> - After port->time exceeds the subport->tc_time, tc0_credit will be re-
> initialized back to 625 again.
> 
> Is this a bug in the logic?
> What are some of the viable workarounds?
> 
> Is this issue taken care of in the later releases?
> 
> Regards,
> Sridhar V Iyer
> 
>  <http://www.versa-networks.com/>


Hi Sridhar,

This case seems to occur only when the pipe TC is configured with a relatively low rate.

One potential workaround could be to detect the case when the pipe TC credits per period is less than the MTU and either flag it as an error or round the pipe TC credits per period up to at least 1x MTU:

	if (pipe_params->tc_credits_per_period[i] < MTU) error(…);
	if (pipe_params->tc_credits_per_period[i] < MTU) pipe_params->tc_credits_per_period[i] = MTU;

Another potential workaround could be to change the pipe TC credit update logic from straightforward re-initialization to a slightly more refined strategy: when the per-period credits are below the MTU, carry forward some of the existing credits (capped at 1x MTU) instead of discarding them completely:

	pipe->tc_credits[i] = (params->tc_credits_per_period[i] < MTU)?
		((pipe->tc_credits[i] % MTU) + params->tc_credits_per_period[i]) : 
		params->tc_credits_per_period[i];

This would give the pipe TC credits a chance to accumulate beyond the MTU every few periods, allowing a packet to be transmitted for this pipe TC. Of course, this strategy needs to be developed further.

Regards,
Cristian


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-dev] Packet drops at lower tc transmit-rates.
  2016-04-11 21:00 ` Dumitrescu, Cristian
@ 2016-04-12 23:38   ` Sridhar.V.Iyer
  2016-04-13  9:31     ` Dumitrescu, Cristian
  0 siblings, 1 reply; 4+ messages in thread
From: Sridhar.V.Iyer @ 2016-04-12 23:38 UTC (permalink / raw)
  To: Dumitrescu, Cristian; +Cc: dev

Hi Cristian,

Thanks for the response.

> 
> Another potential workaround could be to change the pipe TC credit update logic from straightforward re-initialization to a slightly more tuned strategy that, in some cases, keeps some of the existing credits, so that the existing credits are not completely lost but some of them (value capped to 1x MTU) are carried forward:
> 
> 	pipe->tc_credits[i] = (params->tc_credits_per_period[i] < MTU)?
> 		((pipe->tc_credits[i] % MTU) + params->tc_credits_per_period[i]) : 
> 		params->tc_credits_per_period[i];
> 
> This would give the chance to the pipe TC credits to accumulate and become greater than the MTU every few periods and a packet to be transmitted for this pipe TC. Of course, this strategy needs to be further developed.

This approach gave the apparent rate closest to the configured rate, irrespective of the MTU, the packet size, or the minimum packet size. I’ll use port->mtu to influence the tc_credits_per_period accumulation.

Is there any particular reason why a token bucket was not used for traffic classes?

Regards,
Sridhar

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [dpdk-dev] Packet drops at lower tc transmit-rates.
  2016-04-12 23:38   ` Sridhar.V.Iyer
@ 2016-04-13  9:31     ` Dumitrescu, Cristian
  0 siblings, 0 replies; 4+ messages in thread
From: Dumitrescu, Cristian @ 2016-04-13  9:31 UTC (permalink / raw)
  To: Sridhar.V.Iyer; +Cc: dev



> -----Original Message-----
> From: Sridhar.V.Iyer [mailto:sridhariyer@versa-networks.com]
> Sent: Wednesday, April 13, 2016 12:39 AM
> To: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] Packet drops at lower tc transmit-rates.
> 
> Hi Cristian,
> 
> Thanks for the response.
> 
> >
> > Another potential workaround could be to change the pipe TC credit
> update logic from straightforward re-initialization to a slightly more tuned
> strategy that, in some cases, keeps some of the existing credits, so that the
> existing credits are not completely lost but some of them (value capped to 1x
> MTU) are carried forward:
> >
> > 	pipe->tc_credits[i] = (params->tc_credits_per_period[i] < MTU)?
> > 		((pipe->tc_credits[i] % MTU) + params-
> >tc_credits_per_period[i]) :
> > 		params->tc_credits_per_period[i];
> >
> > This would give the chance to the pipe TC credits to accumulate and
> become greater than the MTU every few periods and a packet to be
> transmitted for this pipe TC. Of course, this strategy needs to be further
> developed.
> 
> This approach seemed to give the apparent rate closest to the configured
> rate, irrespective of the MTU, the packet size, or the min packet size. I’ll use
> the port->mtu to influence the tc_credits_per_period accumulation.
> 
> Is there any particular reason why a token bucket was not used for traffic
> classes?

Yes, all the pipe traffic classes share the rate of their pipe, i.e. credits from the same pipe token bucket. Traffic classes exist just to describe how to divide the pipe rate amongst the different types of traffic of the same user.

> 
> Regards,
> Sridhar

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2016-04-13  9:31 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-07 19:24 [dpdk-dev] Packet drops at lower tc transmit-rates Sridhar.V.Iyer
2016-04-11 21:00 ` Dumitrescu, Cristian
2016-04-12 23:38   ` Sridhar.V.Iyer
2016-04-13  9:31     ` Dumitrescu, Cristian

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).