DPDK usage discussions
* [dpdk-users] Test Pipeline application - lockup
From: Vijay S @ 2015-12-29 10:23 UTC
  To: users

Hi folks,

I am running the test pipeline application with an extendible bucket hash
table and a 16-byte key size. I have modified the application so that
EVERY port gets its own 3 cores (Rx core, worker core, and Tx core)
instead of ALL ports mapping to a single Rx/Tx core as in the original
implementation. The goal is to increase the overall throughput of the
application.

In the app_main_loop_worker_pipeline_hash function, I am instantiating a
new pipeline instance for each port and stitching its rings_rx and
rings_tx accordingly. With these changes I get a throughput of ~13 Mpps
on one 10G port. But when I send high-rate traffic (13 Mpps each) into
BOTH ports simultaneously, the traffic stops completely. Stopping and
restarting the traffic doesn't help. It almost looks like the pipeline
is locked up and not seeing any packets in the Rx rings.
Has anyone encountered this before? Any idea how to debug it or what
could be going wrong? I appreciate your help!
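
For context, the per-port setup is roughly the following (a simplified
sketch, not the exact code; app.rings_rx/app.rings_tx are the ring arrays
from test_pipeline, and the hash table configuration is elided):

#include <rte_lcore.h>
#include <rte_pipeline.h>
#include <rte_port_ring.h>
#include "main.h"   /* test_pipeline globals: struct app_params app */

/* One pipeline instance per port: read from the ring filled by this
 * port's Rx core, write to the ring drained by its Tx core. The hash
 * table creation and port-to-table connection are elided here. */
static struct rte_pipeline *
worker_pipeline_create(uint32_t port_id)
{
	struct rte_pipeline_params pipeline_params = {
		.name = "PIPELINE",
		.socket_id = (int) rte_socket_id(),
	};
	struct rte_pipeline *p = rte_pipeline_create(&pipeline_params);

	/* Input port: this port's Rx ring */
	struct rte_port_ring_reader_params ring_in = {
		.ring = app.rings_rx[port_id],
	};
	struct rte_pipeline_port_in_params port_in_params = {
		.ops = &rte_port_ring_reader_ops,
		.arg_create = &ring_in,
		.burst_size = 64,
	};
	uint32_t port_in_id;
	rte_pipeline_port_in_create(p, &port_in_params, &port_in_id);

	/* Output port: this port's Tx ring */
	struct rte_port_ring_writer_params ring_out = {
		.ring = app.rings_tx[port_id],
		.tx_burst_sz = 64,
	};
	struct rte_pipeline_port_out_params port_out_params = {
		.ops = &rte_port_ring_writer_ops,
		.arg_create = &ring_out,
	};
	uint32_t port_out_id;
	rte_pipeline_port_out_create(p, &port_out_params, &port_out_id);

	/* ... table creation, port_in -> table connection, enable ... */
	return p;
}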

Regards,
Vijay


* Re: [dpdk-users] Test Pipeline application - lockup
From: Singh, Jasvinder @ 2015-12-29 11:42 UTC
  To: Vijay S, users

Hi Vijay,

> 
> I am running the test pipeline application with an extendible bucket hash
> table and a 16-byte key size. I have modified the application so that
> EVERY port gets its own 3 cores (Rx core, worker core, and Tx core)
> instead of ALL ports mapping to a single Rx/Tx core as in the original
> implementation. The goal is to increase the overall throughput of the
> application.
> 
> In the app_main_loop_worker_pipeline_hash function, I am instantiating a
> new pipeline instance for each port and stitching its rings_rx and
> rings_tx accordingly. With these changes I get a throughput of ~13 Mpps
> on one 10G port. But when I send high-rate traffic (13 Mpps each) into
> BOTH ports simultaneously, the traffic stops completely. Stopping and
> restarting the traffic doesn't help. It almost looks like the pipeline
> is locked up and not seeing any packets in the Rx rings.
> Has anyone encountered this before? Any idea how to debug it or what
> could be going wrong? I appreciate your help!

 
On the pipeline core, have you created a separate table for each pipeline
instance? Each pipeline instance must have an input port, a table, and an
output port configured, and they must be linked properly to create a path
for the packets.
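
Something along these lines for every pipeline instance (a sketch only; I
use the stub table here for brevity, the extendible bucket hash table
parameters go in its place, and p/port_in_id/port_out_id come from the
per-port setup above):

#include <rte_debug.h>
#include <rte_pipeline.h>
#include <rte_table_stub.h>

/* Per pipeline instance: create its own table, send table misses to
 * the output port, then connect and enable the input port. An input
 * port that is not connected to a table, or not enabled, never
 * drains its ring. (Stub table used for brevity; the extendible
 * bucket hash parameters go here instead.) */
struct rte_pipeline_table_params table_params = {
	.ops = &rte_table_stub_ops,
	.arg_create = NULL,
	.action_data_size = 0,
};
uint32_t table_id;
if (rte_pipeline_table_create(p, &table_params, &table_id) != 0)
	rte_panic("table create failed\n");

/* Default entry: forward lookup misses to this port's output */
struct rte_pipeline_table_entry default_entry = {
	.action = RTE_PIPELINE_ACTION_PORT,
	.port_id = port_out_id,
};
struct rte_pipeline_table_entry *default_entry_ptr;
rte_pipeline_table_default_entry_add(p, table_id, &default_entry,
	&default_entry_ptr);

rte_pipeline_port_in_connect_to_table(p, port_in_id, table_id);
rte_pipeline_port_in_enable(p, port_in_id);

/* Verify the instance is fully wired before running it */
if (rte_pipeline_check(p) < 0)
	rte_panic("pipeline consistency check failed\n");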

Jasvinder



* Re: [dpdk-users] Test Pipeline application - lockup
From: Vijay S @ 2015-12-30  4:58 UTC
  To: Singh, Jasvinder; +Cc: users

Hi Jasvinder,

On Tue, Dec 29, 2015 at 3:42 AM, Singh, Jasvinder <jasvinder.singh@intel.com> wrote:

> Hi Vijay,
> 
> On the pipeline core, have you created a separate table for each
> pipeline instance? Each pipeline instance must have an input port, a
> table, and an output port configured, and they must be linked properly
> to create a path for the packets.


Moving to the multi-consumer ring APIs and allocating an Rx mbuf pool per
port fixed the lockup issue. I am now able to receive packets on all the
ports without any lockup. Thanks for your help.
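
For the archives, the changes amounted to roughly the following (a sketch,
not the exact code; the ring/pool sizes and the app.pools array name are
illustrative):

#include <stdio.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

char name[64];

/* Rings: flags = 0 creates a multi-producer/multi-consumer ring.
 * The RING_F_SP_ENQ | RING_F_SC_DEQ fast paths are only safe when
 * exactly one core enqueues and exactly one core dequeues. */
snprintf(name, sizeof(name), "ring_rx_%u", port_id);
app.rings_rx[port_id] = rte_ring_create(name, 1024,
	rte_socket_id(), 0 /* MP/MC */);

/* One Rx mbuf pool per port, so heavy Rx on one port can no longer
 * starve the other port of buffers. */
snprintf(name, sizeof(name), "pool_rx_%u", port_id);
app.pools[port_id] = rte_pktmbuf_pool_create(name,
	32 * 1024,			/* number of mbufs */
	256,				/* per-lcore cache size */
	0,				/* private area size */
	RTE_MBUF_DEFAULT_BUF_SIZE,
	rte_socket_id());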

Regards,
Vijay

