DPDK usage discussions
From: "Pathak, Pravin" <pravin.pathak@intel.com>
To: "Trahe, Fiona" <fiona.trahe@intel.com>,
	Changchun Zhang <changchun.zhang@oracle.com>,
	"users@dpdk.org" <users@dpdk.org>
Cc: "Trahe, Fiona" <fiona.trahe@intel.com>
Subject: Re: [dpdk-users] Run-to-completion or Pipe-line for QAT PMD in DPDK
Date: Fri, 18 Jan 2019 14:29:04 +0000	[thread overview]
Message-ID: <168A68C163D584429EF02A476D5274424DEA9B7C@FMSMSX108.amr.corp.intel.com> (raw)
In-Reply-To: <348A99DA5F5B7549AA880327E580B435896CD08F@IRSMSX101.ger.corp.intel.com>

Hi Alex -

-----Original Message-----
From: users [mailto:users-bounces@dpdk.org] On Behalf Of Trahe, Fiona
Sent: Friday, January 18, 2019 8:14 AM
To: Changchun Zhang <changchun.zhang@oracle.com>; users@dpdk.org
Cc: Trahe, Fiona <fiona.trahe@intel.com>
Subject: Re: [dpdk-users] Run-to-completion or Pipe-line for QAT PMD in DPDK

Hi Alex,

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Changchun 
> Zhang
> Sent: Thursday, January 17, 2019 11:01 PM
> To: users@dpdk.org
> Subject: [dpdk-users] Run-to-completion or Pipe-line for QAT PMD in 
> DPDK
> 
> Hi,
> 
> 
> 
> I have a user question about using the QAT device in DPDK.
> 
> In a real design, after calling enqueue_burst() on the specified
> queue pair on one of the lcores, which of the following is usually done?
> 
> 1.     Should we do run-to-completion and call dequeue_burst(), waiting for the device to finish the
> crypto operation?
> 
> 2.     Or should we do pipeline, in which we return right after enqueue_burst() and release the CPU,
> and call dequeue_burst() in another thread?
> 
> Option 1 is more synchronous and can be seen in all the DPDK
> crypto examples, while option 2 is asynchronous, which I have never seen in any reference design (unless I missed something).
[Fiona]
Option 2 is not possible with QAT - the dequeue must be called in the same thread as the enqueue. This is optimised without atomics for best performance - if this is a problem let us know. 
However, best performance is not quite option 1 either - it is not a synchronous blocking method.
If you enqueue and then go straight to dequeue, you're not getting the best advantage from the cycles freed up by offloading.
That is, it's best to enqueue a burst, then go do some other work, like collecting more requests for the next enqueue or other processing, and then dequeue. Take and process whatever ops are dequeued - this will not necessarily match the number you've enqueued; it depends on how quickly you call the dequeue.
Don't wait until all the enqueued ops are dequeued before enqueuing the next batch.
So it's asynchronous, but in the same thread.
You'll get best throughput when you keep the input filled up, so the device always has operations to work on, and regularly dequeue a burst. Dequeuing too often wastes cycles on the overhead of calling the API; dequeuing too rarely causes the device to back up. Ideally, tune your application to find the sweet spot between these two extremes.
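In rough pseudo-C, the loop looks something like the sketch below (dev_id, qp_id, the burst size and the gather/retry/process/other-work helpers are placeholders for your own application code):

/*
 * Sketch of the asynchronous same-thread pattern described above.
 * dev_id, qp_id, BURST_SIZE and the application hooks are placeholders.
 */
#include <stdint.h>
#include <rte_cryptodev.h>

#define BURST_SIZE 32

/* Application-specific hooks (illustrative only). */
extern uint16_t gather_new_requests(struct rte_crypto_op **ops, uint16_t max);
extern void retry_later(struct rte_crypto_op **ops, uint16_t nb);
extern void do_other_work(void);
extern void process_completed_op(struct rte_crypto_op *op);

static void
crypto_poll_loop(uint8_t dev_id, uint16_t qp_id)
{
        struct rte_crypto_op *enq_ops[BURST_SIZE];
        struct rte_crypto_op *deq_ops[BURST_SIZE];
        uint16_t nb_new, nb_enq, nb_deq, i;

        for (;;) {
                /* Collect whatever requests are ready and enqueue them. */
                nb_new = gather_new_requests(enq_ops, BURST_SIZE);
                if (nb_new > 0) {
                        nb_enq = rte_cryptodev_enqueue_burst(dev_id, qp_id,
                                        enq_ops, nb_new);
                        /* Ops the device did not accept are kept for retry. */
                        if (nb_enq < nb_new)
                                retry_later(&enq_ops[nb_enq], nb_new - nb_enq);
                }

                /* Use the offloaded cycles instead of busy-waiting. */
                do_other_work();

                /*
                 * Take whatever has completed so far - this may be fewer
                 * (or, from earlier bursts, more) than was just enqueued.
                 */
                nb_deq = rte_cryptodev_dequeue_burst(dev_id, qp_id,
                                deq_ops, BURST_SIZE);
                for (i = 0; i < nb_deq; i++)
                        process_completed_op(deq_ops[i]);
        }
}

The key point is that the enqueue and dequeue are decoupled in time but stay on the same lcore and the same queue pair.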
 [Pravin]
I faced the exact same issue while moving from software crypto to HW. I implemented the approach Fiona suggested.
The thread enqueues to the crypto engine and goes back to other work. It periodically polls the crypto device to see if the work is finished.
As we have a single thread running, it keeps enqueuing as work arrives and dequeuing as results become ready, doing other work in between.
To keep track of packets, I put an ID into the crypto operation's private data.
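Roughly, the tagging looks like the sketch below (names are illustrative; it assumes the op mempool was created with rte_crypto_op_pool_create() and a priv_size large enough for the struct, and that nothing else, e.g. the IV, is placed at the start of the private area - otherwise put the ID after it):

#include <stdint.h>
#include <rte_crypto.h>

/* Illustrative per-op private data; the op pool must be created with
 * priv_size >= sizeof(struct op_priv). */
struct op_priv {
        uint64_t pkt_id;
};

/* Private data starts right after the op and the symmetric op. */
#define OP_PRIV_OFFSET \
        (sizeof(struct rte_crypto_op) + sizeof(struct rte_crypto_sym_op))

static inline struct op_priv *
op_priv(struct rte_crypto_op *op)
{
        return rte_crypto_op_ctod_offset(op, struct op_priv *, OP_PRIV_OFFSET);
}

/* Before enqueue: tag the op with the packet's ID. */
static inline void
op_set_id(struct rte_crypto_op *op, uint64_t id)
{
        op_priv(op)->pkt_id = id;
}

/* After dequeue: recover which packet the result belongs to. */
static inline uint64_t
op_get_id(struct rte_crypto_op *op)
{
        return op_priv(op)->pkt_id;
}

On dequeue, the ID lets me match each result back to its original packet even when completions come back in a different burst than the one they were submitted in.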

Thread overview: 10+ messages
2019-01-17 23:00 Changchun Zhang
2019-01-18 13:13 ` Trahe, Fiona
2019-01-18 14:29   ` Pathak, Pravin [this message]
2019-01-18 15:44     ` Changchun Zhang
2019-01-18 16:26       ` Trahe, Fiona
2019-01-18 16:41         ` Changchun Zhang
2019-01-18 16:57           ` Trahe, Fiona
2019-01-18 17:55             ` Changchun Zhang
2019-01-18 18:20               ` Trahe, Fiona
2019-01-18 18:52                 ` Changchun Zhang
