DPDK patches and discussions
From: "Coyle, David" <david.coyle@intel.com>
To: Jerin Jacob <jerinjacobk@gmail.com>
Cc: dpdk-dev <dev@dpdk.org>,
	"Doherty, Declan" <declan.doherty@intel.com>,
	"Trahe, Fiona" <fiona.trahe@intel.com>
Subject: Re: [dpdk-dev] [RFC] Accelerator API to chain packet processing functions
Date: Fri, 7 Feb 2020 12:38:09 +0000	[thread overview]
Message-ID: <SN6PR11MB308670D0FF44DE39B477188DE31C0@SN6PR11MB3086.namprd11.prod.outlook.com> (raw)
In-Reply-To: <CALBAE1PMoH7iF1NANqtKVR3MEgQSe6Aj82Chs14e2dAY7Upwcg@mail.gmail.com>

Hi Jerin, see below

> 
> On Thu, Feb 6, 2020 at 10:01 PM Coyle, David <david.coyle@intel.com>
> wrote:
> 
> Hi David,
> 
> > >
> > >
> > > > > > - XGS-PON MAC: Crypto-CRC-BIP
> > > > > >         - Order:
> > > > > >                 - Downstream: CRC, Encrypt, BIP
> > > > >
> > > > > I understand if the chain has two operations then it may
> > > > > possible to have handcrafted SW code to do both operations in one
> pass.
> > > > > I understand the spec is agnostic on a number of passes it does
> > > > > require to enable the xfrom but To understand the SW/HW
> > > > > capability, In the above case, "CRC, Encrypt, BIP", It is done
> > > > > in one pass in SW or three passes in SW or one pass using HW?
> > > >
> > > > [DC] The CRC, Encrypt, BIP is also currently done as 1 pass in
> > > > AESNI MB
> > > library SW.
> > > > However, this could also be performed as a single pass in a HW
> > > > accelerator
> > >
> > > As a specification, cascading the xform chains make sense.
> > > Do we have any HW that does support chaining the xforms more than
> "two"
> > > in one pass?
> > > i.e real chaining function where two blocks of HWs work hand in hand
> > > for chaining.
> > > If none, it may be better to abstract as synonymous API(No dequeue,
> > > no
> > > enqueue) for the CPU use case.
> >
> > [DC] I'm not aware of any HW that supports this at the moment, but that's
> not to say it couldn't in the future - if anyone else has any examples though,
> please feel free to share.
> > Regardless, I don't see why we would introduce a different API for SW
> devices and HW devices.
> 
> There is a risk in drafting API that meant for HW without any HW exists.
> Because there could be inefficiency on the metadata and fast path API for
> both models.
> For example, In the case of CPU based scheme, it will be pure overhead
> emulate the "queue"(the enqueue and dequeue) for the sake of abstraction
> where CPU works better in the synchronous model and I have doubt that the
> session-based scheme will work for HW or not as both difference  HW needs
> to work hand in hand(IOMMU aspects for two PCI device)

[DC] I understand what you are saying about the overhead of emulating the "sw queue", but this same model is already used in many of the existing device PMDs.
In the case of SW devices, such as AESNI-MB or NULL for crypto, or zlib for compression, the enqueue/dequeue in the PMD is emulated through an rte_ring, which is very efficient.
The accelerator API will use the existing device PMDs, so keeping the same model seems like a sensible approach.

From an application's point of view, this abstraction of the underlying device type is important for usability and maintainability - the application doesn't need to know
the device type as such and therefore doesn't need to make different API calls.

The enqueue/dequeue-style API was also chosen with QAT in mind. While QAT HW doesn't support these xform chains at the moment, it could potentially do so in the future.
As a side note, as part of the work of adding the accelerator API, the QAT PMD will be updated to support the DOCSIS Crypto-CRC accelerator xform chain, where the Crypto
is done on QAT HW and the CRC is done in SW, most likely through a call to the optimized rte_net_crc library. This will give a consistent API for the DOCSIS-MAC data-plane
pipeline prototype we have developed, which uses both AESNI-MB and QAT for benchmarks.

We will take your feedback on the enqueue/dequeue approach for SW devices into consideration though during development.

Finally, I'm unsure what you mean by this line:

	"I have doubt that the session-based scheme will work for HW or not as both difference  HW needs to work hand in hand(IOMMU aspects for two PCI device)"

What do you mean by different HW working "hand in hand" and "two PCI device"?
The intention is that one HW device (or its PMD) would have to support the full accel xform chain.

> 
> Having said that, I agree with the need for use case and API for CPU case. Till
> we find a HW spec, we need to make the solution as CPU specific and latter
> extend based on HW metadata required.
> Accelerator API sounds like HW accelerator and there is no HW support then
> it may not good. We can change the API that works for the use cases that we
> know how it works efficiently.
> 
> 
> 
> 
> 
> 
> 
> > It would be up to each underlying PMD to decide if/how it supports a
> > particular accelerator xform chain, but from an application's point of
> > view, the accelerator API is always the same
> >
> >


Thread overview: 24+ messages
2020-02-04 14:45 David Coyle
2020-02-04 19:52 ` Jerin Jacob
2020-02-06 10:04   ` Coyle, David
2020-02-06 10:54     ` Jerin Jacob
2020-02-06 16:31       ` Coyle, David
2020-02-06 17:13         ` Jerin Jacob
2020-02-07 12:38           ` Coyle, David [this message]
2020-02-07 14:18             ` Jerin Jacob
2020-02-07 20:34               ` Stephen Hemminger
2020-02-08  7:22                 ` Jerin Jacob
2020-03-05 17:01                   ` Coyle, David
2020-03-06  8:43                     ` Jerin Jacob
2020-02-13 11:50               ` Doherty, Declan
2020-02-18  5:15                 ` Jerin Jacob
2020-02-13 11:44           ` Doherty, Declan
2020-02-18  5:30             ` Jerin Jacob
2020-02-13 11:31       ` Doherty, Declan
2020-02-18  5:12         ` Jerin Jacob
2020-03-05 16:44 Coyle, David
2020-03-06  9:06 ` Jerin Jacob
2020-03-06 14:55   ` Coyle, David
2020-03-06 16:22     ` Jerin Jacob
2020-03-13 18:00       ` Coyle, David
2020-03-13 18:03         ` Jerin Jacob
