DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: Jerin Jacob <jerinjacobk@gmail.com>
Cc: Thomas Monjalon <thomas@monjalon.net>,
	fengchengwen <fengchengwen@huawei.com>,
	Ferruh Yigit <ferruh.yigit@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>, Nipun Gupta <nipun.gupta@nxp.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>,
	Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
	Jerin Jacob <jerinj@marvell.com>,
	David Marchand <david.marchand@redhat.com>
Subject: Re: [dpdk-dev] RFC: Kunpeng DMA driver API design decision
Date: Mon, 14 Jun 2021 19:18:09 +0100	[thread overview]
Message-ID: <YMedYVF0//Xd9SDm@bricha3-MOBL.ger.corp.intel.com> (raw)
In-Reply-To: <CALBAE1NzpcwXP1ROMcc+yrA-CE0SgjEpm9kFY6AkeMyuc8CXMA@mail.gmail.com>

On Sat, Jun 12, 2021 at 02:11:10PM +0530, Jerin Jacob wrote:
> On Sat, Jun 12, 2021 at 2:01 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 12/06/2021 09:01, fengchengwen:
> > > Hi all,
> > >
> > > We are preparing to support the Kunpeng DMA engine under the rawdev framework,
> > > and observed that there are two different implementations of the data plane API
> > > (a rough sketch of each call pattern is included below):
> > > 1. rte_rawdev_enqueue/dequeue_buffers, which is implemented by the dpaa2_qdma
> > >    and octeontx2_dma drivers.
> > > 2. rte_ioat_enqueue_xxx/rte_ioat_completed_ops, which is implemented by the
> > >    ioat driver.
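To make the contrast concrete, here is a rough, illustrative sketch of the two
call patterns. The prototypes are abbreviated from memory and are not
authoritative; please check rte_rawdev.h and rte_ioat_rawdev.h for the exact
signatures. The device ids, addresses and lengths are placeholders, and error
handling is omitted.

  #include <rte_memory.h>
  #include <rte_rawdev.h>
  #include <rte_ioat_rawdev.h>

  /* Illustrative only: roughly how an application drives each API today. */
  static inline void
  dataplane_styles_sketch(uint16_t rawdev_id, int ioat_id,
                          rte_iova_t src, rte_iova_t dst, unsigned int len)
  {
          /* Style 1: generic rawdev, with opaque vendor-defined buffers.
           * buf_addr must point at a driver-specific request structure,
           * which the application has to build first (the extra
           * translation step mentioned below). Queue context left NULL
           * here for brevity. */
          struct rte_rawdev_buf buf, *bufs[1] = { &buf };
          buf.buf_addr = NULL; /* would be a driver-defined request struct */
          rte_rawdev_enqueue_buffers(rawdev_id, bufs, 1, NULL);
          rte_rawdev_dequeue_buffers(rawdev_id, bufs, 1, NULL);

          /* Style 2: ioat, with a typed enqueue of one copy, an explicit
           * doorbell call, then a poll for completions which returns the
           * per-operation user handles. */
          uintptr_t src_hdls[32], dst_hdls[32];
          rte_ioat_enqueue_copy(ioat_id, src, dst, len,
                                (uintptr_t)src, (uintptr_t)dst);
          rte_ioat_perform_ops(ioat_id);           /* submit to hardware */
          rte_ioat_completed_ops(ioat_id, 32, src_hdls, dst_hdls);
  }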
> > >
> > > Due to the following considerations (mainly performance), we plan to implement
> > > a data plane API like ioat's (not identical, with some differences):
> > > 1. rte_rawdev_enqueue_buffers uses opaque buffer references which are
> > >    vendor-specific, so the application parameters must first be translated into
> > >    the opaque format, and then the driver writes the opaque data to hardware;
> > >    this may lead to a performance problem.
> > > 2. rte_rawdev_xxx doesn't provide a memory barrier API, so that functionality
> > >    may need to be conveyed through the opaque data (e.g. adding a flag to every
> > >    request); this may introduce some complexity.
> > >
> > > Also, the examples/ioat sample was used to compare DMA and CPU-memcpy
> > > performance. Could we generalize it so that it supports multiple vendors?
> > >
> > > I don't know whether the community will accept this kind of implementation,
> > > so if you have any comments, please provide feedback.
> >
> > I would love having a common generic API.
> > I would prefer having drivers under drivers/dma/ directory,
> > rather than rawdev.
> 
> +1 for rte_dmadev.
> 
> Now that we have multiple DMA drivers, it is better to have a common
> generic API.
> 
> @fengchengwen  If you would like to pursue a generic DMA API then please
> propose an RFC for the dmadev PUBLIC API before implementing it.
> We can help you review the API proposal.
> 
I'd like to volunteer to help with this effort too, having a large
interest in it from my work on the ioat driver (thanks for the positive
words on the API :-)).

Based on our experience with the ioat driver, we are also looking into
possible prototypes for a dmadev device type, and hopefully will have an
RFC to share soon. As might be expected, this will be very similar to the
existing ioat APIs, though with one change to the dataplane API that I'll
call out here initially. The use of explicit source and destination handles
for each operation is a little inflexible, so we are looking at replacing
that mechanism with one where each enqueue API returns a (sequentially
increasing) job id, and the completion function returns the id of the last
completed job (or error info in case of an error). This would have the
advantage of allowing each app or library using the dmadev to store as much
or as little context information as desired in its own circular buffer or
buffers, rather than being limited to just two uint64_t's. It would also
simplify the drivers, since they would have less data to manage.
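
To make that idea concrete, below is a very rough sketch of what a job-id
based dataplane could look like. This is purely illustrative: the dmadev_*
names are hypothetical stand-ins rather than a proposed API, and the two
device calls are stubbed out so the fragment is self-contained.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical, illustration-only sketch of a job-id based dataplane.
 * The two dmadev_* calls below are stubs standing in for a future
 * driver implementation. */

static uint32_t next_job_id;  /* would really live inside the driver */

/* Enqueue a copy; returns a sequentially increasing job id, or <0 if
 * the descriptor ring is full. Note: no src/dst handles are passed. */
static int
dmadev_copy(uint16_t dev_id, uint64_t src, uint64_t dst, uint32_t len)
{
	(void)dev_id; (void)src; (void)dst; (void)len;
	return (int)(next_job_id++ & 0x7fffffff);
}

/* Return the id of the most recently completed job; *error is set if
 * the hardware reported a failure since the last poll. */
static int
dmadev_completed(uint16_t dev_id, bool *error)
{
	(void)dev_id;
	*error = false;
	return (int)((next_job_id - 1) & 0x7fffffff);
}

/* The application keeps whatever per-job context it needs in its own
 * ring, indexed by job id, not limited to two uint64_t handles. */
struct app_job_ctx { void *orig_buf; void *cb_arg; };
#define APP_RING_SZ 1024  /* power of two, >= device ring depth */
static struct app_job_ctx app_ring[APP_RING_SZ];

static inline void
app_submit_copy(uint16_t dev, uint64_t src, uint64_t dst, uint32_t len,
		struct app_job_ctx ctx)
{
	int id = dmadev_copy(dev, src, dst, len);
	if (id >= 0)
		app_ring[id & (APP_RING_SZ - 1)] = ctx;  /* keyed by job id */
}

/* On the completion side, the app learns the last completed id and can
 * process its stored contexts from the last-seen id up to that point. */
static inline void
app_gather_completions(uint16_t dev, int *last_seen)
{
	bool err;
	int done = dmadev_completed(dev, &err);
	while (*last_seen != done) {
		*last_seen = (*last_seen + 1) & 0x7fffffff;
		/* ... use app_ring[*last_seen & (APP_RING_SZ - 1)] ... */
	}
}

The key point is that the completion poll hands back only a single index
(plus error information), and all per-job state stays on the application
side.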

I'd hope to have a more complete API description to send out very shortly
to kick off reviews and discussion.

Regards,
/Bruce
