From: Bruce Richardson <bruce.richardson@intel.com>
To: Jerin Jacob <jerinjacobk@gmail.com>
Cc: fengchengwen <fengchengwen@huawei.com>,
	"Morten Brørup" <mb@smartsharesystems.com>,
	"Thomas Monjalon" <thomas@monjalon.net>,
	"Ferruh Yigit" <ferruh.yigit@intel.com>, dpdk-dev <dev@dpdk.org>,
	"Nipun Gupta" <nipun.gupta@nxp.com>,
	"Hemant Agrawal" <hemant.agrawal@nxp.com>,
	"Maxime Coquelin" <maxime.coquelin@redhat.com>,
	"Honnappa Nagarahalli" <honnappa.nagarahalli@arm.com>,
	"Jerin Jacob" <jerinj@marvell.com>,
	"David Marchand" <david.marchand@redhat.com>,
	"Satananda Burla" <sburla@marvell.com>,
	"Prasun Kapoor" <pkapoor@marvell.com>
Subject: Re: [dpdk-dev] [RFC PATCH] dmadev: introduce DMA device library
Date: Wed, 23 Jun 2021 10:37:45 +0100
Message-ID: <YNMA6Ve31yij9rZK@bricha3-MOBL.ger.corp.intel.com>
In-Reply-To: <CALBAE1OM-LKm1B_z=C==LBXybsH1-v5=JQXzbKB4E=Mu2ka0hQ@mail.gmail.com>

On Wed, Jun 23, 2021 at 12:51:07PM +0530, Jerin Jacob wrote:
> On Wed, Jun 23, 2021 at 9:00 AM fengchengwen <fengchengwen@huawei.com> wrote:
> >
> 
> > >>>
> > >>>>
> > >>>>> The above will give better performance and is the best trade-off
> > >>>>> between performance and per-transfer variables.
> > >>>>
> > >>>> We may need to have different APIs for context-aware and context-unaware
> > >>>> processing, with which to use determined by the capabilities discovery.
> > >>>> Given that for these DMA devices the offload cost is critical, more so than
> > >>>> any other dev class I've looked at before, I'd like to avoid having APIs
> > >>>> with extra parameters that need to be passed about, since that just adds
> > >>>> extra CPU cycles to the offload.
> > >>>
> > >>> If the driver does not support additional attributes and/or the
> > >>> application does not need them, rte_dmadev_desc_t can be NULL, so that
> > >>> it won't have any cost in the datapath. I think we can go to different
> > >>> APIs for different cases if we cannot abstract the problem without a
> > >>> performance impact. Otherwise, it will be too much pain for
> > >>> applications.
> > >>
> > >> Yes, currently we plan to use different APIs for different cases, e.g.
> > >>   rte_dmadev_memcpy()  -- local-to-local memory copy
> > >>   rte_dmadev_memset()  -- fill local memory with a pattern
> > >> and maybe:
> > >>   rte_dmadev_imm_data()  -- copy of very small amounts of data
> > >>   rte_dmadev_p2pcopy()   -- peer-to-peer copy between different PCIe addresses
> > >>
> > >> These API capabilities will be reflected in the device capability set, so
> > >> that the application can discover them through the standard API.
> > >
> > >
> > > There will be a lot of combinations of that; it will be like an M x N
> > > cross-product of base cases. It won't scale.
> >
> > Currently, it is hard to define a generic DMA descriptor; I think the
> > well-defined APIs are the feasible approach.
> 
> I would like to understand why it is not feasible if we move the
> preparation to the slow path.
> 
> i.e.
> 
> struct rte_dmadev_desc defines all the "attributes" of all DMA devices, made
> available through the capability mechanism. I believe that with this scheme
> we can scale and incorporate all features of all DMA HW without any
> performance impact.
> 
> something like:
> 
> struct rte_dmadev_desc {
>   /* Attributes of the DMA transfer, available for all HW via capability. */
>   channel or port;
>   ops; // copy, fill, etc.
>   /* Implementation-specific opaque memory as a zero-length array;
>      rte_dmadev_desc_prep() fills this memory with HW-specific information. */
>   uint8_t impl_opq[];
> }
> 
> // Allocate the memory for the DMA descriptor
> struct rte_dmadev_desc *rte_dmadev_desc_alloc(devid);
> // Convert the DPDK-generic descriptor to a HW-specific descriptor in the slow path
> rte_dmadev_desc_prep(devid, struct rte_dmadev_desc *desc);
> // Free the DMA descriptor memory
> rte_dmadev_desc_free(devid, struct rte_dmadev_desc *desc);
> 
> The above calls are in the slow path.
> 
> Only the call below is in the fast path.
> // Here desc can be NULL (in case you don't need any specific attributes
> // attached to the transfer); if needed, it can be an object which has
> // gone through rte_dmadev_desc_prep()
> rte_dmadev_enq(devid, struct rte_dmadev_desc *desc, void *src, void
> *dest, unsigned int len, cookie)
> 

The trouble here is the performance penalty due to building up and tearing
down structures, and then passing those structures into functions via a
function pointer. With the enqueue/dequeue APIs that have been discussed
here, all parameters will be passed in registers, and each driver can then
write the actual hardware descriptor straight to cache/memory from those
registers. With the scheme you propose above, the register instead contains
a pointer to the data, which must first be loaded into the CPU before being
written out again. This increases our offload cost.
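
To make that concrete, here is a rough sketch of the two fast-path shapes
being compared; all names and signatures below are purely illustrative, not
a settled API:

/* Shape A: every per-transfer parameter is passed by value, so the
 * arguments arrive in registers and the driver can write the HW descriptor
 * straight from them. */
int rte_dmadev_copy(uint16_t dev_id, rte_iova_t src, rte_iova_t dst,
                    uint32_t length, uint64_t flags);

/* Shape B: the attributes are carried in a prepared structure; the driver
 * must first dereference desc and load its fields before it can build the
 * HW descriptor, which is the extra offload cost described above. */
int rte_dmadev_enq(uint16_t dev_id, struct rte_dmadev_desc *desc,
                   rte_iova_t src, rte_iova_t dst, uint32_t length);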

However, assuming that the desc_prep call is just for the slow path or for
initialization time, I'd be OK with having the functions take an extra
hw-specific parameter for each call, prepared via that prep step. It would
still allow all other parameters to be passed in registers. How much data
are you looking to store in this desc struct? Could it not all be
represented as flags, for example?
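
As a sketch of that compromise (again with purely illustrative names), the
hw-specific context would be built once outside the datapath and then passed
through unchanged on every fast-path call:

/* Slow path / init time: build whatever HW-specific context is needed from
 * the requested attributes.  Done once, not per transfer. */
struct rte_dmadev_ctx *ctx = rte_dmadev_ctx_prep(dev_id, &attrs);

/* Fast path: the per-transfer parameters still travel in registers; the
 * prepared context adds only a single extra pointer argument which the
 * driver can use as-is, without rebuilding anything per call. */
rte_dmadev_copy_with_ctx(dev_id, ctx, src, dst, length, flags);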

As for the individual APIs, while we could do a generic "enqueue" API which
takes the op as a parameter, I prefer having each operation as a separate
function, in order to increase the readability of the code and to reduce
the number of parameters needed per function, thereby saving the registers
that would otherwise be used and potentially making the function calls and
the offload cost cheaper. Perhaps we can have the "common" ops, such as copy
and fill, have their own functions, and have a generic "enqueue" function
for the less commonly used or supported ops?
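
Roughly along these lines, with names again only for illustration; which of
these a given device actually supports would be advertised through its
capability flags:

/* Dedicated fast-path functions for the common ops: fewer parameters per
 * call, so fewer registers used and a cheaper offload. */
int rte_dmadev_copy(uint16_t dev_id, rte_iova_t src, rte_iova_t dst,
                    uint32_t length, uint64_t flags);
int rte_dmadev_fill(uint16_t dev_id, uint64_t pattern, rte_iova_t dst,
                    uint32_t length, uint64_t flags);

/* Generic entry point for the less common ops, taking the op code as a
 * parameter rather than adding a dedicated function for each one. */
int rte_dmadev_enqueue(uint16_t dev_id, enum rte_dmadev_op op,
                       const struct rte_dmadev_op_params *params);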

/Bruce
