DPDK patches and discussions
From: "Liang, Cunming" <cunming.liang@intel.com>
To: Jerin Jacob <jerinjacobk@gmail.com>,
	"Fu, Patrick" <patrick.fu@intel.com>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Ye, Xiaolong" <xiaolong.ye@intel.com>,
	"Hu, Jiayu" <jiayu.hu@intel.com>,
	"Wang, Zhihong" <zhihong.wang@intel.com>
Subject: Re: [dpdk-dev] [RFC] Accelerating Data Movement for DPDK vHost with DMA Engines
Date: Tue, 21 Apr 2020 08:30:14 +0000	[thread overview]
Message-ID: <BYAPR11MB2552A3ACA24AF3BA46EFCFD7F9D50@BYAPR11MB2552.namprd11.prod.outlook.com> (raw)
In-Reply-To: <CALBAE1OSwR27hnRyQm0d4pfrk6qpTMrSTgVkX795LoX=Z-o2QA@mail.gmail.com>

Hi Jerin,

> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, April 21, 2020 2:04 PM
> To: Fu, Patrick <patrick.fu@intel.com>
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>; Liang, Cunming
> <cunming.liang@intel.com>; dev@dpdk.org; Ye, Xiaolong
> <xiaolong.ye@intel.com>; Hu, Jiayu <jiayu.hu@intel.com>; Wang, Zhihong
> <zhihong.wang@intel.com>
> Subject: Re: [dpdk-dev] [RFC] Accelerating Data Movement for DPDK vHost with
> DMA Engines
> 
> On Tue, Apr 21, 2020 at 8:14 AM Fu, Patrick <patrick.fu@intel.com> wrote:
> >
> > Hi Jerin
> 
> Hi Patrick,
> 
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Monday, April 20, 2020 8:15 PM
> > > To: Maxime Coquelin <maxime.coquelin@redhat.com>
> > > Cc: Liang, Cunming <cunming.liang@intel.com>; Fu, Patrick
> > > <patrick.fu@intel.com>; dev@dpdk.org; Ye, Xiaolong
> > > <xiaolong.ye@intel.com>; Hu, Jiayu <jiayu.hu@intel.com>; Wang,
> > > Zhihong <zhihong.wang@intel.com>
> > > Subject: Re: [dpdk-dev] [RFC] Accelerating Data Movement for DPDK
> > > vHost with DMA Engines
> > >
> > > On Mon, Apr 20, 2020 at 5:40 PM Maxime Coquelin
> > > <maxime.coquelin@redhat.com> wrote:
> > > >
> > > >
> > > >
> > > > On 4/20/20 2:08 PM, Jerin Jacob wrote:
> > > > > On Mon, Apr 20, 2020 at 5:14 PM Maxime Coquelin
> > > > > <maxime.coquelin@redhat.com> wrote:
> > > > >>
> > > > >>
> > > > >>
> > > > >> On 4/20/20 1:13 PM, Jerin Jacob wrote:
> > > > >>> On Mon, Apr 20, 2020 at 1:29 PM Liang, Cunming
> > > <cunming.liang@intel.com> wrote:
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>>> -----Original Message-----
> > > > >>>>> From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > >>>>> Sent: Friday, April 17, 2020 5:55 PM
> > > > >>>>> To: Fu, Patrick <patrick.fu@intel.com>
> > > > >>>>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>;
> > > dev@dpdk.org;
> > > > >>>>> Ye, Xiaolong <xiaolong.ye@intel.com>; Hu, Jiayu
> > > > >>>>> <jiayu.hu@intel.com>; Wang, Zhihong
> > > > >>>>> <zhihong.wang@intel.com>; Liang, Cunming
> > > > >>>>> <cunming.liang@intel.com>
> > > > >>>>> Subject: Re: [dpdk-dev] [RFC] Accelerating Data Movement for
> > > > >>>>> DPDK vHost with DMA Engines
> > > > >>>>>
> > > > >>>>> On Fri, Apr 17, 2020 at 2:56 PM Fu, Patrick
> > > > >>>>> <patrick.fu@intel.com>
> > > wrote:
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>> [...]
> > > > >>>>>>>>
> > > > >>>>>>>> I believe it doesn't conflict. The purpose of this RFC is to create
> > > > >>>>>>>> an async data path in vhost-user and provide a way for applications
> > > > >>>>>>>> to work with this new path. dmadev is another topic which could be
> > > > >>>>>>>> discussed separately. If we do have the dmadev available in the
> > > > >>>>>>>> future, this vhost async data path could certainly be backed by the
> > > > >>>>>>>> new dma abstraction without major interface change.
> > > > >>>>>>>
> > > > >>>>>>> Maybe that one advantage of a dmadev class is that it would be
> > > > >>>>>>> easier and more transparent for the application to consume.
> > > > >>>>>>>
> > > > >>>>>>> The application would register some DMA devices, pass them to the
> > > > >>>>>>> Vhost library, and then rte_vhost_submit_enqueue_burst and
> > > > >>>>>>> rte_vhost_poll_enqueue_completed would call the dmadev callbacks
> > > > >>>>>>> directly.
> > > > >>>>>>>
> > > > >>>>>>> Do you think that could work?
> > > > >>>>>>>
> > > > >>>>>> Yes, this is a workable model. As I said in my previous reply, I have
> > > > >>>>>> no objection to making the dmadev. However, what we currently want to
> > > > >>>>>> do is create the async data path for vhost, and we actually have no
> > > > >>>>>> preference for the underlying DMA device model. I believe our current
> > > > >>>>>> design of the API prototypes/data structures is quite common across
> > > > >>>>>> various DMA acceleration solutions, and there is no blocker for any
> > > > >>>>>> new DMA device to adapt to these APIs or extend them to new ones.
> > > > >>>>>
> > > > >>>>> IMO, as a driver writer, we should not be writing TWO DMA
> > > > >>>>> drivers: one for vhost and another one for rawdev.
> > > > >>>> It's the simplest case if a driver (e.g. {port, queue}) is statically
> > > > >>>> mapped 1:1 to a vhost session {vid, qid}. However, that is not scalable
> > > > >>>> enough for integrating a device model with the vhost library. There are
> > > > >>>> a few intentions that belong to app logic rather than the driver, e.g.
> > > > >>>> 1:N load balancing, various device type usages (e.g. vhost zcopy via
> > > > >>>> ethdev), etc.
> > > > >>>
> > > > >>>
> > > > >>> Before moving on to reply to the comments: which DMA engine are you
> > > > >>> planning to integrate with vHOST?
> > > > >>> Is it ioat? If not ioat (drivers/raw/ioat/), how do you think we can
> > > > >>> integrate the IOAT DMA engine with vHOST as a use case?
> > > > >>>
> > > > >>
> > > > >> I guess it could be done in the vhost example.
> > > > >
> > > > >
> > > > > Could not see any reference to DMA in  examples/vhost*
> > > > >
> > > >
> > > > That's because we are discussing the API to introduce DMA support
> > > > in this exact mail thread, nothing has been merged yet.
> > >
> > > Some confusion here. The original question was:
> > > # This is an RFC for DMA support in vHOST.
> > > # What is the underlying DMA engine planned for hooking into the vHOST
> > > async API as an "implementation" for this RFC?
> > > # If it is ioat, how does the integration work with the existing ioat
> > > rawdev driver and the new API?
> > > # If it is not ioat, what does it take to add support for an ioat-based
> > > DMA engine to the vHOST async API?
> > >
> > It is most likely that IOAT would be leveraged as the first demonstration of
> > async DMA acceleration for vhost. However, this is neither a limitation, nor
> > did we design this RFC specifically for IOAT.
> > With the current RFC design, we will need applications to implement callbacks
> > (which will call into the IOAT PMD in the IOAT case) that can work with the
> > vhost async path.
> 
> Then it would be calling some PMD-specific APIs for dpaa2_qdma, octeontx2_dma
> and ioat, and there will be issues with integrating vHOST as a DMA consumer
> together with another consumer.
The main effort is to introduce an async-mode API for vhost that allows external hooks for raw buffer (VM and/or host app) access, regardless of the virtio ring layout.
It never forces the ops to leverage a DMA device; think of the rxtx_callback of ethdev. The ops are pretty much the app's or a helper library's flavor.
If you are not comfortable with demoing the ops with DMA, we are fine focusing on a CPU-only hook provider in the sample and omitting 'w/ DMA engine' from the RFC.
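
To make the "external hooks" idea concrete, here is a minimal sketch of what an application-provided ops implementation could look like. All names below (async_copy_job, vhost_async_copy_ops, the CPU-only provider) are hypothetical illustrations for this discussion, not the RFC's actual definitions; the point is only that vhost hands raw-buffer copy jobs to whatever ops the application registered, whether backed by a DMA engine or by plain memcpy.

/*
 * Hypothetical sketch only: the struct and function names below are NOT
 * the RFC's definitions. They just illustrate the "external hooks"
 * model described above, analogous to the rxtx_callback of ethdev.
 */
#include <stdint.h>
#include <string.h>

/* One raw-buffer copy handed from vhost to the application's hook. */
struct async_copy_job {
	void	*src;
	void	*dst;
	uint32_t len;
};

/* Application-supplied ops; vhost does not care what backs them. */
struct vhost_async_copy_ops {
	/* Start a batch of copies; may offload to a DMA engine or run inline. */
	uint32_t (*submit_copies)(int vid, uint16_t qid,
				  struct async_copy_job *jobs, uint32_t count);
	/* Report how many previously submitted copies have completed. */
	uint32_t (*poll_completions)(int vid, uint16_t qid, uint32_t max);
};

/* A CPU-only provider, as suggested for the sample when no DMA engine
 * is involved: every copy completes synchronously at submit time. */
static uint32_t
cpu_submit_copies(int vid, uint16_t qid,
		  struct async_copy_job *jobs, uint32_t count)
{
	(void)vid; (void)qid;
	for (uint32_t i = 0; i < count; i++)
		memcpy(jobs[i].dst, jobs[i].src, jobs[i].len);
	return count;
}

static uint32_t
cpu_poll_completions(int vid, uint16_t qid, uint32_t max)
{
	(void)vid; (void)qid; (void)max;
	return 0;	/* nothing pending: CPU copies finished at submit time */
}

static const struct vhost_async_copy_ops cpu_ops = {
	.submit_copies	  = cpu_submit_copies,
	.poll_completions = cpu_poll_completions,
};

An IOAT-backed provider would instead enqueue the jobs to the existing rawdev PMD in submit_copies() and check its completion ring in poll_completions(); rte_vhost_submit_enqueue_burst() / rte_vhost_poll_enqueue_completed() would simply call whichever ops the application registered.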

> The correct approach is to create a new class for DMA, like Linux does, and have
> vHOST consume it as a client so that the integration aspects stay intact.
I'm curious what the usages were when dpaa2_qdma, octeontx2_dma and ioat were introduced as raw devices, and why those usages had no reason to build a class while you believe vhost does.
They have existed as raw devices for a while; if it were really necessary, they would have built such a class already...
As you said, vhost is one of the clients that consume DMA, not the owner of a device class; moreover, these raw devices are not the only 'server' to vhost.
Your concern is worth a separate conversation, but it is not limited to the vhost case.
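
For reference, the generic-class model being argued for in the quoted text would look roughly like the sketch below from vhost's point of view. No such generic DMA class existed in DPDK at the time, so every name here is invented purely for illustration.

/*
 * Hypothetical sketch of the "vhost as one client of a generic DMA class"
 * model. All names below are invented for illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Invented class API; dpaa2_qdma, octeontx2_dma and ioat would each sit
 * behind it as class drivers. Stubbed with a CPU copy here only so the
 * sketch stands alone. */
static int
dmaclass_copy(uint16_t dev_id, void *src, void *dst, uint32_t len)
{
	(void)dev_id;
	memcpy(dst, src, len);	/* a real driver would enqueue to hardware */
	return 0;
}

/* In this model the vhost library itself (rather than an application
 * callback) would hand each buffer copy to the class device the
 * application attached. */
static inline bool
vhost_offload_copy(uint16_t dma_dev_id, void *src, void *dst, uint32_t len)
{
	if (dmaclass_copy(dma_dev_id, src, dst, len) < 0)
		return false;	/* e.g. queue full: fall back to a CPU copy */
	return true;
}

Whether such a class should exist, and whether vhost should call it directly or go through application-registered ops as sketched earlier, is exactly the disagreement in this sub-thread.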

Thanks,
Steve
> 
> 
> 
> 
> 
> >
> > Thanks,
> >
> > Patrick
> >
> >


Thread overview: 17+ messages
2020-04-17  7:26 Fu, Patrick
2020-04-17  8:01 ` Jerin Jacob
2020-04-17  8:29   ` Fu, Patrick
2020-04-17  8:40     ` Maxime Coquelin
2020-04-17  9:26       ` Fu, Patrick
2020-04-17  9:54         ` Jerin Jacob
2020-04-20  7:59           ` Liang, Cunming
2020-04-20 11:13             ` Jerin Jacob
2020-04-20 11:44               ` Maxime Coquelin
2020-04-20 12:08                 ` Jerin Jacob
2020-04-20 12:10                   ` Maxime Coquelin
2020-04-20 12:15                     ` Jerin Jacob
2020-04-21  2:44                       ` Fu, Patrick
2020-04-21  6:04                         ` Jerin Jacob
2020-04-21  8:30                           ` Liang, Cunming [this message]
2020-04-21  9:05                             ` Jerin Jacob
2020-04-20 11:47             ` Maxime Coquelin
