DPDK patches and discussions
From: "Wang, YuanX" <yuanx.wang@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "Hu, Jiayu" <jiayu.hu@intel.com>
Subject: RE: [RFC] net/vhost: support asynchronous data path
Date: Fri, 6 Jan 2023 09:08:54 +0000	[thread overview]
Message-ID: <CY8PR11MB6940068EF428C284E6E172AE85FB9@CY8PR11MB6940.namprd11.prod.outlook.com> (raw)
In-Reply-To: <b598866f-8cb3-3b1f-d467-3369db853e31@redhat.com>

Hi Maxime,

Sorry for not being clear about the intention.
The patch is intended for a whitepaper; we use it for tests and need to attach the patch link.
Should I set the patch state to superseded?

Thanks,
Yuan

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Monday, January 2, 2023 6:59 PM
> To: Wang, YuanX <yuanx.wang@intel.com>; dev@dpdk.org
> Cc: Hu, Jiayu <jiayu.hu@intel.com>
> Subject: Re: [RFC] net/vhost: support asynchronous data path
> 
> Hi Yuan,
> 
> On 12/16/22 03:00, Yuan Wang wrote:
> > Vhost asynchronous data-path offloads packet copy from the CPU to the
> > DMA engine. As a result, large packet copy can be accelerated by the
> > DMA engine, and vhost can free CPU cycles for higher level functions.
> >
> > In this patch, we enable the asynchronous data path for the vhost PMD.
> > The asynchronous data path is enabled per tx/rx queue, and users need
> > to specify the DMA device used by each tx/rx queue. Each tx/rx queue
> > supports only one DMA device, but one DMA device can be shared among
> > multiple tx/rx queues of different vhost PMD ports.
> >
> > Two PMD parameters are added:
> > - dmas:	specify the DMA device used by a tx/rx queue.
> > 	(Default: no queues enable the asynchronous data path)
> > - dma-ring-size: DMA ring size.
> > 	(Default: 4096)
> >
> > Here is an example:
> > --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=4096'
> >
> > Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
> > Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
> > Signed-off-by: Wenwu Ma <wenwux.ma@intel.com>
> > ---
> >   drivers/net/vhost/meson.build     |   1 +
> >   drivers/net/vhost/rte_eth_vhost.c | 512 ++++++++++++++++++++++++++++--
> >   drivers/net/vhost/rte_eth_vhost.h |  15 +
> >   drivers/net/vhost/version.map     |   7 +
> >   drivers/net/vhost/vhost_testpmd.c |  67 ++++
> >   5 files changed, 569 insertions(+), 33 deletions(-)
> >   create mode 100644 drivers/net/vhost/vhost_testpmd.c
> >
> >
> 
> This RFC is identical to the v5 that you sent for the last release, so the
> comments I made on it are still valid.
> 
> Is this intentionally re-sent?
> 
> Regards,
> Maxime
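For context, the devargs string quoted in the patch description would typically be passed to a testpmd launch along these lines. This is only an illustrative sketch: the core list, memory channel count, and the assumption that the two DMA devices at 0000:00.01.0 and 0000:00.01.1 have been bound to a DPDK-compatible driver are not taken from this thread.

```shell
# Illustrative testpmd launch enabling the vhost PMD asynchronous data path.
# txq0 uses the DMA device at 0000:00.01.0, rxq0 the one at 0000:00.01.1;
# both DMA devices must be allowed into the EAL with -a.
dpdk-testpmd -l 0-2 -n 4 \
    -a 0000:00.01.0 -a 0000:00.01.1 \
    --vdev 'eth_vhost0,iface=./s0,dmas=[txq0@0000:00.01.0;rxq0@0000:00.01.1],dma-ring-size=4096' \
    -- -i
```

Per the patch description, a queue not listed in `dmas` falls back to the normal (CPU-copy) data path, and `dma-ring-size` defaults to 4096 if omitted.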



Thread overview: 5+ messages
2022-12-16  2:00 Yuan Wang
2023-01-02 10:58 ` Maxime Coquelin
2023-01-06  9:08   ` Wang, YuanX [this message]
2023-01-06  9:33     ` Maxime Coquelin
2023-01-06  9:47       ` Wang, YuanX
