From: "Hu, Jiayu" <jiayu.hu@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "Bie, Tiwei" <tiwei.bie@intel.com>,
"Wang, Zhihong" <zhihong.wang@intel.com>,
"Richardson, Bruce" <bruce.richardson@intel.com>
Subject: Re: [dpdk-dev] [RFC v2 2/2] net/vhost_dma: add vHost DMA driver
Date: Wed, 18 Dec 2019 03:11:24 +0000 [thread overview]
Message-ID: <ED946F0BEFE0A141B63BABBD629A2A9B3D029891@shsmsx102.ccr.corp.intel.com> (raw)
In-Reply-To: <a8c2b400-d52c-649e-8d01-b6fb7c530776@redhat.com>
Hi Maxime,
Replies are inline.
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Tuesday, December 17, 2019 4:27 PM
> To: Hu, Jiayu <jiayu.hu@intel.com>; dev@dpdk.org
> Cc: Bie, Tiwei <tiwei.bie@intel.com>; Wang, Zhihong
> <zhihong.wang@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> Subject: Re: [RFC v2 2/2] net/vhost_dma: add vHost DMA driver
>
> Hi Jiayu,
> > Here is an example.
> > $ ./testpmd -c f -n 4 \
> > --vdev 'dma_vhost0,iface=/tmp/sock0,queues=1,dmas=txq0@00:04.0,client=0'
>
> dma_vhost0 is not a good name, you have to mention it is net specific.
>
> Is there a tool to list available DMA engines?
Yes, you can use dpdk-devbind.py to list available DMA engines.
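For context, dpdk-devbind.py ships in DPDK's usertools/ directory; a typical invocation looks like the following (the category a DMA engine appears under, e.g. a rawdev or DMA section, varies with the DPDK version):

```
# List all devices known to DPDK, including DMA engines such as
# Intel I/OAT channels:
./usertools/dpdk-devbind.py --status

# Bind a DMA channel (address taken from the testpmd example above)
# to vfio-pci so DPDK can use it:
./usertools/dpdk-devbind.py --bind=vfio-pci 00:04.0
```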
>
> >
> > Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
> > ---
> > config/common_base | 2 +
> > config/common_linux | 1 +
> > drivers/Makefile | 2 +-
> > drivers/net/Makefile | 1 +
> > drivers/net/vhost_dma/Makefile | 31 +
> >  drivers/net/vhost_dma/eth_vhost.c | 1495 ++++++++++++++++++++
> > drivers/net/vhost_dma/eth_vhost.h | 264 ++++
> > drivers/net/vhost_dma/internal.h | 225 +++
> > .../net/vhost_dma/rte_pmd_vhost_dma_version.map | 4 +
> > drivers/net/vhost_dma/virtio_net.c | 1234 ++++++++++++++++
> > mk/rte.app.mk | 1 +
> > 11 files changed, 3259 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/net/vhost_dma/Makefile
> > create mode 100644 drivers/net/vhost_dma/eth_vhost.c
> > create mode 100644 drivers/net/vhost_dma/eth_vhost.h
> > create mode 100644 drivers/net/vhost_dma/internal.h
> > create mode 100644 drivers/net/vhost_dma/rte_pmd_vhost_dma_version.map
> > create mode 100644 drivers/net/vhost_dma/virtio_net.c
>
> You need to add Meson support.
Will add Meson build support later.
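For reference, a minimal meson.build sketch for this driver, following the pattern used by other net drivers of that era (file names taken from the diffstat above; the exact variable set depends on the DPDK release):

```meson
# Hypothetical drivers/net/vhost_dma/meson.build
sources = files('eth_vhost.c', 'virtio_net.c')
deps += 'vhost'
```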
>
>
> More generally, I have been through the series and I'm not sure having a
> dedicated PMD driver for this is a good idea due to all the code
> duplication it implies.
>
> I understand it has been done this way to avoid impacting the pure SW
> datapath implementation. But I'm sure the series could be reduced to a
> few hundred of lines if it was integrated in vhost-user library.
> Moreover, your series does not support packed ring, so it means even
> more code would need to be duplicated in the end.
Yes, providing a new PMD is to avoid impacting the vhost library. To avoid
too much code duplication, we could instead provide a separate DMA-accelerated
datapath inside the existing vhost-user PMD, rather than introducing a new
PMD. What do you think?
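To illustrate the idea, here is a rough sketch of how one PMD could select between the pure-SW and DMA-accelerated datapaths at probe time, leaving the SW hot path untouched. All names below (sw_dequeue_burst, dma_dequeue_burst, select_datapath) are illustrative only, not the real DPDK API:

```c
#include <stdint.h>
#include <string.h>

typedef uint16_t (*dequeue_burst_t)(void *queue, void **pkts, uint16_t nb);

/* Pure software copy path (stub body for illustration). */
static uint16_t sw_dequeue_burst(void *queue, void **pkts, uint16_t nb)
{
	(void)queue; (void)pkts;
	return nb;
}

/* DMA-offloaded copy path (stub body for illustration). */
static uint16_t dma_dequeue_burst(void *queue, void **pkts, uint16_t nb)
{
	(void)queue; (void)pkts;
	return nb;
}

/* Pick the datapath once at device probe, based on whether a
 * "dmas=..." devarg was supplied; the per-packet hot path then goes
 * through a single function pointer, so the SW path is unchanged
 * when no DMA engine is configured. */
static dequeue_burst_t
select_datapath(const char *devargs)
{
	return strstr(devargs, "dmas=") ? dma_dequeue_burst
					: sw_dequeue_burst;
}
```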
Thanks,
Jiayu
>
> What do you think?
>
> Thanks,
> Maxime
Thread overview: 11+ messages
2019-09-26 14:26 [dpdk-dev] [RFC 0/2] Add a PMD for I/OAT accelerated vhost-user Jiayu Hu
2019-09-26 14:26 ` [dpdk-dev] [RFC 1/2] vhost: populate guest memory for DMA-accelerated vhost-user Jiayu Hu
2019-12-17 7:18 ` Maxime Coquelin
2019-09-26 14:26 ` [dpdk-dev] [RFC 2/2] net/vhost_ioat: add vhost I/OAT driver Jiayu Hu
2019-11-01 8:54 ` [dpdk-dev] [RFC v2 0/2] Add a PMD for DMA-accelerated vhost-user Jiayu Hu
2019-11-01 8:54 ` [dpdk-dev] [RFC v2 1/2] vhost: populate guest memory " Jiayu Hu
2019-11-01 8:54 ` [dpdk-dev] [RFC v2 2/2] net/vhost_dma: add vHost DMA driver Jiayu Hu
2019-12-17 8:27 ` Maxime Coquelin
2019-12-17 10:20 ` Maxime Coquelin
2019-12-18 2:51 ` Hu, Jiayu
2019-12-18 3:11 ` Hu, Jiayu [this message]