DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Jiayu Hu <jiayu.hu@intel.com>, dev@dpdk.org
Cc: tiwei.bie@intel.com, zhihong.wang@intel.com, bruce.richardson@intel.com
Subject: Re: [dpdk-dev] [RFC v2 2/2] net/vhost_dma: add vHost DMA driver
Date: Tue, 17 Dec 2019 09:27:07 +0100
Message-ID: <a8c2b400-d52c-649e-8d01-b6fb7c530776@redhat.com>
In-Reply-To: <1572598450-245091-3-git-send-email-jiayu.hu@intel.com>

Hi Jiayu,

On 11/1/19 9:54 AM, Jiayu Hu wrote:
> This patch introduces a new PMD for DMA-accelerated vhost-user, which
> provides basic packet reception and transmission functionality. The
> PMD leverages librte_vhost to handle vhost messages, but it implements
> its own vring enqueue and dequeue operations.
> 
> The PMD leverages DMA engines (e.g., I/OAT, a DMA engine in Intel's
> processors) to accelerate large data movements in enqueue and dequeue
> operations. Large copies are offloaded to the DMA engine in an
> asynchronous mode: the CPU just submits copy jobs to the DMA engine
> without waiting for their completion, so there is no CPU intervention
> during the data transfer. Thus, we can save precious CPU cycles and
> improve the overall performance of vhost-user based applications, like
> OVS. The PMD still uses the CPU to perform small copies, due to the
> startup overhead associated with the DMA engine.
> 
> Note that the PMD is designed to support various DMA engines for
> accelerating data movements in enqueue and dequeue operations;
> currently, the only supported DMA engine is I/OAT. The PMD supports
> I/OAT acceleration only in the enqueue operation, and it still uses
> the CPU to perform all copies in the dequeue operation. In addition,
> the PMD only supports the split ring.
> 
> The DMA device used by a queue is assigned by the user; for a queue
> without an assigned DMA device, the PMD uses the CPU to perform all
> copies for both enqueue and dequeue operations. Currently, a queue can
> only use one I/OAT device, and an I/OAT device can only be used by one
> queue at a time.
> 
> The PMD has 4 parameters:
>  - iface: the path of the socket used to connect to the front-end
>  	device.
>  - queues: the number of queues the front-end device has (default
>  	is 1).
>  - client: whether the vhost port works in client mode or server
>  	mode (default is server mode).
>  - dmas: the DMA device assigned to a queue.
> 
> Here is an example.
> $ ./testpmd -c f -n 4 \
> --vdev 'dma_vhost0,iface=/tmp/sock0,queues=1,dmas=txq0@00:04.0,client=0'

dma_vhost0 is not a good name; it should mention that the driver is net-specific.

Is there a tool to list available DMA engines?

> 
> Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
> ---
>  config/common_base                                 |    2 +
>  config/common_linux                                |    1 +
>  drivers/Makefile                                   |    2 +-
>  drivers/net/Makefile                               |    1 +
>  drivers/net/vhost_dma/Makefile                     |   31 +
>  drivers/net/vhost_dma/eth_vhost.c                  | 1495 ++++++++++++++++++++
>  drivers/net/vhost_dma/eth_vhost.h                  |  264 ++++
>  drivers/net/vhost_dma/internal.h                   |  225 +++
>  .../net/vhost_dma/rte_pmd_vhost_dma_version.map    |    4 +
>  drivers/net/vhost_dma/virtio_net.c                 | 1234 ++++++++++++++++
>  mk/rte.app.mk                                      |    1 +
>  11 files changed, 3259 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/net/vhost_dma/Makefile
>  create mode 100644 drivers/net/vhost_dma/eth_vhost.c
>  create mode 100644 drivers/net/vhost_dma/eth_vhost.h
>  create mode 100644 drivers/net/vhost_dma/internal.h
>  create mode 100644 drivers/net/vhost_dma/rte_pmd_vhost_dma_version.map
>  create mode 100644 drivers/net/vhost_dma/virtio_net.c

You need to add Meson support.


More generally, I have been through the series and I'm not sure having a
dedicated PMD for this is a good idea, due to all the code duplication
it implies.

I understand it has been done this way to avoid impacting the pure SW
datapath implementation. But I'm sure the series could be reduced to a
few hundred lines if it were integrated in the vhost-user library.
Moreover, your series does not support the packed ring, which means even
more code would need to be duplicated in the end.

What do you think?

Thanks,
Maxime



Thread overview: 11+ messages
2019-09-26 14:26 [dpdk-dev] [RFC 0/2] Add a PMD for I/OAT accelerated vhost-user Jiayu Hu
2019-09-26 14:26 ` [dpdk-dev] [RFC 1/2] vhost: populate guest memory for DMA-accelerated vhost-user Jiayu Hu
2019-12-17  7:18   ` Maxime Coquelin
2019-09-26 14:26 ` [dpdk-dev] [RFC 2/2] net/vhost_ioat: add vhost I/OAT driver Jiayu Hu
2019-11-01  8:54 ` [dpdk-dev] [RFC v2 0/2] Add a PMD for DMA-accelerated vhost-user Jiayu Hu
2019-11-01  8:54   ` [dpdk-dev] [RFC v2 1/2] vhost: populate guest memory " Jiayu Hu
2019-11-01  8:54   ` [dpdk-dev] [RFC v2 2/2] net/vhost_dma: add vHost DMA driver Jiayu Hu
2019-12-17  8:27     ` Maxime Coquelin [this message]
2019-12-17 10:20       ` Maxime Coquelin
2019-12-18  2:51         ` Hu, Jiayu
2019-12-18  3:11       ` Hu, Jiayu
