DPDK patches and discussions
From: Ouyang Changchun <changchun.ouyang@intel.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v3 0/3] Support zero copy RX/TX in user space vhost
Date: Wed, 28 May 2014 16:06:35 +0800	[thread overview]
Message-ID: <1401264398-1289-1-git-send-email-changchun.ouyang@intel.com> (raw)

This v3 fixes the errors and warnings reported by checkpatch.pl. Please ignore the previous
two versions (v1 and v2) and apply only this v3 series for zero-copy RX/TX in user space vhost.

This patch series supports zero-copy user space vhost: it removes packet copying between host and guest
in RX/TX. It introduces an extra ring to store detached mbufs. At initialization, all mbufs are put into
this ring; when a guest starts, vhost gets the buffer addresses the guest allocated for RX, translates
them into host virtual addresses, attaches them to mbufs, and puts the attached mbufs into a mempool.
 
Queue start and DMA refill get mbufs from the mempool and use them to set the DMA addresses.
 
For TX, vhost gets the buffer addresses of the packets the guest has made available for transmission,
translates them into host virtual addresses, attaches them to mbufs, and puts the mbufs into the TX queue.
After TX completes, it pulls the mbufs out of the mempool, detaches the buffers, and returns the mbufs to the extra ring.
 
This patch series also implements queue start and stop functionality in the IXGBE PMD, and enables
hardware loopback for VMDQ mode in the IXGBE PMD.

Ouyang Changchun (3):
  Add API to support queue start and stop functionality for RX/TX.
  Implement queue start and stop functionality in IXGBE PMD; Enable
    hardware loopback for VMDQ mode in IXGBE PMD.
  Support user space vhost zero copy, it removes packets copying between
    host and guest in RX/TX.

 examples/vhost/main.c                    | 1476 ++++++++++++++++++++++++++++--
 examples/vhost/virtio-net.c              |  186 +++-
 examples/vhost/virtio-net.h              |   23 +-
 lib/librte_eal/linuxapp/eal/eal_memory.c |    2 +-
 lib/librte_ether/rte_ethdev.c            |  104 +++
 lib/librte_ether/rte_ethdev.h            |   80 ++
 lib/librte_pmd_ixgbe/ixgbe_ethdev.c      |    4 +
 lib/librte_pmd_ixgbe/ixgbe_ethdev.h      |    8 +
 lib/librte_pmd_ixgbe/ixgbe_rxtx.c        |  239 ++++-
 lib/librte_pmd_ixgbe/ixgbe_rxtx.h        |    6 +
 10 files changed, 2028 insertions(+), 100 deletions(-)

-- 
1.9.0

Thread overview: 5+ messages
2014-05-28  8:06 Ouyang Changchun [this message]
2014-05-28  8:06 ` [dpdk-dev] [PATCH v3 1/3] ethdev: Add API to support queue start and stop functionality for RX/TX Ouyang Changchun
2014-05-28  8:06 ` [dpdk-dev] [PATCH v3 2/3] ixgbe: Implement queue start and stop functionality in IXGBE PMD Ouyang Changchun
2014-05-28  8:06 ` [dpdk-dev] [PATCH v3 3/3] examples/vhost: Support user space vhost zero copy Ouyang Changchun
2014-05-28 14:13 ` [dpdk-dev] [PATCH v3 0/3] Support zero copy RX/TX in user space vhost Thomas Monjalon
