From: Ilya Maximets <i.maximets@ovn.org>
To: Flavio Leitner <fbl@sysclose.org>, dev@dpdk.org
Cc: Ilya Maximets <i.maximets@ovn.org>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
Shahaf Shuler <shahafs@mellanox.com>,
David Marchand <david.marchand@redhat.com>,
Tiwei Bie <tiwei.bie@intel.com>,
Obrembski MichalX <michalx.obrembski@intel.com>,
Stokes Ian <ian.stokes@intel.com>
Subject: Re: [dpdk-dev] [PATCH v4] vhost: add support for large buffers
Date: Tue, 15 Oct 2019 19:41:52 +0200
Message-ID: <d1c9ccac-88c6-a8c2-7069-5bea2b548c38@ovn.org>
In-Reply-To: <20191015161727.32570-1-fbl@sysclose.org>
Hi.
Not a full review. Few comments inline.
Best regards, Ilya Maximets.
On 15.10.2019 18:17, Flavio Leitner wrote:
> The rte_vhost_dequeue_burst supports two ways of dequeuing data.
> If the data fits into a buffer, then all data is copied and a
> single linear buffer is returned. Otherwise it allocates
> additional mbufs and chains them together to return a multi-segment
> mbuf.
>
> While that covers most use cases, it forces applications that
> need to work with larger data sizes to support multi-segment
> mbufs. The non-linear characteristic brings complexity and
> performance implications to the application.
>
> To resolve the issue, add support for attaching an external buffer
> to a pktmbuf, and let the host application indicate during registration
> whether attaching an external buffer to a pktmbuf is supported and
> whether only linear buffers are supported.
>
> Signed-off-by: Flavio Leitner <fbl@sysclose.org>
> ---
> doc/guides/prog_guide/vhost_lib.rst | 35 +++++++++
> lib/librte_vhost/rte_vhost.h | 4 +
> lib/librte_vhost/socket.c | 22 ++++++
> lib/librte_vhost/vhost.c | 22 ++++++
> lib/librte_vhost/vhost.h | 4 +
> lib/librte_vhost/virtio_net.c | 109 ++++++++++++++++++++++++----
> 6 files changed, 182 insertions(+), 14 deletions(-)
>
>
> - Changelog:
> v4:
> - allow using the pktmbuf if there is exactly enough space
> - removed log message if the buffer is too big
> - fixed the length to include align padding
> - free allocated buf if shinfo fails
> v3:
> - prevent the new features from being used with zero copy
> - fixed sizeof() usage
> - fixed log msg indentation
> - removed/replaced asserts
> - used the correct virt2iova function
> - fixed the patch's title
> - OvS PoC code:
> https://github.com/fleitner/ovs/tree/rte_malloc-v3
> v2:
> - Used rte_malloc() instead of another mempool as suggested by Shahaf.
> - Added the documentation section.
> - Using driver registration to negotiate the features.
> - OvS PoC code:
> https://github.com/fleitner/ovs/commit/8fc197c40b1d4fda331686a7b919e9e2b670dda7
>
>
>
> diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
> index fc3ee4353..07e40e3c5 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -117,6 +117,41 @@ The following is an overview of some key Vhost API functions:
> Enabling this flag should only be done when the calling application does
> not pre-fault the guest shared memory, otherwise migration would fail.
>
> + - ``RTE_VHOST_USER_LINEARBUF_SUPPORT``
> +
> + Enabling this flag forces the vhost dequeue function to only provide linear
> + pktmbufs (no multi-segmented pktmbufs).
> +
> + The vhost library by default provides a single pktmbuf for a given
> + packet, but if for some reason the data doesn't fit into a single
> + pktmbuf (e.g., TSO is enabled), the library will allocate additional
> + pktmbufs from the same mempool and chain them together to create a
> + multi-segmented pktmbuf.
> +
> + However, the vhost application then needs to support the multi-segmented
> + format. If the application does not support that format and requires large
> + buffers to be dequeued, this flag should be enabled to force only linear
> + buffers (see RTE_VHOST_USER_EXTBUF_SUPPORT); otherwise the packet is dropped.
> +
> + It is disabled by default.
> +
> + - ``RTE_VHOST_USER_EXTBUF_SUPPORT``
> +
> + Enabling this flag allows vhost dequeue function to allocate and attach
> + an external buffer to a pktmbuf if the pktmbuf doesn't provide enough
> + space to store all data.
> +
> + This is useful when the vhost application wants to support large packets
> + but doesn't want to increase the default mempool object size nor to
> + support multi-segmented mbufs (non-linear). In this case, a fresh buffer
> + is allocated using rte_malloc() which gets attached to a pktmbuf using
> + rte_pktmbuf_attach_extbuf().
> +
> + See RTE_VHOST_USER_LINEARBUF_SUPPORT as well to disable multi-segmented
> + mbufs for applications that don't support chained mbufs.
> +
> + It is disabled by default.
> +
> * ``rte_vhost_driver_set_features(path, features)``
>
> This function sets the feature bits the vhost-user driver supports. The
> diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
> index 19474bca0..b821b5df4 100644
> --- a/lib/librte_vhost/rte_vhost.h
> +++ b/lib/librte_vhost/rte_vhost.h
> @@ -30,6 +30,10 @@ extern "C" {
> #define RTE_VHOST_USER_DEQUEUE_ZERO_COPY (1ULL << 2)
> #define RTE_VHOST_USER_IOMMU_SUPPORT (1ULL << 3)
> #define RTE_VHOST_USER_POSTCOPY_SUPPORT (1ULL << 4)
> +/* support mbuf with external buffer attached */
> +#define RTE_VHOST_USER_EXTBUF_SUPPORT (1ULL << 5)
> +/* support only linear buffers (no chained mbufs) */
> +#define RTE_VHOST_USER_LINEARBUF_SUPPORT (1ULL << 6)
>
> /** Protocol features. */
> #ifndef VHOST_USER_PROTOCOL_F_MQ
> diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
> index 274988c4d..e546be2a8 100644
> --- a/lib/librte_vhost/socket.c
> +++ b/lib/librte_vhost/socket.c
> @@ -40,6 +40,8 @@ struct vhost_user_socket {
> bool dequeue_zero_copy;
> bool iommu_support;
> bool use_builtin_virtio_net;
> + bool extbuf;
> + bool linearbuf;
>
> /*
> * The "supported_features" indicates the feature bits the
> @@ -232,6 +234,12 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
> if (vsocket->dequeue_zero_copy)
> vhost_enable_dequeue_zero_copy(vid);
>
> + if (vsocket->extbuf)
> + vhost_enable_extbuf(vid);
> +
> + if (vsocket->linearbuf)
> + vhost_enable_linearbuf(vid);
> +
> RTE_LOG(INFO, VHOST_CONFIG, "new device, handle is %d\n", vid);
>
> if (vsocket->notify_ops->new_connection) {
> @@ -870,6 +878,8 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
> goto out_free;
> }
> vsocket->dequeue_zero_copy = flags & RTE_VHOST_USER_DEQUEUE_ZERO_COPY;
> + vsocket->extbuf = flags & RTE_VHOST_USER_EXTBUF_SUPPORT;
> + vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
>
> /*
> * Set the supported features correctly for the builtin vhost-user
> @@ -894,6 +904,18 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
> * not compatible with postcopy.
> */
> if (vsocket->dequeue_zero_copy) {
> + if (vsocket->extbuf) {
> + RTE_LOG(ERR, VHOST_CONFIG,
> + "error: zero copy is incompatible with external buffers\n");
> + ret = -1;
> + goto out_free;
There should be 'out_mutex', so that the error path also destroys the
conn_mutex initialized earlier in this function.
> + }
> + if (vsocket->linearbuf) {
> + RTE_LOG(ERR, VHOST_CONFIG,
> + "error: zero copy is incompatible with linear buffers\n");
> + ret = -1;
> + goto out_free;
Ditto.
> + }
> vsocket->supported_features &= ~(1ULL << VIRTIO_F_IN_ORDER);
> vsocket->features &= ~(1ULL << VIRTIO_F_IN_ORDER);
>
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index cea44df8c..77457f538 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -605,6 +605,28 @@ vhost_set_builtin_virtio_net(int vid, bool enable)
> dev->flags &= ~VIRTIO_DEV_BUILTIN_VIRTIO_NET;
> }
>
> +void
> +vhost_enable_extbuf(int vid)
> +{
> + struct virtio_net *dev = get_device(vid);
> +
> + if (dev == NULL)
> + return;
> +
> + dev->extbuf = 1;
> +}
> +
> +void
> +vhost_enable_linearbuf(int vid)
> +{
> + struct virtio_net *dev = get_device(vid);
> +
> + if (dev == NULL)
> + return;
> +
> + dev->linearbuf = 1;
> +}
> +
> int
> rte_vhost_get_mtu(int vid, uint16_t *mtu)
> {
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 5131a97a3..0346bd118 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -302,6 +302,8 @@ struct virtio_net {
> rte_atomic16_t broadcast_rarp;
> uint32_t nr_vring;
> int dequeue_zero_copy;
> + int extbuf;
> + int linearbuf;
> struct vhost_virtqueue *virtqueue[VHOST_MAX_QUEUE_PAIRS * 2];
> #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
> char ifname[IF_NAME_SZ];
> @@ -476,6 +478,8 @@ void vhost_attach_vdpa_device(int vid, int did);
> void vhost_set_ifname(int, const char *if_name, unsigned int if_len);
> void vhost_enable_dequeue_zero_copy(int vid);
> void vhost_set_builtin_virtio_net(int vid, bool enable);
> +void vhost_enable_extbuf(int vid);
> +void vhost_enable_linearbuf(int vid);
>
> struct vhost_device_ops const *vhost_driver_callback_get(const char *path);
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 5b85b832d..da69ab1db 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1289,6 +1289,93 @@ get_zmbuf(struct vhost_virtqueue *vq)
> return NULL;
> }
>
> +static void
> +virtio_dev_extbuf_free(void *addr __rte_unused, void *opaque)
> +{
> + rte_free(opaque);
> +}
> +
> +static int
> +virtio_dev_extbuf_alloc(struct rte_mbuf *pkt, uint32_t size)
> +{
> + struct rte_mbuf_ext_shared_info *shinfo = NULL;
> + uint32_t total_len = RTE_PKTMBUF_HEADROOM + size;
> + uint16_t buf_len;
> + rte_iova_t iova;
> + void *buf;
> +
> + /* Try to use pkt buffer to store shinfo to reduce the amount of memory
> + * required, otherwise store shinfo in the new buffer.
> + */
> + if (rte_pktmbuf_tailroom(pkt) >= sizeof(*shinfo))
> + shinfo = rte_pktmbuf_mtod(pkt,
> + struct rte_mbuf_ext_shared_info *);
> + else {
> + total_len += sizeof(*shinfo) + sizeof(uintptr_t);
> + total_len = RTE_ALIGN_CEIL(total_len, sizeof(uintptr_t));
> + }
> +
> + if (unlikely(total_len > UINT16_MAX))
> + return -ENOSPC;
> +
> + buf_len = total_len;
> + buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);
> + if (unlikely(buf == NULL))
> + return -ENOMEM;
> +
> + /* Initialize shinfo */
> + if (shinfo) {
> + shinfo->free_cb = virtio_dev_extbuf_free;
> + shinfo->fcb_opaque = buf;
> + rte_mbuf_ext_refcnt_set(shinfo, 1);
> + } else {
> + shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
> + virtio_dev_extbuf_free, buf);
> + if (unlikely(shinfo == NULL)) {
> + rte_free(buf);
> + RTE_LOG(ERR, VHOST_DATA, "Failed to init shinfo\n");
> + return -1;
> + }
> + }
> +
> + iova = rte_malloc_virt2iova(buf);
> + rte_pktmbuf_attach_extbuf(pkt, buf, iova, buf_len, shinfo);
> + rte_pktmbuf_reset_headroom(pkt);
> +
> + return 0;
> +}
> +
> +/*
> + * Allocate a host supported pktmbuf.
> + */
> +static __rte_always_inline struct rte_mbuf *
> +virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
> + uint32_t data_len)
> +{
> + struct rte_mbuf *pkt = rte_pktmbuf_alloc(mp);
> +
> + if (unlikely(pkt == NULL))
> + return NULL;
> +
> + if (rte_pktmbuf_tailroom(pkt) >= data_len)
> + return pkt;
> +
> + /* attach an external buffer if supported */
> + if (dev->extbuf && !virtio_dev_extbuf_alloc(pkt, data_len))
> + return pkt;
> +
> + /* check if chained buffers are allowed */
> + if (!dev->linearbuf)
> + return pkt;
I guess the 'linearbuf' check should go before the 'extbuf' check.
The use case is that allocating several buffers from the memory pool is
probably faster than rte_malloc() plus attaching the external memory. So,
if 'linearbuf' was not requested, it might be faster to use chained mbufs.
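I.e. something like this (just a sketch of the suggested ordering, not
tested; I'm assuming the tail of the function frees the mbuf and returns
NULL when no suitable buffer can be provided, as the patch seems to do):

static __rte_always_inline struct rte_mbuf *
virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
			 uint32_t data_len)
{
	struct rte_mbuf *pkt = rte_pktmbuf_alloc(mp);

	if (unlikely(pkt == NULL))
		return NULL;

	if (rte_pktmbuf_tailroom(pkt) >= data_len)
		return pkt;

	/* Chained mbufs are allowed, so let the caller build the chain
	 * from the mempool instead of paying for rte_malloc() + attach. */
	if (!dev->linearbuf)
		return pkt;

	/* Linear-only: try to attach an external buffer. */
	if (dev->extbuf && !virtio_dev_extbuf_alloc(pkt, data_len))
		return pkt;

	/* Can't provide a large enough linear buffer. */
	rte_pktmbuf_free(pkt);
	return NULL;
}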
BTW, I'm not sure if we really need 2 separate options for this, i.e.:

  1. +linear +extbuf --> extbuf allocated
  2. +linear -extbuf --> packet dropped (is this case useful at all?)
  3. -linear +extbuf --> chained mbufs might be preferred (see above)
  4. -linear -extbuf --> chained mbufs

Case 4 is the default case. Case 1 is our target case for supporting large
buffers. Case 3 probably doesn't make much sense, as the result is equal to
case 4. Case 2 is probably not interesting for us at all, because it will
lead to random packet drops depending on the packet size.
But if only cases 1 and 4 are valid and interesting to us, we could easily
merge both flags.
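Just to illustrate, from the application point of view case 1 would boil
down to something like this (an illustration only; the helper name is made
up, and RTE_VHOST_USER_CLIENT just stands in for whatever flags the
application already passes):

#include <rte_vhost.h>

/* Hypothetical helper: register a vhost-user port that wants large
 * buffers but can only handle linear (non-chained) mbufs. */
static int
register_linear_extbuf_vhost(const char *path)
{
	uint64_t flags = RTE_VHOST_USER_CLIENT |
			 RTE_VHOST_USER_EXTBUF_SUPPORT |
			 RTE_VHOST_USER_LINEARBUF_SUPPORT;

	return rte_vhost_driver_register(path, flags);
}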
Thoughts?