From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shahaf Shuler
To: Flavio Leitner, "dev@dpdk.org"
CC: Ilya Maximets, Maxime Coquelin, David Marchand, Tiwei Bie,
 Obrembski MichalX, Stokes Ian
Thread-Topic: [dpdk-dev] [PATCH v2] vhost: add support for large buffers.
Date: Sun, 6 Oct 2019 04:47:26 +0000
References: <20191001221935.12140-1-fbl@sysclose.org>
 <20191004201008.3981-1-fbl@sysclose.org>
In-Reply-To: <20191004201008.3981-1-fbl@sysclose.org>
Subject: Re: [dpdk-dev] [PATCH v2] vhost: add support for large buffers.
List-Id: DPDK patches and discussions

Friday, October 4, 2019 11:10 PM, Flavio Leitner:
> Subject: [dpdk-dev] [PATCH v2] vhost: add support for large buffers.
>
> The rte_vhost_dequeue_burst supports two ways of dequeuing data. If the
> data fits into a buffer, then all data is copied and a single linear
> buffer is returned.
> Otherwise it allocates additional mbufs and chains them together to
> return a multi-segment mbuf.
>
> While that covers most use cases, it forces applications that need to
> work with larger data sizes to support multi-segment mbufs. The
> non-linear characteristic brings complexity and performance implications
> to the application.
>
> To resolve the issue, let the host provide during registration whether
> attaching an external buffer to a pktmbuf is supported and whether only
> linear buffers are supported.
>
> Signed-off-by: Flavio Leitner

Haven't reviewed the code in detail, but for the high-level direction:

Acked-by: Shahaf Shuler
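As a reference for API consumers, a minimal sketch of how an application
could opt in to the new behavior (not part of the patch; the socket path
and callback wiring are hypothetical, error handling trimmed):

#include <rte_vhost.h>

static int
setup_vhost_large_bufs(const char *path)
{
        /* EXTBUF: data larger than the mempool object size lands in an
         * rte_malloc()'ed buffer attached to the mbuf.
         * LINEARBUF: never return chained mbufs; together the two flags
         * guarantee every dequeued packet is a single linear mbuf. */
        uint64_t flags = RTE_VHOST_USER_EXTBUF_SUPPORT |
                         RTE_VHOST_USER_LINEARBUF_SUPPORT;

        if (rte_vhost_driver_register(path, flags) != 0)
                return -1;

        /* Device callbacks would be set up here via
         * rte_vhost_driver_callback_register() before starting. */
        if (rte_vhost_driver_start(path) != 0) {
                rte_vhost_driver_unregister(path);
                return -1;
        }

        return 0;
}

With both flags set the application never sees a chained mbuf: packets
that fit the mempool object are returned as before, and larger ones
arrive in a single mbuf backed by an external rte_malloc() area.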
> ---
>  doc/guides/prog_guide/vhost_lib.rst |  35 ++++++++++
>  lib/librte_vhost/rte_vhost.h        |   4 ++
>  lib/librte_vhost/socket.c           |  10 +++
>  lib/librte_vhost/vhost.c            |  22 ++++++
>  lib/librte_vhost/vhost.h            |   4 ++
>  lib/librte_vhost/virtio_net.c       | 103 ++++++++++++++++++++++++++--
>  6 files changed, 172 insertions(+), 6 deletions(-)
>
> - Changelog:
>   V2:
>     - Used rte_malloc() instead of another mempool as suggested by Shahaf.
>     - Added the documentation section.
>     - Using driver registration to negotiate the features.
>     - OvS PoC code:
>       https://github.com/fleitner/ovs/commit/8fc197c40b1d4fda331686a7b919e9e2b670dda7
>
> diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
> index fc3ee4353..07e40e3c5 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -117,6 +117,41 @@ The following is an overview of some key Vhost API functions:
>      Enabling this flag should only be done when the calling application does
>      not pre-fault the guest shared memory, otherwise migration would fail.
>
> +  - ``RTE_VHOST_USER_LINEARBUF_SUPPORT``
> +
> +    Enabling this flag forces the vhost dequeue function to only provide
> +    linear pktmbufs (no multi-segmented pktmbuf).
> +
> +    The vhost library by default provides a single pktmbuf for a given
> +    packet, but if for some reason the data doesn't fit into a single
> +    pktmbuf (e.g., TSO is enabled), the library will allocate additional
> +    pktmbufs from the same mempool and chain them together to create a
> +    multi-segmented pktmbuf.
> +
> +    However, the vhost application then needs to support the
> +    multi-segmented format. If the vhost application does not support that
> +    format and requires large buffers to be dequeued, this flag should be
> +    enabled to force only linear buffers (see
> +    RTE_VHOST_USER_EXTBUF_SUPPORT) or drop the packet.
> +
> +    It is disabled by default.
> +
> +  - ``RTE_VHOST_USER_EXTBUF_SUPPORT``
> +
> +    Enabling this flag allows the vhost dequeue function to allocate and
> +    attach an external buffer to a pktmbuf if the pktmbuf doesn't provide
> +    enough space to store all of the data.
> +
> +    This is useful when the vhost application wants to support large
> +    packets but doesn't want to increase the default mempool object size
> +    nor to support multi-segmented mbufs (non-linear). In this case, a
> +    fresh buffer is allocated using rte_malloc() which gets attached to a
> +    pktmbuf using rte_pktmbuf_attach_extbuf().
> +
> +    See RTE_VHOST_USER_LINEARBUF_SUPPORT as well to disable
> +    multi-segmented mbufs for applications that don't support chained
> +    mbufs.
> +
> +    It is disabled by default.
> +
>  * ``rte_vhost_driver_set_features(path, features)``
>
>    This function sets the feature bits the vhost-user driver supports. The
>
> diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
> index 19474bca0..b821b5df4 100644
> --- a/lib/librte_vhost/rte_vhost.h
> +++ b/lib/librte_vhost/rte_vhost.h
> @@ -30,6 +30,10 @@ extern "C" {
>  #define RTE_VHOST_USER_DEQUEUE_ZERO_COPY	(1ULL << 2)
>  #define RTE_VHOST_USER_IOMMU_SUPPORT		(1ULL << 3)
>  #define RTE_VHOST_USER_POSTCOPY_SUPPORT		(1ULL << 4)
> +/* support mbuf with external buffer attached */
> +#define RTE_VHOST_USER_EXTBUF_SUPPORT		(1ULL << 5)
> +/* support only linear buffers (no chained mbufs) */
> +#define RTE_VHOST_USER_LINEARBUF_SUPPORT	(1ULL << 6)
>
>  /** Protocol features. */
>  #ifndef VHOST_USER_PROTOCOL_F_MQ
> diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
> index 274988c4d..0ba610fda 100644
> --- a/lib/librte_vhost/socket.c
> +++ b/lib/librte_vhost/socket.c
> @@ -40,6 +40,8 @@ struct vhost_user_socket {
>  	bool dequeue_zero_copy;
>  	bool iommu_support;
>  	bool use_builtin_virtio_net;
> +	bool extbuf;
> +	bool linearbuf;
>
>  	/*
>  	 * The "supported_features" indicates the feature bits the
> @@ -232,6 +234,12 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
>  	if (vsocket->dequeue_zero_copy)
>  		vhost_enable_dequeue_zero_copy(vid);
>
> +	if (vsocket->extbuf)
> +		vhost_enable_extbuf(vid);
> +
> +	if (vsocket->linearbuf)
> +		vhost_enable_linearbuf(vid);
> +
>  	RTE_LOG(INFO, VHOST_CONFIG, "new device, handle is %d\n", vid);
>
>  	if (vsocket->notify_ops->new_connection) {
> @@ -870,6 +878,8 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
>  		goto out_free;
>  	}
>  	vsocket->dequeue_zero_copy = flags & RTE_VHOST_USER_DEQUEUE_ZERO_COPY;
> +	vsocket->extbuf = flags & RTE_VHOST_USER_EXTBUF_SUPPORT;
> +	vsocket->linearbuf = flags & RTE_VHOST_USER_LINEARBUF_SUPPORT;
>
>  	/*
>  	 * Set the supported features correctly for the builtin vhost-user
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index cea44df8c..77457f538 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -605,6 +605,28 @@ vhost_set_builtin_virtio_net(int vid, bool enable)
>  		dev->flags &= ~VIRTIO_DEV_BUILTIN_VIRTIO_NET;
>  }
>
> +void
> +vhost_enable_extbuf(int vid)
> +{
> +	struct virtio_net *dev = get_device(vid);
> +
> +	if (dev == NULL)
> +		return;
> +
> +	dev->extbuf = 1;
> +}
> +
> +void
> +vhost_enable_linearbuf(int vid)
> +{
> +	struct virtio_net *dev = get_device(vid);
> +
> +	if (dev == NULL)
> +		return;
> +
> +	dev->linearbuf = 1;
> +}
> +
>  int
>  rte_vhost_get_mtu(int vid, uint16_t *mtu)
>  {
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 5131a97a3..0346bd118 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -302,6 +302,8 @@ struct virtio_net {
>  	rte_atomic16_t		broadcast_rarp;
>  	uint32_t		nr_vring;
>  	int			dequeue_zero_copy;
> +	int			extbuf;
> +	int			linearbuf;
>  	struct vhost_virtqueue	*virtqueue[VHOST_MAX_QUEUE_PAIRS * 2];
>  #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
>  	char			ifname[IF_NAME_SZ];
> @@ -476,6 +478,8 @@ void vhost_attach_vdpa_device(int vid, int did);
>  void vhost_set_ifname(int, const char *if_name, unsigned int if_len);
>  void vhost_enable_dequeue_zero_copy(int vid);
>  void vhost_set_builtin_virtio_net(int vid, bool enable);
> +void vhost_enable_extbuf(int vid);
> +void vhost_enable_linearbuf(int vid);
>
>  struct vhost_device_ops const *vhost_driver_callback_get(const char *path);
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 5b85b832d..fca75161d 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -2,6 +2,7 @@
>   * Copyright(c) 2010-2016 Intel Corporation
>   */
>
> +#include <assert.h>
>  #include <stdint.h>
>  #include <stdbool.h>
>  #include <linux/virtio_net.h>
> @@ -1289,6 +1290,96 @@ get_zmbuf(struct vhost_virtqueue *vq)
>  	return NULL;
>  }
>
> +static void
> +virtio_dev_extbuf_free(void *addr __rte_unused, void *opaque)
> +{
> +	rte_free(opaque);
> +}
> +
> +static int
> +virtio_dev_extbuf_alloc(struct rte_mbuf *pkt, uint16_t size)
> +{
> +	struct rte_mbuf_ext_shared_info *shinfo;
> +	uint16_t buf_len;
> +	rte_iova_t iova;
> +	void *buf;
> +
> +	shinfo = NULL;
> +	buf_len = size + RTE_PKTMBUF_HEADROOM;
> +
> +	/* Try to use pkt buffer to store shinfo to reduce the amount of memory
> +	 * required, otherwise store shinfo in the new buffer.
> +	 */
> +	if (rte_pktmbuf_tailroom(pkt) > sizeof(*shinfo))
> +		shinfo = rte_pktmbuf_mtod(pkt,
> +					  struct rte_mbuf_ext_shared_info *);
> +	else {
> +		if (unlikely(buf_len + sizeof(shinfo) > UINT16_MAX)) {
> +			RTE_LOG(ERR, VHOST_DATA,
> +				"buffer size exceeded maximum.\n");
> +			return -ENOSPC;
> +		}
> +
> +		buf_len += sizeof(shinfo);
> +	}
> +
> +	buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);
> +	if (unlikely(buf == NULL)) {
> +		RTE_LOG(ERR, VHOST_DATA,
> +			"Failed to allocate memory for mbuf.\n");
> +		return -ENOMEM;
> +	}
> +
> +	/* initialize shinfo */
> +	if (shinfo) {
> +		shinfo->free_cb = virtio_dev_extbuf_free;
> +		shinfo->fcb_opaque = buf;
> +		rte_mbuf_ext_refcnt_set(shinfo, 1);
> +	} else {
> +		shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
> +					virtio_dev_extbuf_free, buf);
> +		assert(shinfo);
> +	}
> +
> +	iova = rte_mempool_virt2iova(buf);
> +	rte_pktmbuf_attach_extbuf(pkt, buf, iova, buf_len, shinfo);
> +	rte_pktmbuf_reset_headroom(pkt);
> +	assert(pkt->ol_flags == EXT_ATTACHED_MBUF);
> +
> +	return 0;
> +}
> +
> +/*
> + * Allocate a host supported pktmbuf.
> + */
> +static __rte_always_inline struct rte_mbuf *
> +virtio_dev_pktmbuf_alloc(struct virtio_net *dev, struct rte_mempool *mp,
> +			 uint16_t data_len)
> +{
> +	struct rte_mbuf *pkt = rte_pktmbuf_alloc(mp);
> +
> +	if (unlikely(pkt == NULL))
> +		return NULL;
> +
> +	if (rte_pktmbuf_tailroom(pkt) >= data_len)
> +		return pkt;
> +
> +	/* attach an external buffer if supported */
> +	if (dev->extbuf && !virtio_dev_extbuf_alloc(pkt, data_len))
> +		return pkt;
> +
> +	/* check if chained buffers are allowed */
> +	if (!dev->linearbuf)
> +		return pkt;
> +
> +	/* Data doesn't fit into the buffer and the host supports
> +	 * only linear buffers
> +	 */
> +	rte_pktmbuf_free(pkt);
> +
> +	return NULL;
> +}
> +
>  static __rte_noinline uint16_t
>  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> @@ -1343,21 +1434,21 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	for (i = 0; i < count; i++) {
>  		struct buf_vector buf_vec[BUF_VECTOR_MAX];
>  		uint16_t head_idx;
> -		uint32_t dummy_len;
> +		uint32_t buf_len;
>  		uint16_t nr_vec = 0;
>  		int err;
>
>  		if (unlikely(fill_vec_buf_split(dev, vq,
>  						vq->last_avail_idx + i,
>  						&nr_vec, buf_vec,
> -						&head_idx, &dummy_len,
> +						&head_idx, &buf_len,
>  						VHOST_ACCESS_RO) < 0))
>  			break;
>
>  		if (likely(dev->dequeue_zero_copy == 0))
>  			update_shadow_used_ring_split(vq, head_idx, 0);
>
> -		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
> +		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
>  		if (unlikely(pkts[i] == NULL)) {
>  			RTE_LOG(ERR, VHOST_DATA,
>  				"Failed to allocate memory for mbuf.\n");
> @@ -1451,14 +1542,14 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	for (i = 0; i < count; i++) {
>  		struct buf_vector buf_vec[BUF_VECTOR_MAX];
>  		uint16_t buf_id;
> -		uint32_t dummy_len;
> +		uint32_t buf_len;
>  		uint16_t desc_count, nr_vec = 0;
>  		int err;
>
>  		if (unlikely(fill_vec_buf_packed(dev, vq,
>  						vq->last_avail_idx, &desc_count,
>  						buf_vec, &nr_vec,
> -						&buf_id, &dummy_len,
> +						&buf_id, &buf_len,
>  						VHOST_ACCESS_RO) < 0))
>  			break;
>
> @@ -1466,7 +1557,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  		if (likely(dev->dequeue_zero_copy == 0))
>  			update_shadow_used_ring_packed(vq, buf_id, 0,
>  					desc_count);
>
> -		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
> +		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
>  		if (unlikely(pkts[i] == NULL)) {
>  			RTE_LOG(ERR, VHOST_DATA,
>  				"Failed to allocate memory for mbuf.\n");
> --
> 2.20.1
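To round this out, a rough sketch of the consuming side (again not from
the patch): with both flags set at registration, every mbuf returned by
rte_vhost_dequeue_burst() is a single segment, and rte_pktmbuf_free()
releases any attached external buffer through the shinfo free callback
(virtio_dev_extbuf_free() above). The queue id, burst size, and the
process_packet_data() handler are illustrative.

#include <rte_mbuf.h>
#include <rte_vhost.h>

#define MAX_BURST 32

/* Hypothetical application handler for one linear packet. */
static void process_packet_data(void *data, uint32_t len);

static void
drain_vhost_queue(int vid, uint16_t queue_id, struct rte_mempool *pool)
{
        struct rte_mbuf *pkts[MAX_BURST];
        uint16_t i, nb;

        nb = rte_vhost_dequeue_burst(vid, queue_id, pool, pkts, MAX_BURST);
        for (i = 0; i < nb; i++) {
                /* No chain walking needed: the packet is one segment even
                 * when larger than the mempool object size (e.g., TSO). */
                process_packet_data(rte_pktmbuf_mtod(pkts[i], void *),
                                    rte_pktmbuf_pkt_len(pkts[i]));

                /* Also frees the external rte_malloc() area, if any,
                 * via the registered free callback. */
                rte_pktmbuf_free(pkts[i]);
        }
}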