From: "Gavin Hu (Arm Technology China)" <Gavin.Hu@arm.com>
To: Maxime Coquelin, dev@dpdk.org, jfreimann@redhat.com, tiwei.bie@intel.com, zhihong.wang@intel.com
CC: nd
Thread-Topic: [dpdk-dev] [PATCH v3 3/3] net/virtio: improve batching in mergeable path
Date: Fri, 21 Dec 2018 06:27:31 +0000
References: <20181220172718.9615-1-maxime.coquelin@redhat.com> <20181220172718.9615-4-maxime.coquelin@redhat.com>
In-Reply-To: <20181220172718.9615-4-maxime.coquelin@redhat.com>
Subject: Re: [dpdk-dev] [PATCH v3 3/3] net/virtio: improve batching in mergeable path
List-Id: DPDK patches and discussions
> -----Original Message-----
> From: dev On Behalf Of Maxime Coquelin
> Sent: Friday, December 21, 2018 1:27 AM
> To: dev@dpdk.org; jfreimann@redhat.com; tiwei.bie@intel.com;
> zhihong.wang@intel.com
> Cc: Maxime Coquelin
> Subject: [dpdk-dev] [PATCH v3 3/3] net/virtio: improve batching in
> mergeable path
>
> This patch improves both descriptors dequeue and refill,
> by using the same batching strategy as done in in-order path.
>
> Signed-off-by: Maxime Coquelin
> Tested-by: Jens Freimann
> Reviewed-by: Jens Freimann
> ---
>  drivers/net/virtio/virtio_rxtx.c | 239 +++++++++++++++++--------------
>  1 file changed, 129 insertions(+), 110 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index 58376ced3..1cfa2f0d6 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -353,41 +353,44 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
>  }
>
>  static inline int
> -virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
> +virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
> +				uint16_t num)
>  {
>  	struct vq_desc_extra *dxp;
>  	struct virtio_hw *hw = vq->hw;
> -	struct vring_desc *start_dp;
> -	uint16_t needed = 1;
> -	uint16_t head_idx, idx;
> +	struct vring_desc *start_dp = vq->vq_ring.desc;
> +	uint16_t idx, i;
>
>  	if (unlikely(vq->vq_free_cnt == 0))
>  		return -ENOSPC;
> -	if (unlikely(vq->vq_free_cnt < needed))
> +	if (unlikely(vq->vq_free_cnt < num))
>  		return -EMSGSIZE;
>
> -	head_idx = vq->vq_desc_head_idx;
> -	if (unlikely(head_idx >= vq->vq_nentries))
> +	if (unlikely(vq->vq_desc_head_idx >= vq->vq_nentries))
>  		return -EFAULT;
>
> -	idx = head_idx;
> -	dxp = &vq->vq_descx[idx];
> -	dxp->cookie = (void *)cookie;
> -	dxp->ndescs = needed;
> +	for (i = 0; i < num; i++) {
> +		idx = vq->vq_desc_head_idx;
> +		dxp = &vq->vq_descx[idx];
> +		dxp->cookie = (void *)cookie[i];
> +		dxp->ndescs = 1;
>
> -	start_dp = vq->vq_ring.desc;
> -	start_dp[idx].addr =
> -		VIRTIO_MBUF_ADDR(cookie, vq) +
> -		RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> -	start_dp[idx].len =
> -		cookie->buf_len - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
> -	start_dp[idx].flags = VRING_DESC_F_WRITE;
> -	idx = start_dp[idx].next;
> -	vq->vq_desc_head_idx = idx;
> -	if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
> -		vq->vq_desc_tail_idx = idx;
> -	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
> -	vq_update_avail_ring(vq, head_idx);
> +		start_dp[idx].addr =
> +			VIRTIO_MBUF_ADDR(cookie[i], vq) +
> +			RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
> +		start_dp[idx].len =
> +			cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM +
> +			hw->vtnet_hdr_size;
> +		start_dp[idx].flags = VRING_DESC_F_WRITE;
> +		vq->vq_desc_head_idx = start_dp[idx].next;
> +		vq_update_avail_ring(vq, idx);
> +		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END) {
> +			vq->vq_desc_tail_idx = vq->vq_desc_head_idx;
> +			break;
> +		}
> +	}
> +
> +	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
>
>  	return 0;
>  }
> @@ -892,7 +895,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
>  				error = virtqueue_enqueue_recv_refill_packed(vq,
>  						&m, 1);
>  			else
> -				error = virtqueue_enqueue_recv_refill(vq, m);
> +				error = virtqueue_enqueue_recv_refill(vq,
> +						&m, 1);
>  			if (error) {
>  				rte_pktmbuf_free(m);
>  				break;
> @@ -991,7 +995,7 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
>  	if (vtpci_packed_queue(vq->hw))
>  		error = virtqueue_enqueue_recv_refill_packed(vq, &m, 1);
>  	else
> -		error = virtqueue_enqueue_recv_refill(vq, m);
> +		error = virtqueue_enqueue_recv_refill(vq, &m, 1);
>
>  	if (unlikely(error)) {
>  		RTE_LOG(ERR, PMD, "cannot requeue discarded mbuf");
> @@ -1211,7 +1215,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>  			dev->data->rx_mbuf_alloc_failed++;
>  			break;
>  		}
> -		error = virtqueue_enqueue_recv_refill(vq, new_mbuf);
> +		error = virtqueue_enqueue_recv_refill(vq, &new_mbuf, 1);
>  		if (unlikely(error)) {
>  			rte_pktmbuf_free(new_mbuf);
>  			break;
> @@ -1528,19 +1532,18 @@ virtio_recv_mergeable_pkts(void *rx_queue,
>  	struct virtnet_rx *rxvq = rx_queue;
>  	struct virtqueue *vq = rxvq->vq;
>  	struct virtio_hw *hw = vq->hw;
> -	struct rte_mbuf *rxm, *new_mbuf;
> -	uint16_t nb_used, num, nb_rx;
> +	struct rte_mbuf *rxm;
> +	struct rte_mbuf *prev;
> +	uint16_t nb_used, num, nb_rx = 0;
>  	uint32_t len[VIRTIO_MBUF_BURST_SZ];
>  	struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
> -	struct rte_mbuf *prev;
>  	int error;
> -	uint32_t i, nb_enqueued;
> -	uint32_t seg_num;
> -	uint16_t extra_idx;
> -	uint32_t seg_res;
> -	uint32_t hdr_size;
> +	uint32_t nb_enqueued = 0;
> +	uint32_t seg_num = 0;
> +	uint32_t seg_res = 0;
> +	uint32_t hdr_size = hw->vtnet_hdr_size;
> +	int32_t i;
>
> -	nb_rx = 0;
>  	if (unlikely(hw->started == 0))
>  		return nb_rx;
>
> @@ -1550,31 +1553,25 @@ virtio_recv_mergeable_pkts(void *rx_queue,
>
>  	PMD_RX_LOG(DEBUG, "used:%d", nb_used);
>
> -	i = 0;
> -	nb_enqueued = 0;
> -	seg_num = 0;
> -	extra_idx = 0;
> -	seg_res = 0;
> -	hdr_size = hw->vtnet_hdr_size;
> -
> -	while (i < nb_used) {
> -		struct virtio_net_hdr_mrg_rxbuf *header;
> +	num = likely(nb_used <= nb_pkts) ? nb_used : nb_pkts;
> +	if (unlikely(num > VIRTIO_MBUF_BURST_SZ))
> +		num = VIRTIO_MBUF_BURST_SZ;
> +	if (likely(num > DESC_PER_CACHELINE))
> +		num = num - ((vq->vq_used_cons_idx + num) %
> +				DESC_PER_CACHELINE);
>
> -		if (nb_rx == nb_pkts)
> -			break;
>
> -		num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, 1);
> -		if (num != 1)
> -			continue;
> +	num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, num);
>
> -		i++;
> +	for (i = 0; i < num; i++) {
> +		struct virtio_net_hdr_mrg_rxbuf *header;
>
>  		PMD_RX_LOG(DEBUG, "dequeue:%d", num);
> -		PMD_RX_LOG(DEBUG, "packet len:%d", len[0]);
> +		PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
>
> -		rxm = rcv_pkts[0];
> +		rxm = rcv_pkts[i];
>
> -		if (unlikely(len[0] < hdr_size + ETHER_HDR_LEN)) {
> +		if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
>  			PMD_RX_LOG(ERR, "Packet drop");
>  			nb_enqueued++;
>  			virtio_discard_rxbuf(vq, rxm);
> @@ -1582,10 +1579,10 @@ virtio_recv_mergeable_pkts(void *rx_queue,
>  			continue;
>  		}
>
> -		header = (struct virtio_net_hdr_mrg_rxbuf *)((char *)rxm->buf_addr +
> -			RTE_PKTMBUF_HEADROOM - hdr_size);
> +		header = (struct virtio_net_hdr_mrg_rxbuf *)
> +			((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM
> +			- hdr_size);
>  		seg_num = header->num_buffers;
> -
>  		if (seg_num == 0)
>  			seg_num = 1;
>
> @@ -1593,10 +1590,11 @@ virtio_recv_mergeable_pkts(void *rx_queue,
>  		rxm->nb_segs = seg_num;
>  		rxm->ol_flags = 0;
>  		rxm->vlan_tci = 0;
> -		rxm->pkt_len = (uint32_t)(len[0] - hdr_size);
> -		rxm->data_len = (uint16_t)(len[0] - hdr_size);
> +		rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
> +		rxm->data_len = (uint16_t)(len[i] - hdr_size);
>
>  		rxm->port = rxvq->port_id;
> +
>  		rx_pkts[nb_rx] = rxm;
>  		prev = rxm;
>
> @@ -1607,75 +1605,96 @@ virtio_recv_mergeable_pkts(void *rx_queue,
>  			continue;
>  		}
>
> +		if (hw->vlan_strip)
> +			rte_vlan_strip(rx_pkts[nb_rx]);
> +
>  		seg_res = seg_num - 1;
>
> -		while (seg_res != 0) {
> -			/*
> -			 * Get extra segments for current uncompleted packet.
> -			 */
> -			uint16_t rcv_cnt =
> -				RTE_MIN(seg_res, RTE_DIM(rcv_pkts));
> -			if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
> -				uint32_t rx_num =
> -					virtqueue_dequeue_burst_rx(vq,
> -					rcv_pkts, len, rcv_cnt);
> -				i += rx_num;
> -				rcv_cnt = rx_num;
> -			} else {
> -				PMD_RX_LOG(ERR,
> -					   "No enough segments for packet.");
> -				nb_enqueued++;
> -				virtio_discard_rxbuf(vq, rxm);
> -				rxvq->stats.errors++;
> -				break;
> -			}
> +		/* Merge remaining segments */
> +		while (seg_res != 0 && i < (num - 1)) {
> +			i++;
> +
> +			rxm = rcv_pkts[i];
> +			rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
> +			rxm->pkt_len = (uint32_t)(len[i]);
> +			rxm->data_len = (uint16_t)(len[i]);
> +
> +			rx_pkts[nb_rx]->pkt_len += (uint32_t)(len[i]);
> +			rx_pkts[nb_rx]->data_len += (uint16_t)(len[i]);
> +
> +			if (prev)
> +				prev->next = rxm;
> +
> +			prev = rxm;
> +			seg_res -= 1;
> +		}
> +
> +		if (!seg_res) {
> +			virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
> +			nb_rx++;
> +		}
> +	}
> +
> +	/* Last packet still need merge segments */
> +	while (seg_res != 0) {
> +		uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res,
> +					VIRTIO_MBUF_BURST_SZ);
>
> -			extra_idx = 0;
> +		prev = rcv_pkts[nb_rx];
> +		if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
> +			num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len,
> +							rcv_cnt);
> +			uint16_t extra_idx = 0;
>
> +			rcv_cnt = num;
>  			while (extra_idx < rcv_cnt) {
>  				rxm = rcv_pkts[extra_idx];
> -
> -				rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
> +				rxm->data_off =
> +					RTE_PKTMBUF_HEADROOM - hdr_size;
>  				rxm->pkt_len = (uint32_t)(len[extra_idx]);
>  				rxm->data_len = (uint16_t)(len[extra_idx]);
> -
> -				if (prev)
> -					prev->next = rxm;
> -
> +				prev->next = rxm;
>  				prev = rxm;
> -				rx_pkts[nb_rx]->pkt_len += rxm->pkt_len;
> -				extra_idx++;
> +				rx_pkts[nb_rx]->pkt_len += len[extra_idx];
> +				rx_pkts[nb_rx]->data_len += len[extra_idx];
> +				extra_idx += 1;
>  			};
>  			seg_res -= rcv_cnt;
> -		}
> -
> -		if (hw->vlan_strip)
> -			rte_vlan_strip(rx_pkts[nb_rx]);
> -
> -		VIRTIO_DUMP_PACKET(rx_pkts[nb_rx],
> -			rx_pkts[nb_rx]->data_len);
>
> -		virtio_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]);
> -		nb_rx++;
> +			if (!seg_res) {
> +				virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
> +				nb_rx++;
> +			}
> +		} else {
> +			PMD_RX_LOG(ERR,
> +				   "No enough segments for packet.");
> +			virtio_discard_rxbuf(vq, prev);
> +			rxvq->stats.errors++;
> +			break;
> +		}
>  	}
>
>  	rxvq->stats.packets += nb_rx;
>
>  	/* Allocate new mbuf for the used descriptor */
> -	while (likely(!virtqueue_full(vq))) {
> -		new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);
> -		if (unlikely(new_mbuf == NULL)) {
> -			struct rte_eth_dev *dev
> -				= &rte_eth_devices[rxvq->port_id];
> -			dev->data->rx_mbuf_alloc_failed++;
> -			break;
> -		}
> -		error = virtqueue_enqueue_recv_refill(vq, new_mbuf);
> -		if (unlikely(error)) {
> -			rte_pktmbuf_free(new_mbuf);
> -			break;
> +	if (likely(!virtqueue_full(vq))) {
> +		/* free_cnt may include mrg descs */
> +		uint16_t free_cnt = vq->vq_free_cnt;
> +		struct rte_mbuf *new_pkts[free_cnt];
> +
> +		if (!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) {
> +			error = virtqueue_enqueue_recv_refill(vq, new_pkts,
> +					free_cnt);
> +			if (unlikely(error)) {
> +				for (i = 0; i < free_cnt; i++)
> +					rte_pktmbuf_free(new_pkts[i]);

Missing error handling here? The execution keeps going on without the mbufs?
/Gavin

> +			}
> +			nb_enqueued += free_cnt;
> +		} else {
> +			struct rte_eth_dev *dev =
> +				&rte_eth_devices[rxvq->port_id];
> +			dev->data->rx_mbuf_alloc_failed += free_cnt;
>  		}
> -		nb_enqueued++;
>  	}
>
>  	if (likely(nb_enqueued)) {
> --
> 2.17.2