From: Kevin Traynor
To: Stephen Hemminger
Cc: dpdk stable
Date: Thu, 28 May 2020 17:22:43 +0100
Message-Id: <20200528162322.7863-56-ktraynor@redhat.com>
In-Reply-To: <20200528162322.7863-1-ktraynor@redhat.com>
References: <20200528162322.7863-1-ktraynor@redhat.com>
Subject: [dpdk-stable] patch 'net/netvsc: split send buffers from Tx descriptors' has been queued to LTS release 18.11.9

Hi,

FYI, your patch has been queued to LTS release 18.11.9.

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 06/03/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (i.e. not only metadata diffs), please double-check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable-queue

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable-queue/commit/570ceb4cce401850005b7590a4e669533da73173

Thanks.

Kevin.
---
>From 570ceb4cce401850005b7590a4e669533da73173 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger
Date: Tue, 31 Mar 2020 10:13:59 -0700
Subject: [PATCH] net/netvsc: split send buffers from Tx descriptors

[ upstream commit cc0251813277fcf43b930b43ab4a423ed7536120 ]

The VMBus has a reserved transmit area (per device) and transmit
descriptors (per queue). The previous code always had a 1:1 mapping
between send buffers and descriptors. This can lead to one queue
starving another, and also to buffer bloat.

Change to work more like FreeBSD, where there is a pool of transmit
descriptors per queue. If a send buffer is not available, then no
aggregation happens, but the queue can still drain.

Fixes: 4e9c73e96e83 ("net/netvsc: add Hyper-V network device")

Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_ethdev.c |   9 +-
 drivers/net/netvsc/hn_rxtx.c   | 269 ++++++++++++++++++++-------------
 drivers/net/netvsc/hn_var.h    |  10 +-
 3 files changed, 179 insertions(+), 109 deletions(-)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 04efd092ec..d452bb4f7e 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -241,4 +241,7 @@ static void hn_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_tx_queues = hv->max_queues;
 
+	dev_info->tx_desc_lim.nb_min = 1;
+	dev_info->tx_desc_lim.nb_max = 4096;
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return;
@@ -777,5 +780,5 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
 		goto failed;
 
-	err = hn_tx_pool_init(eth_dev);
+	err = hn_chim_init(eth_dev);
 	if (err)
 		goto failed;
@@ -813,5 +816,5 @@ failed:
 	PMD_INIT_LOG(NOTICE, "device init failed");
 
-	hn_tx_pool_uninit(eth_dev);
+	hn_chim_uninit(eth_dev);
 	hn_detach(hv);
 	return err;
@@ -836,5 +839,5 @@ eth_hn_dev_uninit(struct rte_eth_dev *eth_dev)
 	hn_detach(hv);
 
-	hn_tx_pool_uninit(eth_dev);
+	hn_chim_uninit(eth_dev);
 	rte_vmbus_chan_close(hv->primary->chan);
 	rte_free(hv->primary);
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 5ffc0ee145..06408250a8 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -19,4 +19,5 @@
 #include 
 #include 
+#include 
 #include 
 #include 
@@ -84,5 +85,5 @@ struct hn_txdesc {
 
 	uint16_t queue_id;
-	uint16_t chim_index;
+	uint32_t chim_index;
 	uint32_t chim_size;
 	uint32_t data_size;
@@ -99,9 +100,11 @@ struct hn_txdesc {
 	 RNDIS_PKTINFO_SIZE(NDIS_TXCSUM_INFO_SIZE))
 
+#define HN_RNDIS_PKT_ALIGNED	RTE_ALIGN(HN_RNDIS_PKT_LEN, RTE_CACHE_LINE_SIZE)
+
 /* Minimum space required for a packet */
 #define HN_PKTSIZE_MIN(align) \
 	RTE_ALIGN(ETHER_MIN_LEN + HN_RNDIS_PKT_LEN, align)
 
-#define DEFAULT_TX_FREE_THRESH 32U
+#define DEFAULT_TX_FREE_THRESH 32
 
 static void
@@ -151,61 +154,75 @@ static void
 hn_txd_init(struct rte_mempool *mp __rte_unused,
 	    void *opaque, void *obj, unsigned int idx)
 {
+	struct hn_tx_queue *txq = opaque;
 	struct hn_txdesc *txd = obj;
-	struct rte_eth_dev *dev = opaque;
-	struct rndis_packet_msg *pkt;
 
 	memset(txd, 0, sizeof(*txd));
-	txd->chim_index = idx;
 
-	pkt = rte_malloc_socket("RNDIS_TX", HN_RNDIS_PKT_LEN,
-				rte_align32pow2(HN_RNDIS_PKT_LEN),
-				dev->device->numa_node);
-	if (!pkt)
-		rte_exit(EXIT_FAILURE, "can not allocate RNDIS header");
-
-	txd->rndis_pkt = pkt;
+	txd->queue_id = txq->queue_id;
+	txd->chim_index = NVS_CHIM_IDX_INVALID;
+	txd->rndis_pkt = (struct rndis_packet_msg *)(char *)txq->tx_rndis +
+		idx * HN_RNDIS_PKT_ALIGNED;
 }
 
-/*
- * Unlike Linux and FreeBSD, this driver uses a mempool
- * to limit outstanding transmits and reserve buffers
- */
 int
-hn_tx_pool_init(struct rte_eth_dev *dev)
+hn_chim_init(struct rte_eth_dev *dev)
 {
 	struct hn_data *hv = dev->data->dev_private;
-	char name[RTE_MEMPOOL_NAMESIZE];
-	struct rte_mempool *mp;
+	uint32_t i, chim_bmp_size;
 
-	snprintf(name, sizeof(name),
-		 "hn_txd_%u", dev->data->port_id);
-
-	PMD_INIT_LOG(DEBUG, "create a TX send pool %s n=%u size=%zu socket=%d",
-		     name, hv->chim_cnt, sizeof(struct hn_txdesc),
-		     dev->device->numa_node);
+	rte_spinlock_init(&hv->chim_lock);
+	chim_bmp_size = rte_bitmap_get_memory_footprint(hv->chim_cnt);
+	hv->chim_bmem = rte_zmalloc("hn_chim_bitmap", chim_bmp_size,
+				    RTE_CACHE_LINE_SIZE);
+	if (hv->chim_bmem == NULL) {
+		PMD_INIT_LOG(ERR, "failed to allocate bitmap size %u",
+			     chim_bmp_size);
+		return -1;
+	}
 
-	mp = rte_mempool_create(name, hv->chim_cnt, sizeof(struct hn_txdesc),
-				HN_TXD_CACHE_SIZE, 0,
-				NULL, NULL,
-				hn_txd_init, dev,
-				dev->device->numa_node, 0);
-	if (!mp) {
-		PMD_DRV_LOG(ERR,
-			    "mempool %s create failed: %d", name, rte_errno);
-		return -rte_errno;
+	hv->chim_bmap = rte_bitmap_init(hv->chim_cnt,
+					hv->chim_bmem, chim_bmp_size);
+	if (hv->chim_bmap == NULL) {
+		PMD_INIT_LOG(ERR, "failed to init chim bitmap");
+		return -1;
 	}
 
-	hv->tx_pool = mp;
+	for (i = 0; i < hv->chim_cnt; i++)
+		rte_bitmap_set(hv->chim_bmap, i);
+
 	return 0;
 }
 
 void
-hn_tx_pool_uninit(struct rte_eth_dev *dev)
+hn_chim_uninit(struct rte_eth_dev *dev)
 {
 	struct hn_data *hv = dev->data->dev_private;
 
-	if (hv->tx_pool) {
-		rte_mempool_free(hv->tx_pool);
-		hv->tx_pool = NULL;
+	rte_bitmap_free(hv->chim_bmap);
+	rte_free(hv->chim_bmem);
+	hv->chim_bmem = NULL;
+}
+
+static uint32_t hn_chim_alloc(struct hn_data *hv)
+{
+	uint32_t index = NVS_CHIM_IDX_INVALID;
+	uint64_t slab;
+
+	rte_spinlock_lock(&hv->chim_lock);
+	if (rte_bitmap_scan(hv->chim_bmap, &index, &slab))
+		rte_bitmap_clear(hv->chim_bmap, index);
+	rte_spinlock_unlock(&hv->chim_lock);
+
+	return index;
+}
+
+static void hn_chim_free(struct hn_data *hv, uint32_t chim_idx)
+{
+	if (chim_idx >= hv->chim_cnt) {
+		PMD_DRV_LOG(ERR, "Invalid chimney index %u", chim_idx);
+	} else {
+		rte_spinlock_lock(&hv->chim_lock);
+		rte_bitmap_set(hv->chim_bmap, chim_idx);
+		rte_spinlock_unlock(&hv->chim_lock);
 	}
 }
@@ -221,5 +238,5 @@ static void hn_reset_txagg(struct hn_tx_queue *txq)
 int
 hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
-		      uint16_t queue_idx, uint16_t nb_desc __rte_unused,
+		      uint16_t queue_idx, uint16_t nb_desc,
 		      unsigned int socket_id,
 		      const struct rte_eth_txconf *tx_conf)
@@ -228,6 +245,7 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	struct hn_data *hv = dev->data->dev_private;
 	struct hn_tx_queue *txq;
+	char name[RTE_MEMPOOL_NAMESIZE];
 	uint32_t tx_free_thresh;
-	int err;
+	int err = -ENOMEM;
 
 	PMD_INIT_FUNC_TRACE();
@@ -245,12 +263,40 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	tx_free_thresh = tx_conf->tx_free_thresh;
 	if (tx_free_thresh == 0)
-		tx_free_thresh = RTE_MIN(hv->chim_cnt / 4,
+		tx_free_thresh = RTE_MIN(nb_desc / 4,
 					 DEFAULT_TX_FREE_THRESH);
 
-	if (tx_free_thresh >= hv->chim_cnt - 3)
-		tx_free_thresh = hv->chim_cnt - 3;
+	if (tx_free_thresh + 3 >= nb_desc) {
+		PMD_INIT_LOG(ERR,
+			     "tx_free_thresh must be less than the number of TX entries minus 3(%u)."
+			     " (tx_free_thresh=%u port=%u queue=%u)\n",
+			     nb_desc - 3,
+			     tx_free_thresh, dev->data->port_id, queue_idx);
+		return -EINVAL;
+	}
 
 	txq->free_thresh = tx_free_thresh;
 
+	snprintf(name, sizeof(name),
+		 "hn_txd_%u_%u", dev->data->port_id, queue_idx);
+
+	PMD_INIT_LOG(DEBUG, "TX descriptor pool %s n=%u size=%zu",
+		     name, nb_desc, sizeof(struct hn_txdesc));
+
+	txq->tx_rndis = rte_calloc("hn_txq_rndis", nb_desc,
+				   HN_RNDIS_PKT_ALIGNED, RTE_CACHE_LINE_SIZE);
+	if (txq->tx_rndis == NULL)
+		goto error;
+
+	txq->txdesc_pool = rte_mempool_create(name, nb_desc,
+					      sizeof(struct hn_txdesc),
+					      0, 0, NULL, NULL,
+					      hn_txd_init, txq,
+					      dev->device->numa_node, 0);
+	if (txq->txdesc_pool == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "mempool %s create failed: %d", name, rte_errno);
+		goto error;
+	}
+
 	txq->agg_szmax  = RTE_MIN(hv->chim_szmax, hv->rndis_agg_size);
 	txq->agg_pktmax = hv->rndis_agg_pkts;
@@ -261,11 +307,39 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	err = hn_vf_tx_queue_setup(dev, queue_idx, nb_desc,
 				   socket_id, tx_conf);
-	if (err) {
-		rte_free(txq);
-		return err;
+	if (err == 0) {
+		dev->data->tx_queues[queue_idx] = txq;
+		return 0;
 	}
 
-	dev->data->tx_queues[queue_idx] = txq;
-	return 0;
+error:
+	if (txq->txdesc_pool)
+		rte_mempool_free(txq->txdesc_pool);
+	rte_free(txq->tx_rndis);
+	rte_free(txq);
+	return err;
+}
+
+
+static struct hn_txdesc *hn_txd_get(struct hn_tx_queue *txq)
+{
+	struct hn_txdesc *txd;
+
+	if (rte_mempool_get(txq->txdesc_pool, (void **)&txd)) {
+		++txq->stats.ring_full;
+		PMD_TX_LOG(DEBUG, "tx pool exhausted!");
+		return NULL;
+	}
+
+	txd->m = NULL;
+	txd->packets = 0;
+	txd->data_size = 0;
+	txd->chim_size = 0;
+
+	return txd;
+}
+
+static void hn_txd_put(struct hn_tx_queue *txq, struct hn_txdesc *txd)
+{
+	rte_mempool_put(txq->txdesc_pool, txd);
 }
 
@@ -274,5 +348,4 @@ hn_dev_tx_queue_release(void *arg)
 {
 	struct hn_tx_queue *txq = arg;
-	struct hn_txdesc *txd;
 
 	PMD_INIT_FUNC_TRACE();
@@ -281,9 +354,8 @@ hn_dev_tx_queue_release(void *arg)
 		return;
 
-	/* If any pending data is still present just drop it */
-	txd = txq->agg_txd;
-	if (txd)
-		rte_mempool_put(txq->hv->tx_pool, txd);
+	if (txq->txdesc_pool)
+		rte_mempool_free(txq->txdesc_pool);
 
+	rte_free(txq->tx_rndis);
 	rte_free(txq);
 }
@@ -293,4 +365,5 @@ hn_nvs_send_completed(struct rte_eth_dev *dev, uint16_t queue_id,
 		      unsigned long xactid, const struct hn_nvs_rndis_ack *ack)
 {
+	struct hn_data *hv = dev->data->dev_private;
 	struct hn_txdesc *txd = (struct hn_txdesc *)xactid;
 	struct hn_tx_queue *txq;
@@ -313,7 +386,9 @@ hn_nvs_send_completed(struct rte_eth_dev *dev, uint16_t queue_id,
 	}
 
+	if (txd->chim_index != NVS_CHIM_IDX_INVALID)
+		hn_chim_free(hv, txd->chim_index);
+
 	rte_pktmbuf_free(txd->m);
-
-	rte_mempool_put(txq->hv->tx_pool, txd);
+	hn_txd_put(txq, txd);
 }
 
@@ -1021,26 +1096,13 @@ static int hn_flush_txagg(struct hn_tx_queue *txq, bool *need_sig)
 }
 
-static struct hn_txdesc *hn_new_txd(struct hn_data *hv,
-				    struct hn_tx_queue *txq)
-{
-	struct hn_txdesc *txd;
-
-	if (rte_mempool_get(hv->tx_pool, (void **)&txd)) {
-		++txq->stats.ring_full;
-		PMD_TX_LOG(DEBUG, "tx pool exhausted!");
-		return NULL;
-	}
-
-	txd->m = NULL;
-	txd->queue_id = txq->queue_id;
-	txd->packets = 0;
-	txd->data_size = 0;
-	txd->chim_size = 0;
-
-	return txd;
-}
-
+/*
+ * Try and find a place in a send chimney buffer to put
+ * the small packet. If space is available, this routine
+ * returns a pointer of where to place the data.
+ * If no space, caller should try direct transmit.
+ */
 static void *
-hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize)
+hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq,
+	     struct hn_txdesc *txd, uint32_t pktsize)
 {
 	struct hn_txdesc *agg_txd = txq->agg_txd;
@@ -1070,5 +1132,5 @@ hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize)
 		chim = (uint8_t *)pkt + pkt->len;
-
+		txq->agg_prevpkt = chim;
 		txq->agg_pktleft--;
 		txq->agg_szleft -= pktsize;
 
@@ -1080,16 +1142,19 @@ hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize)
 			txq->agg_pktleft = 0;
 		}
-	} else {
-		agg_txd = hn_new_txd(hv, txq);
-		if (!agg_txd)
-			return NULL;
-
-		chim = (uint8_t *)hv->chim_res->addr
-			+ agg_txd->chim_index * hv->chim_szmax;
-
-		txq->agg_txd = agg_txd;
-		txq->agg_pktleft = txq->agg_pktmax - 1;
-		txq->agg_szleft = txq->agg_szmax - pktsize;
+		hn_txd_put(txq, txd);
+		return chim;
 	}
+
+	txd->chim_index = hn_chim_alloc(hv);
+	if (txd->chim_index == NVS_CHIM_IDX_INVALID)
+		return NULL;
+
+	chim = (uint8_t *)hv->chim_res->addr +
+		txd->chim_index * hv->chim_szmax;
+
+	txq->agg_txd = txd;
+	txq->agg_pktleft = txq->agg_pktmax - 1;
+	txq->agg_szleft = txq->agg_szmax - pktsize;
 
 	txq->agg_prevpkt = chim;
@@ -1314,5 +1379,5 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	}
 
-	if (rte_mempool_avail_count(hv->tx_pool) <= txq->free_thresh)
+	if (rte_mempool_avail_count(txq->txdesc_pool) <= txq->free_thresh)
 		hn_process_events(hv, txq->queue_id, 0);
 
@@ -1321,4 +1386,9 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		uint32_t pkt_size = m->pkt_len + HN_RNDIS_PKT_LEN;
 		struct rndis_packet_msg *pkt;
+		struct hn_txdesc *txd;
+
+		txd = hn_txd_get(txq);
+		if (txd == NULL)
+			break;
 
 		/* For small packets aggregate them in chimney buffer */
@@ -1331,5 +1401,6 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 
-			pkt = hn_try_txagg(hv, txq, pkt_size);
+
+			pkt = hn_try_txagg(hv, txq, txd, pkt_size);
 			if (unlikely(!pkt))
 				break;
@@ -1345,19 +1416,11 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				goto fail;
 		} else {
-			struct hn_txdesc *txd;
-
-			/* can send chimney data and large packet at once */
-			txd = txq->agg_txd;
-			if (txd) {
-				hn_reset_txagg(txq);
-			} else {
-				txd = hn_new_txd(hv, txq);
-				if (unlikely(!txd))
-					break;
-			}
+			/* Send any outstanding packets in buffer */
+			if (txq->agg_txd && hn_flush_txagg(txq, &need_sig))
+				goto fail;
 
 			pkt = txd->rndis_pkt;
 			txd->m = m;
-			txd->data_size += m->pkt_len;
+			txd->data_size = m->pkt_len;
 			++txd->packets;
 
@@ -1368,5 +1431,5 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				PMD_TX_LOG(NOTICE, "sg send failed: %d", ret);
 				++txq->stats.errors;
-				rte_mempool_put(hv->tx_pool, txd);
+				hn_txd_put(txq, txd);
 				goto fail;
 			}
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index d10e164e68..ed0387cd4a 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -53,4 +53,6 @@ struct hn_tx_queue {
 	uint16_t	queue_id;
 	uint32_t	free_thresh;
+	struct rte_mempool *txdesc_pool;
+	void		*tx_rndis;
 
 	/* Applied packet transmission aggregation limits. */
@@ -115,6 +117,8 @@ struct hn_data {
 	uint64_t	rss_offloads;
 
+	rte_spinlock_t	chim_lock;
 	struct rte_mem_resource *chim_res;	/* UIO resource for Tx */
-	struct rte_mempool *tx_pool;		/* Tx descriptors */
+	struct rte_bitmap *chim_bmap;		/* Send buffer map */
+	void		*chim_bmem;
 	uint32_t	chim_szmax;		/* Max size per buffer */
 	uint32_t	chim_cnt;		/* Max packets per buffer */
@@ -153,6 +157,6 @@ uint16_t hn_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		      uint16_t nb_pkts);
 
-int	hn_tx_pool_init(struct rte_eth_dev *dev);
-void	hn_tx_pool_uninit(struct rte_eth_dev *dev);
+int	hn_chim_init(struct rte_eth_dev *dev);
+void	hn_chim_uninit(struct rte_eth_dev *dev);
 int	hn_dev_link_update(struct rte_eth_dev *dev, int wait);
 int	hn_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-- 
2.21.3

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2020-05-28 17:13:02.016680288 +0100
+++ 0056-net-netvsc-split-send-buffers-from-Tx-descriptors.patch	2020-05-28 17:12:59.130555744 +0100
@@ -1 +1 @@
-From cc0251813277fcf43b930b43ab4a423ed7536120 Mon Sep 17 00:00:00 2001
+From 570ceb4cce401850005b7590a4e669533da73173 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit cc0251813277fcf43b930b43ab4a423ed7536120 ]
+
@@ -16 +17,0 @@
-Cc: stable@dpdk.org
@@ -26 +27 @@
-index 564620748d..ac66108380 100644
+index 04efd092ec..d452bb4f7e 100644
@@ -29 +30 @@
-@@ -258,4 +258,7 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
+@@ -241,4 +241,7 @@ static void hn_dev_info_get(struct rte_eth_dev *dev,
@@ -36,2 +37,2 @@
- 	return 0;
-@@ -983,5 +986,5 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
+ 	return;
+@@ -777,5 +780,5 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
@@ -44 +45 @@
-@@ -1019,5 +1022,5 @@ failed:
+@@ -813,5 +816,5 @@ failed:
@@ -51 +52 @@
-@@ -1043,5 +1046,5 @@ eth_hn_dev_uninit(struct rte_eth_dev *eth_dev)
+@@ -836,5 +839,5 @@ eth_hn_dev_uninit(struct rte_eth_dev *eth_dev)
@@ -59 +60 @@
-index 7212780c15..32c03e3da0 100644
+index 5ffc0ee145..06408250a8 100644
@@ -82 +83 @@
- 	RTE_ALIGN(RTE_ETHER_MIN_LEN + HN_RNDIS_PKT_LEN, align)
+ 	RTE_ALIGN(ETHER_MIN_LEN + HN_RNDIS_PKT_LEN, align)
@@ -343 +344 @@
-@@ -1037,26 +1112,13 @@ static int hn_flush_txagg(struct hn_tx_queue *txq, bool *need_sig)
+@@ -1021,26 +1096,13 @@ static int hn_flush_txagg(struct hn_tx_queue *txq, bool *need_sig)
@@ -378 +379 @@
-@@ -1086,5 +1148,5 @@ hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize)
+@@ -1070,5 +1132,5 @@ hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize)
@@ -385 +386 @@
-@@ -1096,16 +1158,19 @@ hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize)
+@@ -1080,16 +1142,19 @@ hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize)
@@ -415 +416 @@
-@@ -1330,5 +1395,5 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -1314,5 +1379,5 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -422 +423 @@
-@@ -1337,4 +1402,9 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -1321,4 +1386,9 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -432 +433 @@
-@@ -1347,5 +1417,6 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -1331,5 +1401,6 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -440 +441 @@
-@@ -1361,19 +1432,11 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -1345,19 +1416,11 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -464 +465 @@
-@@ -1384,5 +1447,5 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -1368,5 +1431,5 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -472 +473 @@
-index 05bc492511..822d737bd3 100644
+index d10e164e68..ed0387cd4a 100644
@@ -482 +483 @@
-@@ -116,6 +118,8 @@ struct hn_data {
+@@ -115,6 +117,8 @@ struct hn_data {
@@ -492 +493 @@
-@@ -158,6 +162,6 @@ uint16_t hn_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+@@ -153,6 +157,6 @@ uint16_t hn_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
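
For readers skimming the patch, the heart of the change is how the transmit
path now claims and returns send-buffer (chimney) slots: hn_chim_alloc()
scans a bitmap under a spinlock to claim a free slot, and
hn_nvs_send_completed() returns the slot via hn_chim_free() once the host
acknowledges the send. The standalone sketch below models only that
allocation pattern; it is illustrative, not driver code. A single 64-bit
mask and a pthread mutex stand in for the driver's rte_bitmap and
rte_spinlock, and all names in it (chim_alloc, chim_free, CHIM_CNT) are
made up for the example.

    #include <stdint.h>
    #include <stdio.h>
    #include <pthread.h>

    #define CHIM_CNT    64           /* slots in the per-device send buffer */
    #define IDX_INVALID UINT32_MAX   /* plays the role of NVS_CHIM_IDX_INVALID */

    static uint64_t bmap = ~0ULL;    /* bit set = slot free */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Claim a free slot, or return IDX_INVALID if all are in use.
     * In the patch this failure is not fatal: the packet is sent
     * directly instead of being aggregated into the send buffer. */
    static uint32_t chim_alloc(void)
    {
        uint32_t idx = IDX_INVALID;

        pthread_mutex_lock(&lock);
        if (bmap != 0) {
            idx = (uint32_t)__builtin_ctzll(bmap); /* lowest free slot */
            bmap &= ~(1ULL << idx);                /* mark it in use */
        }
        pthread_mutex_unlock(&lock);
        return idx;
    }

    /* Return a slot when the host acknowledges the send. */
    static void chim_free(uint32_t idx)
    {
        if (idx >= CHIM_CNT) {
            fprintf(stderr, "invalid chimney index %u\n", idx);
            return;
        }
        pthread_mutex_lock(&lock);
        bmap |= 1ULL << idx;
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        uint32_t a = chim_alloc();
        uint32_t b = chim_alloc();

        printf("claimed slots %u and %u\n", a, b);
        chim_free(b);
        chim_free(a);
        return 0;
    }

A failed allocation being non-fatal is the point of the commit message:
with descriptors pooled per queue and send-buffer slots shared per device,
a queue that cannot get a slot loses aggregation but can still drain.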