From: Mingjin Ye <mingjinx.ye@intel.com>
To: dev@dpdk.org
Cc: qiming.yang@intel.com, yidingx.zhou@intel.com, Mingjin Ye, Jingjing Wu,
 Beilei Xing, Bruce Richardson, Konstantin Ananyev
Subject: [POC] net/iavf: support no data path polling mode
Date: Mon, 17 Jul 2023 09:36:51 +0000
Message-Id: <20230717093651.338017-1-mingjinx.ye@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230420061656.140315-1-mingjinx.ye@intel.com>
References: <20230420061656.140315-1-mingjinx.ye@intel.com>

Introduce a new devarg, "no-poll-on-link-down", in the iavf PMD. When this
flag is set and the link goes down, the PMD switches to no-poll mode: the
rx/tx burst functions return 0 immediately without touching the hardware
queues. Once the link comes back up, the PMD resumes normal rx/tx burst
operation.
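Usage example (the PCI address below is only illustrative; any VF bound to
the iavf PMD can be used):

    dpdk-testpmd -a 0000:18:01.0,no-poll-on-link-down=1 -- -i

Note that only the presence of the key is checked (via rte_kvargs_count()),
so any value enables the mode.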
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 drivers/net/iavf/iavf.h                 |  2 ++
 drivers/net/iavf/iavf_ethdev.c          | 10 ++++++
 drivers/net/iavf/iavf_rxtx.c            | 29 +++++++++++++++--
 drivers/net/iavf/iavf_rxtx.h            |  1 +
 drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 29 ++++++++++++++---
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 42 ++++++++++++++++++++++---
 drivers/net/iavf/iavf_rxtx_vec_sse.c    | 21 ++++++++++++-
 drivers/net/iavf/iavf_vchnl.c           | 19 +++++++++++
 8 files changed, 141 insertions(+), 12 deletions(-)

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 98861e4242..30b05d25b6 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -305,6 +305,7 @@ struct iavf_devargs {
 	uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
 	uint16_t quanta_size;
 	uint32_t watchdog_period;
+	uint16_t no_poll_on_link_down;
 };
 
 struct iavf_security_ctx;
@@ -323,6 +324,7 @@ struct iavf_adapter {
 	uint32_t ptype_tbl[IAVF_MAX_PKT_TYPE] __rte_cache_min_aligned;
 	bool stopped;
 	bool closed;
+	bool no_poll;
 	uint16_t fdir_ref_cnt;
 	struct iavf_devargs devargs;
 };
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index ac7154d720..41a8947f61 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -37,6 +37,7 @@
 #define IAVF_PROTO_XTR_ARG "proto_xtr"
 #define IAVF_QUANTA_SIZE_ARG "quanta_size"
 #define IAVF_RESET_WATCHDOG_ARG "watchdog_period"
+#define IAVF_NO_POLL_ON_LINK_DOWN_ARG "no-poll-on-link-down"
 
 uint64_t iavf_timestamp_dynflag;
 int iavf_timestamp_dynfield_offset = -1;
@@ -2237,6 +2238,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
 	struct rte_kvargs *kvlist;
 	int ret;
 	int watchdog_period = -1;
+	uint16_t no_poll_on_link_down;
 
 	if (!devargs)
 		return 0;
@@ -2270,6 +2272,14 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
 	else
 		ad->devargs.watchdog_period = watchdog_period;
 
+	no_poll_on_link_down = rte_kvargs_count(kvlist,
+		IAVF_NO_POLL_ON_LINK_DOWN_ARG);
+
+	if (no_poll_on_link_down == 0)
+		ad->devargs.no_poll_on_link_down = 0;
+	else
+		ad->devargs.no_poll_on_link_down = 1;
+
 	if (ad->devargs.quanta_size != 0 &&
 	    (ad->devargs.quanta_size < 256 || ad->devargs.quanta_size > 4096 ||
 	    ad->devargs.quanta_size & 0x40)) {
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f7df4665d1..447e306fee 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -770,6 +770,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct iavf_info *vf =
 		IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_vsi *vsi = &vf->vsi;
 	struct iavf_tx_queue *txq;
 	const struct rte_memzone *mz;
 	uint32_t ring_size;
@@ -843,6 +844,7 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->port_id = dev->data->port_id;
 	txq->offloads = offloads;
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+	txq->vsi = vsi;
 
 	if (iavf_ipsec_crypto_supported(adapter))
 		txq->ipsec_crypto_pkt_md_offset =
@@ -1406,9 +1408,12 @@ iavf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint64_t pkt_flags;
 	const uint32_t *ptype_tbl;
 
+	rxq = rx_queue;
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	nb_rx = 0;
 	nb_hold = 0;
-	rxq = rx_queue;
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
 	ptype_tbl = rxq->vsi->adapter->ptype_tbl;
@@ -1515,9 +1520,12 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
 	const uint32_t *ptype_tbl;
 	uint64_t ts_ns;
 
+	rxq = rx_queue;
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	nb_rx = 0;
 	nb_hold = 0;
-	rxq = rx_queue;
 	rx_id = rxq->rx_tail;
 	rx_ring = rxq->rx_ring;
 	ptype_tbl = rxq->vsi->adapter->ptype_tbl;
@@ -1641,6 +1649,9 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
 	volatile union iavf_rx_flex_desc *rxdp;
 	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		uint64_t sw_cur_time = rte_get_timer_cycles() /
 			(rte_get_timer_hz() / 1000);
@@ -1818,6 +1829,9 @@ iavf_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	volatile union iavf_rx_desc *rxdp;
 	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	while (nb_rx < nb_pkts) {
 		rxdp = &rx_ring[rx_id];
 		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
@@ -1973,6 +1987,9 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq,
 	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
 	uint64_t ts_ns;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	rxdp = (volatile union iavf_rx_flex_desc *)&rxq->rx_ring[rxq->rx_tail];
 	rxep = &rxq->sw_ring[rxq->rx_tail];
 
@@ -2104,6 +2121,9 @@ iavf_rx_scan_hw_ring(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts, uint1
 	uint64_t pkt_flags;
 	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	rxdp = &rxq->rx_ring[rxq->rx_tail];
 	rxep = &rxq->sw_ring[rxq->rx_tail];
 
@@ -2281,6 +2301,9 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	struct iavf_rx_queue *rxq = (struct iavf_rx_queue *)rx_queue;
 	uint16_t nb_rx = 0;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	if (!nb_pkts)
 		return 0;
 
@@ -2768,6 +2791,8 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t idx;
 	uint16_t slen;
 
+	if (!txq->vsi || txq->vsi->adapter->no_poll)
+		return 0;
 	/* Check if the descriptor ring needs to be cleaned. */
 	if (txq->nb_free < txq->free_thresh)
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 605ea3f824..d3324e0e6e 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -288,6 +288,7 @@ struct iavf_tx_queue {
 	uint16_t free_thresh;
 	uint16_t rs_thresh;
 	uint8_t rel_mbufs_type;
+	struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
 
 	uint16_t port_id;
 	uint16_t queue_id;
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index c10f24036e..98316bed24 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -26,8 +26,7 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 {
 #define IAVF_DESCS_PER_LOOP_AVX 8
 
-	/* const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl; */
-	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
+	const uint32_t *type_table;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 			rxq->mbuf_initializer);
@@ -36,6 +35,11 @@ _iavf_recv_raw_pkts_vec_avx2(struct iavf_rx_queue *rxq,
 	volatile union iavf_rx_desc *rxdp = rxq->rx_ring + rxq->rx_tail;
 	const int avx_aligned = ((rxq->rx_tail & 1) == 0);
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
+	type_table = rxq->vsi->adapter->ptype_tbl;
+
 	rte_prefetch0(rxdp);
 
 	/* nb_pkts has to be floor-aligned to IAVF_DESCS_PER_LOOP_AVX */
@@ -530,12 +534,12 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 {
 #define IAVF_DESCS_PER_LOOP_AVX 8
 
-	struct iavf_adapter *adapter = rxq->vsi->adapter;
+	struct iavf_adapter *adapter;
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
-	uint64_t offloads = adapter->dev_data->dev_conf.rxmode.offloads;
+	uint64_t offloads;
 #endif
-	const uint32_t *type_table = adapter->ptype_tbl;
+	const uint32_t *type_table;
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 			rxq->mbuf_initializer);
@@ -543,6 +547,15 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 	volatile union iavf_rx_flex_desc *rxdp =
 		(union iavf_rx_flex_desc *)rxq->rx_ring + rxq->rx_tail;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
+	adapter = rxq->vsi->adapter;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+	offloads = adapter->dev_data->dev_conf.rxmode.offloads;
+#endif
+	type_table = adapter->ptype_tbl;
+
 	rte_prefetch0(rxdp);
 
 	/* nb_pkts has to be floor-aligned to IAVF_DESCS_PER_LOOP_AVX */
@@ -1774,6 +1787,9 @@ iavf_xmit_fixed_burst_vec_avx2(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
 	uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
 
+	if (!txq->vsi || txq->vsi->adapter->no_poll)
+		return 0;
+
 	if (txq->nb_free < txq->free_thresh)
 		iavf_tx_free_bufs(txq);
 
@@ -1834,6 +1850,9 @@ iavf_xmit_pkts_vec_avx2_common(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_tx = 0;
 	struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
 
+	if (!txq->vsi || txq->vsi->adapter->no_poll)
+		return 0;
+
 	while (nb_pkts) {
 		uint16_t ret, num;
 
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 3e66df5341..8de739434c 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -45,7 +45,7 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 			       bool offload)
 {
 #ifdef IAVF_RX_PTYPE_OFFLOAD
-	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
+	const uint32_t *type_table;
 #endif
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
@@ -53,6 +53,13 @@ _iavf_recv_raw_pkts_vec_avx512(struct iavf_rx_queue *rxq,
 	struct rte_mbuf **sw_ring = &rxq->sw_ring[rxq->rx_tail];
 	volatile union iavf_rx_desc *rxdp = rxq->rx_ring + rxq->rx_tail;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
+#ifdef IAVF_RX_PTYPE_OFFLOAD
+	type_table = rxq->vsi->adapter->ptype_tbl;
+#endif
+
 	rte_prefetch0(rxdp);
 
 	/* nb_pkts has to be floor-aligned to IAVF_DESCS_PER_LOOP_AVX */
@@ -588,12 +595,12 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 					uint8_t *split_packet,
 					bool offload)
 {
-	struct iavf_adapter *adapter = rxq->vsi->adapter;
+	struct iavf_adapter *adapter;
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
-	uint64_t offloads = adapter->dev_data->dev_conf.rxmode.offloads;
+	uint64_t offloads;
 #endif
 #ifdef IAVF_RX_PTYPE_OFFLOAD
-	const uint32_t *type_table = adapter->ptype_tbl;
+	const uint32_t *type_table;
 #endif
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
@@ -602,6 +609,17 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 	volatile union iavf_rx_flex_desc *rxdp =
 		(union iavf_rx_flex_desc *)rxq->rx_ring + rxq->rx_tail;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
+	adapter = rxq->vsi->adapter;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+	offloads = adapter->dev_data->dev_conf.rxmode.offloads;
+#endif
+#ifdef IAVF_RX_PTYPE_OFFLOAD
+	type_table = adapter->ptype_tbl;
+#endif
+
 	rte_prefetch0(rxdp);
 
 	/* nb_pkts has to be floor-aligned to IAVF_DESCS_PER_LOOP_AVX */
@@ -1700,6 +1718,10 @@ iavf_recv_scattered_pkts_vec_avx512_cmn(void *rx_queue, struct rte_mbuf **rx_pkt
 					uint16_t nb_pkts, bool offload)
 {
 	uint16_t retval = 0;
+	struct iavf_rx_queue *rxq = rx_queue;
+
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
 
 	while (nb_pkts > IAVF_VPMD_RX_MAX_BURST) {
 		uint16_t burst = iavf_recv_scattered_burst_vec_avx512(rx_queue,
@@ -2303,6 +2325,9 @@ iavf_xmit_fixed_burst_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
 	uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
 
+	if (!txq->vsi || txq->vsi->adapter->no_poll)
+		return 0;
+
 	if (txq->nb_free < txq->free_thresh)
 		iavf_tx_free_bufs_avx512(txq);
 
@@ -2370,6 +2395,9 @@ iavf_xmit_fixed_burst_vec_avx512_ctx(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64_t flags = IAVF_TX_DESC_CMD_EOP | IAVF_TX_DESC_CMD_ICRC;
 	uint64_t rs = IAVF_TX_DESC_CMD_RS | flags;
 
+	if (!txq->vsi || txq->vsi->adapter->no_poll)
+		return 0;
+
 	if (txq->nb_free < txq->free_thresh)
 		iavf_tx_free_bufs_avx512(txq);
 
@@ -2432,6 +2460,9 @@ iavf_xmit_pkts_vec_avx512_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_tx = 0;
 	struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
 
+	if (!txq->vsi || txq->vsi->adapter->no_poll)
+		return 0;
+
 	while (nb_pkts) {
 		uint16_t ret, num;
 
@@ -2498,6 +2529,9 @@ iavf_xmit_pkts_vec_avx512_ctx_cmn(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_tx = 0;
 	struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
 
+	if (!txq->vsi || txq->vsi->adapter->no_poll)
+		return 0;
+
 	while (nb_pkts) {
 		uint16_t ret, num;
 
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 892bfa4cf3..4007ce6f6f 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -479,7 +479,12 @@ _recv_raw_pkts_vec(struct iavf_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	int pos;
 	uint64_t var;
 	__m128i shuf_msk;
-	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	const uint32_t *ptype_tbl;
+
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
+	ptype_tbl = rxq->vsi->adapter->ptype_tbl;
 
 	__m128i crc_adjust = _mm_set_epi16(
 				0, 0, 0,       /* ignore non-length fields */
@@ -1198,6 +1203,11 @@ uint16_t
 iavf_recv_pkts_vec_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
 			    uint16_t nb_pkts)
 {
+	struct iavf_rx_queue *rxq = rx_queue;
+
+	if (!rxq->vsi)
+		return 0;
+
 	return _recv_raw_pkts_vec_flex_rxd(rx_queue, rx_pkts, nb_pkts, NULL);
 }
 
@@ -1215,6 +1225,9 @@ iavf_recv_scattered_burst_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0};
 	unsigned int i = 0;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	/* get some new buffers */
 	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,
 					      split_flags);
@@ -1284,6 +1297,9 @@ iavf_recv_scattered_burst_vec_flex_rxd(void *rx_queue,
 	uint8_t split_flags[IAVF_VPMD_RX_MAX_BURST] = {0};
 	unsigned int i = 0;
 
+	if (!rxq->vsi || rxq->vsi->adapter->no_poll)
+		return 0;
+
 	/* get some new buffers */
 	uint16_t nb_bufs = _recv_raw_pkts_vec_flex_rxd(rxq, rx_pkts, nb_pkts,
 						       split_flags);
@@ -1437,6 +1453,9 @@ iavf_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t nb_tx = 0;
 	struct iavf_tx_queue *txq = (struct iavf_tx_queue *)tx_queue;
 
+	if (!txq->vsi || txq->vsi->adapter->no_poll)
+		return 0;
+
 	while (nb_pkts) {
 		uint16_t ret, num;
 
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 524732f67d..ca2bdd5408 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -263,6 +263,15 @@ iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
 			if (!vf->link_up)
 				iavf_dev_watchdog_enable(adapter);
 		}
+		if (vf->link_up && adapter->no_poll) {
+			adapter->no_poll = false;
+			PMD_DRV_LOG(DEBUG, "IAVF no-poll turned off !!!\n");
+		}
+		if (adapter->dev_data->dev_started && !vf->link_up &&
+			adapter->devargs.no_poll_on_link_down) {
+			adapter->no_poll = true;
+			PMD_DRV_LOG(DEBUG, "IAVF no-poll turned on !!!\n");
+		}
 		PMD_DRV_LOG(INFO, "Link status update:%s",
 			vf->link_up ? "up" : "down");
 		break;
@@ -465,6 +474,16 @@ iavf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
 			if (!vf->link_up)
 				iavf_dev_watchdog_enable(adapter);
 		}
+		if (vf->link_up && adapter->no_poll) {
+			adapter->no_poll = false;
+			PMD_DRV_LOG(DEBUG, "IAVF no-poll turned off!!!\n");
+		}
+		if (dev->data->dev_started && !vf->link_up &&
+			adapter->devargs.no_poll_on_link_down) {
+			adapter->no_poll = true;
+			PMD_DRV_LOG(DEBUG, "IAVF no-poll turned on !!!\n");
+		}
+
 		iavf_dev_event_post(dev, RTE_ETH_EVENT_INTR_LSC, NULL, 0);
 		break;
 	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
-- 
2.25.1