patches for DPDK stable branches
* [dpdk-stable] [PATCH] net/iavf: fix multi-process shared data
@ 2021-09-28  3:37 dapengx.yu
  2021-09-28 11:12 ` [dpdk-stable] [dpdk-dev] " Zhang, Qi Z
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: dapengx.yu @ 2021-09-28  3:37 UTC (permalink / raw)
  To: Bruce Richardson, Konstantin Ananyev, Jingjing Wu, Beilei Xing
  Cc: dev, Dapeng Yu, stable

From: Dapeng Yu <dapengx.yu@intel.com>

When the iavf_adapter instance is not completely initialized in the
primary process and the secondary process accesses its "rte_eth_dev"
member, the secondary process crashes.
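
To illustrate the failure mode, here is a minimal, self-contained
sketch (not iavf code; the shared-memory name, struct names, and
field names are made up): a heap pointer written into shared memory
by one process is a foreign virtual address in every other process
that maps the same memory.

/* failure_mode.c -- illustrative sketch only, not driver code.
 * Build: gcc failure_mode.c -o demo -lrt
 * Run "./demo primary" in one terminal, keep it running, then run
 * "./demo secondary" in another; the secondary usually segfaults.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct eth_dev { unsigned long rx_offloads; };
struct adapter { struct eth_dev *eth_dev; }; /* process-local pointer */

int main(int argc, char **argv)
{
	int fd = shm_open("/adapter_demo", O_CREAT | O_RDWR, 0600);
	ftruncate(fd, sizeof(struct adapter));
	struct adapter *ad = mmap(NULL, sizeof(*ad),
				  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (argc > 1 && strcmp(argv[1], "primary") == 0) {
		/* "Primary": store a heap pointer that is meaningful
		 * only in this process's address space. */
		struct eth_dev *dev = malloc(sizeof(*dev));
		dev->rx_offloads = 1;
		ad->eth_dev = dev;
		pause();
	} else {
		/* "Secondary": dereferencing the primary's pointer is
		 * undefined behavior and usually crashes. */
		printf("%lu\n", ad->eth_dev->rx_offloads);
	}
	return 0;
}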

This patch replaces adapter->eth_dev with rte_eth_devices[port_id] in
the data paths where the rte_eth_dev instance is accessed.

Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex descriptor")
Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
---
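In short, the change swaps a pointer that is only valid in the
primary process for a per-process array lookup. The hypothetical
helper below (iavf_rx_offloads is not a real driver function) is
assembled from this patch's own hunks to show the pattern:

/* rte_eth_devices[] is per-process ethdev state, so indexing it by
 * the queue's port ID yields a pointer that is valid in whichever
 * process executes the code. */
static inline uint64_t
iavf_rx_offloads(struct iavf_rx_queue *rxq)
{
	/* Before (unsafe in a secondary process):
	 *
	 *   return rxq->vsi->adapter->eth_dev->data
	 *              ->dev_conf.rxmode.offloads;
	 */
	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];

	return dev->data->dev_conf.rxmode.offloads;
}
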
 drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 5 +++--
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 5 +++--
 drivers/net/iavf/iavf_rxtx_vec_sse.c    | 3 ++-
 3 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036..59b086ade5 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -525,6 +525,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 #define IAVF_DESCS_PER_LOOP_AVX 8
 
 	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0,
 			0, rxq->mbuf_initializer);
@@ -903,7 +904,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+		if (dev->data->dev_conf.rxmode.offloads &
 				DEV_RX_OFFLOAD_RSS_HASH ||
 				rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 			/* load bottom half of every 32B desc */
@@ -956,7 +957,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					(_mm256_castsi128_si256(raw_desc_bh0),
 					raw_desc_bh1, 1);
 
-			if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+			if (dev->data->dev_conf.rxmode.offloads &
 					DEV_RX_OFFLOAD_RSS_HASH) {
 				/**
 				 * to shift the 32b RSS hash value to the
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cd..ed64a232e7 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -713,6 +713,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 #ifdef IAVF_RX_PTYPE_OFFLOAD
 	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
 #endif
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
@@ -1137,7 +1138,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 * needs to load 2nd 16B of each desc for RSS hash parsing,
 			 * will cause performance drop to get into this context.
 			 */
-			if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+			if (dev->data->dev_conf.rxmode.offloads &
 			    DEV_RX_OFFLOAD_RSS_HASH ||
 			    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 				/* load bottom half of every 32B desc */
@@ -1190,7 +1191,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 						(_mm256_castsi128_si256(raw_desc_bh0),
 						 raw_desc_bh1, 1);
 
-				if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+				if (dev->data->dev_conf.rxmode.offloads &
 						DEV_RX_OFFLOAD_RSS_HASH) {
 					/**
 					 * to shift the 32b RSS hash value to the
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e905525..1231d0f63d 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -645,6 +645,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 	int pos;
 	uint64_t var;
 	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
 	__m128i crc_adjust = _mm_set_epi16
 				(0, 0, 0,       /* ignore non-length fields */
 				 -rxq->crc_len, /* sub crc on data_len */
@@ -817,7 +818,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+		if (dev->data->dev_conf.rxmode.offloads &
 				DEV_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
-- 
2.27.0



Thread overview: 10+ messages
2021-09-28  3:37 [dpdk-stable] [PATCH] net/iavf: fix multi-process shared data dapengx.yu
2021-09-28 11:12 ` [dpdk-stable] [dpdk-dev] " Zhang, Qi Z
2021-09-29 16:28 ` [dpdk-stable] " Ferruh Yigit
2021-09-30  9:11   ` Yu, DapengX
2021-09-30 10:57     ` Ferruh Yigit
2021-10-07  4:50       ` [dpdk-stable] [dpdk-dev] " Zhang, Qi Z
2021-10-09  3:25 ` [dpdk-stable] [PATCH v2] " dapengx.yu
2021-10-09  9:40   ` Zhang, Qi Z
2021-10-11  2:01   ` [dpdk-stable] [PATCH v3] " dapengx.yu
2021-10-11  2:57     ` Zhang, Qi Z
