From: dapengx.yu@intel.com
To: Bruce Richardson <bruce.richardson@intel.com>,
Konstantin Ananyev <konstantin.ananyev@intel.com>,
Jingjing Wu <jingjing.wu@intel.com>,
Beilei Xing <beilei.xing@intel.com>
Cc: dev@dpdk.org, Dapeng Yu <dapengx.yu@intel.com>, stable@dpdk.org
Subject: [dpdk-dev] [PATCH] net/iavf: fix multi-process shared data
Date: Tue, 28 Sep 2021 11:37:53 +0800
Message-ID: <20210928033753.1955674-1-dapengx.yu@intel.com>
From: Dapeng Yu <dapengx.yu@intel.com>
When the iavf_adapter instance is not completely initialized in the
primary process, and a secondary process accesses its "eth_dev"
member, the secondary process crashes.

This patch replaces adapter->eth_dev with rte_eth_devices[port_id] in
the data paths where the rte_eth_dev instance is accessed.
Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex descriptor")
Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
Cc: stable@dpdk.org
Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
---
 drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 5 +++--
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 5 +++--
 drivers/net/iavf/iavf_rxtx_vec_sse.c    | 3 ++-
 3 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036..59b086ade5 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -525,6 +525,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
#define IAVF_DESCS_PER_LOOP_AVX 8
const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
+ struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
const __m256i mbuf_init = _mm256_set_epi64x(0, 0,
0, rxq->mbuf_initializer);
@@ -903,7 +904,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+ if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
@@ -956,7 +957,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
(_mm256_castsi128_si256(raw_desc_bh0),
raw_desc_bh1, 1);
- if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+ if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cd..ed64a232e7 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -713,6 +713,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
#ifdef IAVF_RX_PTYPE_OFFLOAD
const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
#endif
+ struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
rxq->mbuf_initializer);
@@ -1137,7 +1138,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+ if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_RSS_HASH ||
rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
/* load bottom half of every 32B desc */
@@ -1190,7 +1191,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
(_mm256_castsi128_si256(raw_desc_bh0),
raw_desc_bh1, 1);
- if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+ if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_RSS_HASH) {
/**
* to shift the 32b RSS hash value to the
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e905525..1231d0f63d 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -645,6 +645,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
int pos;
uint64_t var;
const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+ struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
__m128i crc_adjust = _mm_set_epi16
(0, 0, 0, /* ignore non-length fields */
-rxq->crc_len, /* sub crc on data_len */
@@ -817,7 +818,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
* needs to load 2nd 16B of each desc for RSS hash parsing,
* will cause performance drop to get into this context.
*/
- if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+ if (dev->data->dev_conf.rxmode.offloads &
DEV_RX_OFFLOAD_RSS_HASH) {
/* load bottom half of every 32B desc */
const __m128i raw_desc_bh3 =
--
2.27.0