patches for DPDK stable branches
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: <dapengx.yu@intel.com>,
	Bruce Richardson <bruce.richardson@intel.com>,
	Konstantin Ananyev <konstantin.ananyev@intel.com>,
	Jingjing Wu <jingjing.wu@intel.com>,
	Beilei Xing <beilei.xing@intel.com>
Cc: <dev@dpdk.org>, <stable@dpdk.org>
Subject: Re: [dpdk-stable] [PATCH] net/iavf: fix multi-process shared data
Date: Wed, 29 Sep 2021 17:28:25 +0100	[thread overview]
Message-ID: <ffe33a81-3a8c-c306-db34-909afcd8ea44@intel.com> (raw)
In-Reply-To: <20210928033753.1955674-1-dapengx.yu@intel.com>

On 9/28/2021 4:37 AM, dapengx.yu@intel.com wrote:
> From: Dapeng Yu <dapengx.yu@intel.com>
> 
> When the iavf_adapter instance is not completely initialized in the
> primary process and the secondary process accesses its "rte_eth_dev"
> member, the secondary process crashes.
> 
> This patch replaces adapter->eth_dev with rte_eth_devices[port_id] in
> the data paths where rte_eth_dev instance is accessed.
> 
> Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
> Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex descriptor")
> Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
> ---
>  drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 5 +++--
>  drivers/net/iavf/iavf_rxtx_vec_avx512.c | 5 +++--
>  drivers/net/iavf/iavf_rxtx_vec_sse.c    | 3 ++-
>  3 files changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> index 475070e036..59b086ade5 100644
> --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> @@ -525,6 +525,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
>  #define IAVF_DESCS_PER_LOOP_AVX 8
>  
>  	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
> +	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
>  

It is not a good idea to access the global variable directly from the driver.

The problem definition is correct: 'rte_eth_dev' is unique per process, so it
can't be saved to a shared struct.

But here I assume the real intention is to be able to access PMD-specific data
from the queue struct. For this, what about storing 'rte_eth_dev_data' in
'iavf_rx_queue'? That should solve the problem without accessing the global variable.
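[Editorial note: a minimal sketch of that suggestion, not the actual patch. The
'dev_data' field, the setup hook, and the RSS-hash offload check are assumptions
used only to illustrate where such a pointer could be stored and used.]

  /* iavf_rxtx.h: 'rte_eth_dev_data' lives in DPDK shared memory, so a
   * pointer to it stays valid in both primary and secondary processes.
   */
  struct iavf_rx_queue {
          /* ... existing fields ... */
          struct rte_eth_dev_data *dev_data;  /* assumed new field */
  };

  /* iavf_dev_rx_queue_setup(): record the pointer once at queue setup */
  rxq->dev_data = dev->data;

  /* vector Rx path: use the stored pointer instead of
   * &rte_eth_devices[rxq->port_id]
   */
  if (rxq->dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)
          /* ... parse the RSS hash from the flex descriptor ... */;

Since both 'iavf_rx_queue' and 'rte_eth_dev_data' are allocated from shared
memory, the stored pointer is equally valid in the primary and any secondary
process, which is exactly the property the per-process 'rte_eth_dev' lacks.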


Thread overview: 10+ messages
2021-09-28  3:37 dapengx.yu
2021-09-28 11:12 ` [dpdk-stable] [dpdk-dev] " Zhang, Qi Z
2021-09-29 16:28 ` Ferruh Yigit [this message]
2021-09-30  9:11   ` [dpdk-stable] " Yu, DapengX
2021-09-30 10:57     ` Ferruh Yigit
2021-10-07  4:50       ` [dpdk-stable] [dpdk-dev] " Zhang, Qi Z
2021-10-09  3:25 ` [dpdk-stable] [PATCH v2] " dapengx.yu
2021-10-09  9:40   ` Zhang, Qi Z
2021-10-11  2:01   ` [dpdk-stable] [PATCH v3] " dapengx.yu
2021-10-11  2:57     ` Zhang, Qi Z
