patches for DPDK stable branches
From: Ferruh Yigit <ferruh.yigit@amd.com>
To: Kaiwen Deng <kaiwenx.deng@intel.com>, dev@dpdk.org
Cc: stable@dpdk.org, qiming.yang@intel.com, yidingx.zhou@intel.com,
	Chas Williams <chas3@att.com>,
	"Min Hu (Connor)" <humin29@huawei.com>,
	Jingjing Wu <jingjing.wu@intel.com>,
	Beilei Xing <beilei.xing@intel.com>,
	Mike Pattrick <mkp@redhat.com>, Qi Zhang <qi.z.zhang@intel.com>,
	Declan Doherty <declan.doherty@intel.com>,
	Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>,
	Dapeng Yu <dapengx.yu@intel.com>
Subject: Re: [PATCH v3] net/iavf: fix iavf query stats in intr thread
Date: Thu, 23 Mar 2023 15:39:16 +0000	[thread overview]
Message-ID: <0f1ae3b9-f378-9a75-65f7-fcd6e2630414@amd.com> (raw)
In-Reply-To: <20230322072615.179196-1-kaiwenx.deng@intel.com>

On 3/22/2023 7:26 AM, Kaiwen Deng wrote:
> When iavf sends the query-stats command from the eal-intr-thread
> through the virtual channel, no response is received from
> iavf_dev_virtchnl_handler for this command during the blocking wait,
> because iavf_dev_virtchnl_handler is also registered in the eal-intr-thread.
> 
> When the VF device is bonded in BONDING_MODE_TLB mode, the slave device
> update callback is registered as an alarm and called by the
> eal-intr-thread, which also raises the above issue.
> 
> This commit polls for the response to VIRTCHNL_OP_GET_STATS when it
> is issued from the eal-intr-thread, to fix this issue.
> 
> Fixes: 91bf37d250aa ("net/iavf: add lock for VF commands")
> Fixes: 22b123a36d07 ("net/avf: initialize PMD")
> Fixes: 7c76a747e68c ("bond: add mode 5")
> Fixes: 435d523112cc ("net/iavf: fix multi-process shared data")
> Fixes: cb5c1b91f76f ("net/iavf: add thread for event callbacks")


Hi Kaiwen,

The above commit already seems to be trying to address the same issue: it
creates the "iavf-event-thread" control thread to asynchronously handle
the interrupts in a non-interrupt context. Why is that not working?
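
To put the scenario from the commit log into code form, it boils down to
the shape below. This is only a minimal illustrative sketch with
hypothetical names and placeholder values, not the actual driver code:

#include <rte_common.h>
#include <rte_cycles.h>

#define MAX_TRY_TIMES 200	/* placeholder values for the sketch */
#define ASQ_DELAY_MS  10

/* Both functions below end up running in the eal-intr-thread. */
static volatile int cmd_done;

static void
virtchnl_handler(void *param __rte_unused)
{
	/* would read the PF reply from the admin queue, then: */
	cmd_done = 1;
}

static int
query_stats_blocking(void)
{
	int i = 0;

	cmd_done = 0;
	/* ... send VIRTCHNL_OP_GET_STATS to the PF here ... */

	/*
	 * Wait for the reply.  This loop occupies the eal-intr-thread,
	 * so virtchnl_handler() can never run, cmd_done stays 0 and the
	 * loop only ends by timing out.
	 */
	do {
		if (cmd_done)
			break;
		rte_delay_ms(ASQ_DELAY_MS);
	} while (i++ < MAX_TRY_TIMES);

	return cmd_done ? 0 : -1;
}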

Instead of adding 'rte_thread_is_intr()' checks, can't you make sure all
interrupts are handled in the control thread?
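
For reference, the usual pattern is that the interrupt callback only
queues the work and returns, while a dedicated control thread performs
any blocking virtchnl handling. A rough sketch of that hand-off, with
hypothetical names (this is not the existing iavf-event-thread
implementation):

#include <pthread.h>
#include <unistd.h>

#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_ring.h>

static struct rte_ring *event_ring;	/* eal-intr-thread -> control thread */

static void *
event_thread_main(void *arg __rte_unused)
{
	void *ev;

	for (;;) {
		if (rte_ring_dequeue(event_ring, &ev) != 0) {
			usleep(1000);
			continue;
		}
		/*
		 * Blocking work (e.g. issuing virtchnl commands and
		 * waiting for the reply) is safe here, because this is
		 * not the eal-intr-thread.
		 */
		handle_event(ev);	/* hypothetical helper */
	}
	return NULL;
}

/* Would be registered with rte_intr_callback_register() elsewhere. */
static void
intr_callback(void *param)
{
	/* Runs in the eal-intr-thread: never block, just hand off. */
	rte_ring_enqueue(event_ring, param);
}

static int
event_handler_init(void)
{
	pthread_t tid;

	event_ring = rte_ring_create("vf_events", 64, rte_socket_id(),
				     RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (event_ring == NULL)
		return -1;

	return rte_ctrl_thread_create(&tid, "vf-event-thread", NULL,
				      event_thread_main, NULL);
}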

And can you please provide a stack trace in the commit log, to describe
the issue better?

> Cc: stable@dpdk.org
> 
> Signed-off-by: Kaiwen Deng <kaiwenx.deng@intel.com>
> ---
> Changes since v2:
> - Poll for the response to VIRTCHNL_OP_GET_STATS.
> 
> Changes since v1:
> - Add lock to avoid race condition.
> ---
> ---
>  drivers/net/bonding/rte_eth_bond_pmd.c |  7 ++-
>  drivers/net/iavf/iavf_ethdev.c         |  5 +-
>  drivers/net/iavf/iavf_vchnl.c          | 71 ++++++++++++++++++--------
>  3 files changed, 58 insertions(+), 25 deletions(-)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index f0c4f7d26b..edce621496 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -894,6 +894,7 @@ bond_ethdev_update_tlb_slave_cb(void *arg)
>  	uint8_t update_stats = 0;
>  	uint16_t slave_id;
>  	uint16_t i;
> +	int ret;
>  
>  	internals->slave_update_idx++;
>  
> @@ -903,7 +904,10 @@ bond_ethdev_update_tlb_slave_cb(void *arg)
>  
>  	for (i = 0; i < internals->active_slave_count; i++) {
>  		slave_id = internals->active_slaves[i];
> -		rte_eth_stats_get(slave_id, &slave_stats);
> +		ret = rte_eth_stats_get(slave_id, &slave_stats);
> +		if (ret)
> +			goto OUT;
> +
>  		tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
>  		bandwidth_left(slave_id, tx_bytes,
>  				internals->slave_update_idx, &bwg_array[i]);
> @@ -922,6 +926,7 @@ bond_ethdev_update_tlb_slave_cb(void *arg)
>  	for (i = 0; i < slave_count; i++)
>  		internals->tlb_slaves_order[i] = bwg_array[i].slave;
>  
> +OUT:
>  	rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
>  			(struct bond_dev_private *)internals);
>  }
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 3196210f2c..d6e1f1a7f4 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -2607,6 +2607,9 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
>  	adapter->dev_data = eth_dev->data;
>  	adapter->stopped = 1;
>  
> +	if (iavf_dev_event_handler_init())
> +		goto init_vf_err;
> +
>  	if (iavf_init_vf(eth_dev) != 0) {
>  		PMD_INIT_LOG(ERR, "Init vf failed");
>  		return -1;
> @@ -2634,8 +2637,6 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
>  	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
>  			&eth_dev->data->mac_addrs[0]);
>  
> -	if (iavf_dev_event_handler_init())
> -		goto init_vf_err;
>  
>  	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
>  		/* register callback func to eal lib */
> diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
> index 9adaadb173..aeffb07cca 100644
> --- a/drivers/net/iavf/iavf_vchnl.c
> +++ b/drivers/net/iavf/iavf_vchnl.c
> @@ -256,6 +256,7 @@ iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
>  				vf->link_speed = iavf_convert_link_speed(speed);
>  			}
>  			iavf_dev_link_update(vf->eth_dev, 0);
> +			iavf_dev_event_post(vf->eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL, 0);
>  			PMD_DRV_LOG(INFO, "Link status update:%s",
>  					vf->link_up ? "up" : "down");
>  			break;
> @@ -368,28 +369,48 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args,
>  		_clear_cmd(vf);
>  		break;
>  	default:
> -		/* For other virtchnl ops in running time,
> -		 * wait for the cmd done flag.
> -		 */
> -		do {
> -			if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
> -				break;
> -			iavf_msec_delay(ASQ_DELAY_MS);
> -			/* If don't read msg or read sys event, continue */
> -		} while (i++ < MAX_TRY_TIMES);
> -
> -		if (i >= MAX_TRY_TIMES) {
> -			PMD_DRV_LOG(ERR, "No response for cmd %d", args->ops);
> +		if (rte_thread_is_intr()) {
> +			/* For virtchnl ops were executed in eal_intr_thread,
> +			 * need to poll the response.
> +			 */
> +			do {
> +				result = iavf_read_msg_from_pf(adapter, args->out_size,
> +							args->out_buffer);
> +				if (result == IAVF_MSG_CMD)
> +					break;
> +				iavf_msec_delay(ASQ_DELAY_MS);
> +			} while (i++ < MAX_TRY_TIMES);
> +			if (i >= MAX_TRY_TIMES ||
> +				vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> +				err = -1;
> +				PMD_DRV_LOG(ERR, "No response or return failure (%d)"
> +						" for cmd %d", vf->cmd_retval, args->ops);
> +			}
>  			_clear_cmd(vf);
> -			err = -EIO;
> -		} else if (vf->cmd_retval ==
> -			   VIRTCHNL_STATUS_ERR_NOT_SUPPORTED) {
> -			PMD_DRV_LOG(ERR, "Cmd %d not supported", args->ops);
> -			err = -ENOTSUP;
> -		} else if (vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> -			PMD_DRV_LOG(ERR, "Return failure %d for cmd %d",
> -				    vf->cmd_retval, args->ops);
> -			err = -EINVAL;
> +		} else {
> +			/* For other virtchnl ops in running time,
> +			 * wait for the cmd done flag.
> +			 */
> +			do {
> +				if (vf->pend_cmd == VIRTCHNL_OP_UNKNOWN)
> +					break;
> +				iavf_msec_delay(ASQ_DELAY_MS);
> +				/* If don't read msg or read sys event, continue */
> +			} while (i++ < MAX_TRY_TIMES);
> +
> +			if (i >= MAX_TRY_TIMES) {
> +				PMD_DRV_LOG(ERR, "No response for cmd %d", args->ops);
> +				_clear_cmd(vf);
> +				err = -EIO;
> +			} else if (vf->cmd_retval ==
> +				VIRTCHNL_STATUS_ERR_NOT_SUPPORTED) {
> +				PMD_DRV_LOG(ERR, "Cmd %d not supported", args->ops);
> +				err = -ENOTSUP;
> +			} else if (vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> +				PMD_DRV_LOG(ERR, "Return failure %d for cmd %d",
> +						vf->cmd_retval, args->ops);
> +				err = -EINVAL;
> +			}
>  		}
>  		break;
>  	}
> @@ -403,8 +424,14 @@ iavf_execute_vf_cmd_safe(struct iavf_adapter *adapter,
>  {
>  	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
>  	int ret;
> +	int is_intr_thread = rte_thread_is_intr();
>  
> -	rte_spinlock_lock(&vf->aq_lock);
> +	if (is_intr_thread) {
> +		if (!rte_spinlock_trylock(&vf->aq_lock))
> +			return -EIO;
> +	} else {
> +		rte_spinlock_lock(&vf->aq_lock);
> +	}
>  	ret = iavf_execute_vf_cmd(adapter, args, async);
>  	rte_spinlock_unlock(&vf->aq_lock);
>  
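
A side note on the locking change at the end of the diff:
rte_spinlock_trylock() returns non-zero only when the lock was actually
taken, so in the interrupt thread the command is dropped with -EIO
instead of spinning on the lock, presumably to avoid the interrupt
thread waiting on a lock held by a thread that is itself waiting for a
reply only the interrupt thread can deliver. A minimal sketch of that
pattern (the lock APIs are real, the rest is hypothetical):

#include <errno.h>

#include <rte_interrupts.h>	/* rte_thread_is_intr() */
#include <rte_spinlock.h>

static rte_spinlock_t aq_lock = RTE_SPINLOCK_INITIALIZER;

static int
send_cmd_locked(void)
{
	int ret;

	if (rte_thread_is_intr()) {
		/* Interrupt thread: never spin on the lock. */
		if (!rte_spinlock_trylock(&aq_lock))
			return -EIO;	/* lock busy, give up right away */
	} else {
		rte_spinlock_lock(&aq_lock);
	}

	ret = 0;	/* ... issue the virtchnl command here ... */

	rte_spinlock_unlock(&aq_lock);
	return ret;
}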


Thread overview: 19+ messages
2023-02-22  4:40 [PATCH] " Kaiwen Deng
2023-02-27  0:56 ` Zhang, Qi Z
2023-02-27  6:02   ` Deng, KaiwenX
2023-03-07  3:27   ` Deng, KaiwenX
2023-03-07  2:55 ` [PATCH v2] " Kaiwen Deng
2023-03-15 13:40   ` Zhang, Qi Z
2023-03-22  7:26   ` [PATCH v3] " Kaiwen Deng
2023-03-23 15:39     ` Ferruh Yigit [this message]
2023-03-27  5:31       ` Deng, KaiwenX
2023-03-27 12:31         ` Ferruh Yigit
2023-03-27 12:37           ` Ferruh Yigit
2023-03-29  7:53             ` Deng, KaiwenX
2023-05-05  2:31             ` Deng, KaiwenX
2023-05-23  1:45               ` Deng, KaiwenX
2023-05-23 10:26                 ` Ferruh Yigit
2023-03-29  6:41           ` Deng, KaiwenX
2023-06-06  5:41     ` Jiale, SongX
2023-06-07  2:03     ` [PATCH v4] " Kaiwen Deng
2023-06-07  4:01       ` Zhang, Qi Z
