patches for DPDK stable branches
From: "Zhang, Qi Z" <qi.z.zhang@intel.com>
To: "Zhao1, Wei" <wei.zhao1@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-stable] [PATCH] net/iavf: fix vector interrupt number configuration error
Date: Tue, 19 Mar 2019 04:00:40 +0000	[thread overview]
Message-ID: <039ED4275CED7440929022BC67E706115334EC3D@SHSMSX103.ccr.corp.intel.com> (raw)
In-Reply-To: <1552964680-45860-1-git-send-email-wei.zhao1@intel.com>



> -----Original Message-----
> From: Zhao1, Wei
> Sent: Tuesday, March 19, 2019 11:05 AM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: [PATCH] net/iavf: fix vector interrupt number configuration error
> 
> There is an issue in the iavf vector interrupt configuration: it misses one
> interrupt vector, the one used for the admin queue interrupt, when
> communicating with the host PF.
> 
> Fixes: 69dd4c3d0898 ("net/avf: enable queue and device")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> Signed-off-by: Zhao Wei <wei.zhao1@intel.com>
> ---
>  drivers/net/iavf/iavf_ethdev.c | 4 ++--
>  drivers/net/iavf/iavf_vchnl.c  | 8 +++++++-
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 846e604..49c9499 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -308,7 +308,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
>  	if (!dev->data->dev_conf.intr_conf.rxq ||
>  	    !rte_intr_dp_is_en(intr_handle)) {
>  		/* Rx interrupt disabled, Map interrupt only for writeback */
> -		vf->nb_msix = 1;
> +		vf->nb_msix = 2;
>  		if (vf->vf_res->vf_cap_flags &
>  		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
>  			/* If WB_ON_ITR supports, enable it */
> @@ -338,7 +338,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
>  			vf->rxq_map[vf->msix_base] |= 1 << i;
>  	} else {
>  		if (!rte_intr_allow_others(intr_handle)) {
> -			vf->nb_msix = 1;
> +			vf->nb_msix = 2;
>  			vf->msix_base = IAVF_MISC_VEC_ID;
>  			for (i = 0; i < dev->data->nb_rx_queues; i++) {
>  				vf->rxq_map[vf->msix_base] |= 1 << i;


Looks like something is missing in the else branch below:

		} else {
			/* If Rx interrupt is required, and we can use
			 * multi interrupts, then the vec is from 1
			 */
			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors,
					      intr_handle->nb_efd);
			vf->msix_base = IAVF_RX_VEC_START;
			vec = IAVF_RX_VEC_START;
			for (i = 0; i < dev->data->nb_rx_queues; i++) {
				vf->rxq_map[vec] |= 1 << i;
				intr_handle->intr_vec[i] = vec++;
				if (vec >= vf->nb_msix)
					vec = IAVF_RX_VEC_START;
			}
			PMD_DRV_LOG(DEBUG,
				    "%u vectors are mapping to %u Rx queues",
				    vf->nb_msix, dev->data->nb_rx_queues);
		}
Should we also reserve one vector for the admin queue in this case?
Otherwise it looks like some Rx queues will end up sharing an IRQ vector with the admin queue during iavf_config_irq_map, which is not expected.
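
Something like the below is what I have in mind, just a rough sketch of one way to keep vector 0 for the admin queue (not compile-tested, and whether max_vectors already counts the misc vector needs a double check):

		} else {
			/* Rough sketch only: count the admin queue vector in
			 * nb_msix and hand out Rx vectors starting from
			 * IAVF_RX_VEC_START, so no Rx queue lands on vector 0.
			 */
			vf->nb_msix = RTE_MIN(vf->vf_res->max_vectors - 1,
					      intr_handle->nb_efd) + 1;
			vf->msix_base = IAVF_RX_VEC_START;
			vec = IAVF_RX_VEC_START;
			for (i = 0; i < dev->data->nb_rx_queues; i++) {
				vf->rxq_map[vec] |= 1 << i;
				intr_handle->intr_vec[i] = vec++;
				if (vec >= vf->nb_msix)
					vec = IAVF_RX_VEC_START;
			}
		}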


> diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
> index 6381fb6..d9a376e 100644
> --- a/drivers/net/iavf/iavf_vchnl.c
> +++ b/drivers/net/iavf/iavf_vchnl.c
> @@ -609,7 +609,7 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
>  		return -ENOMEM;
> 
>  	map_info->num_vectors = vf->nb_msix;
> -	for (i = 0; i < vf->nb_msix; i++) {
> +	for (i = 0; i < vf->nb_msix - 1; i++) {
>  		vecmap = &map_info->vecmap[i];
>  		vecmap->vsi_id = vf->vsi_res->vsi_id;
>  		vecmap->rxitr_idx = IAVF_ITR_INDEX_DEFAULT;
> @@ -618,6 +618,12 @@ iavf_config_irq_map(struct iavf_adapter *adapter)
>  		vecmap->rxq_map = vf->rxq_map[vf->msix_base + i];
>  	}
> 
> +	vecmap = &map_info->vecmap[i];
> +	vecmap->vsi_id = vf->vsi_res->vsi_id;
> +	vecmap->vector_id = 0;
> +	vecmap->txq_map = 0;
> +	vecmap->rxq_map = 0;
> +
>  	args.ops = VIRTCHNL_OP_CONFIG_IRQ_MAP;
>  	args.in_args = (u8 *)map_info;
>  	args.in_args_size = len;
> --
> 2.7.5
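
For reference, my reading of the structures being filled in iavf_config_irq_map, trimmed from virtchnl.h from memory, so please double check against the header:

struct virtchnl_vector_map {
	u16 vsi_id;
	u16 vector_id;	/* vector 0 is normally the misc/admin queue vector */
	u16 rxq_map;	/* bitmap of Rx queues bound to this vector */
	u16 txq_map;	/* bitmap of Tx queues bound to this vector */
	u16 rxitr_idx;
	u16 txitr_idx;
};

struct virtchnl_irq_map_info {
	u16 num_vectors;
	struct virtchnl_vector_map vecmap[1];
};

With the loop stopping at nb_msix - 1 and the extra entry using vector_id 0 with empty queue maps, the admin queue vector at least gets reported to the PF, but that only works if nb_msix really includes the extra vector, which ties back to the ethdev changes above.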


Thread overview: 10+ messages
2019-03-19  3:04 Wei Zhao
2019-03-19  4:00 ` Zhang, Qi Z [this message]
2019-03-19  5:57   ` Zhao1, Wei
2019-03-19  4:16 ` [dpdk-stable] [dpdk-dev] " Stillwell Jr, Paul M
2019-03-21  3:30   ` Zhao1, Wei
2019-03-22  6:27 ` [dpdk-stable] [PATCH v2] " Wei Zhao
2019-03-26  2:40   ` [dpdk-stable] [PATCH v3] net/iavf: fix Tx interrupt vector " Wei Zhao
2019-03-26  5:07     ` [dpdk-stable] [PATCH v4] " Wei Zhao
2019-03-26  7:03       ` Zhang, Qi Z
2019-04-02  2:49         ` [dpdk-stable] [dpdk-dev] " Zhang, Qi Z
