From: "Zeng, XiaoxiaoX" <xiaoxiaox.zeng@intel.com>
To: "Xu, Ting" <ting.xu@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Xing, Beilei" <beilei.xing@intel.com>,
"Wu, Jingjing" <jingjing.wu@intel.com>,
"Ye, Xiaolong" <xiaolong.ye@intel.com>,
"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v1] net/iavf: fix setting wrong RXDID value for Rx queue
Date: Fri, 15 May 2020 03:29:40 +0000
Message-ID: <FA979DD015B0CA41A7C777E75BD0A9F003E903BC@CDSMSX102.ccr.corp.intel.com>
In-Reply-To: <20200511152748.21144-1-ting.xu@intel.com>
Tested-by: Zeng, XiaoxiaoX <xiaoxiaox.zeng@intel.com>

Best regards,
Zeng, Xiaoxiao
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ting Xu
> Sent: Monday, May 11, 2020 11:28 PM
> To: dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Ye, Xiaolong <xiaolong.ye@intel.com>;
> stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v1] net/iavf: fix setting wrong RXDID value for Rx
> queue
>
> CVL kernel PF configures all reserved queues for a VF, including the Rx
> queue RXDID. The number of reserved queues is the maximum of the Tx and
> Rx queue counts. If fewer Rx queues are enabled than reserved, the
> required RXDID is set only for the enabled queues, while the others are
> left at the default value (0). However, RXDID 0 (the legacy 16-byte
> descriptor) is not currently supported, so the PF returns an error when
> configuring those disabled VF queues.
>
> This patch sets the required RXDID for all reserved Rx queues, whether
> they are enabled or not, so that the PF can configure the Rx queues
> correctly without reporting an error.
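
For readers skimming the thread, here is a minimal, self-contained C sketch
of the idea behind the fix. The struct, the flag values and the
select_rxdid() helper are simplified stand-ins for illustration only, not
the real iavf/virtchnl definitions; the point is just that a supported
RXDID is chosen for every reserved queue pair, while the ring/buffer setup
stays guarded by the enabled-queue check.

#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1ULL << (n))
/* Stand-in values for illustration only, not the real virtchnl/iavf ones. */
#define VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC BIT(4)
#define IAVF_RXDID_LEGACY_1     1
#define IAVF_RXDID_COMMS_OVS_1  22

struct rxq_cfg {
	uint8_t rxdid;   /* descriptor format requested from the PF */
	int enabled;     /* Rx queue actually enabled by the application */
};

/*
 * num_reserved = max(nb_rx, nb_tx): the kernel PF configures every
 * reserved queue pair, so each pair needs a supported RXDID even if
 * its Rx queue is not enabled.
 */
static void
select_rxdid(struct rxq_cfg *qp, int num_reserved, int nb_rx,
	     uint64_t vf_cap_flags, uint64_t supported_rxdid)
{
	for (int i = 0; i < num_reserved; i++) {
		/* Ring and buffer setup would stay inside this check... */
		qp[i].enabled = (i < nb_rx);

		/* ...but the RXDID is chosen for every reserved pair. */
		if ((vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) &&
		    (supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)))
			qp[i].rxdid = IAVF_RXDID_COMMS_OVS_1;
		else
			qp[i].rxdid = IAVF_RXDID_LEGACY_1;

		printf("Queue[%d]: enabled=%d rxdid=%d\n",
		       i, qp[i].enabled, qp[i].rxdid);
	}
}

int
main(void)
{
	/* 2 enabled Rx queues, 4 reserved pairs (e.g. 4 Tx queues). */
	struct rxq_cfg qp[4] = { {0, 0} };

	select_rxdid(qp, 4, 2, VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC,
		     BIT(IAVF_RXDID_COMMS_OVS_1));
	return 0;
}

With 2 enabled Rx queues out of 4 reserved pairs, every pair still ends up
with a non-zero RXDID, which is what keeps the PF from rejecting the
disabled ones.
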
>
> Fixes: b8b4c54ef9b0 ("net/iavf: support flexible Rx descriptor in normal path")
> Cc: stable@dpdk.org
>
> Signed-off-by: Ting Xu <ting.xu@intel.com>
> ---
> drivers/net/iavf/iavf_vchnl.c | 44 +++++++++++++++++------------------
> 1 file changed, 22 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
> index 2a0cdd927..328cfdf01 100644
> --- a/drivers/net/iavf/iavf_vchnl.c
> +++ b/drivers/net/iavf/iavf_vchnl.c
> @@ -593,32 +593,32 @@ iavf_configure_queues(struct iavf_adapter *adapter)
>  			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
>  			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
>  			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
> +		}
>  
>  #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
> -			if (vf->vf_res->vf_cap_flags &
> -			    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
> -			    vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
> -				vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
> -				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
> -					    "Queue[%d]", vc_qp->rxq.rxdid, i);
> -			} else {
> -				vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
> -				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
> -					    "Queue[%d]", vc_qp->rxq.rxdid, i);
> -			}
> +		if (vf->vf_res->vf_cap_flags &
> +		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
> +		    vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
> +			vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
> +			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
> +				    "Queue[%d]", vc_qp->rxq.rxdid, i);
> +		} else {
> +			vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
> +			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
> +				    "Queue[%d]", vc_qp->rxq.rxdid, i);
> +		}
>  #else
> -			if (vf->vf_res->vf_cap_flags &
> -			    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
> -			    vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
> -				vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
> -				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
> -					    "Queue[%d]", vc_qp->rxq.rxdid, i);
> -			} else {
> -				PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
> -				return -1;
> -			}
> -#endif
> +		if (vf->vf_res->vf_cap_flags &
> +		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
> +		    vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
> +			vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
> +			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
> +				    "Queue[%d]", vc_qp->rxq.rxdid, i);
> +		} else {
> +			PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
> +			return -1;
>  		}
> +#endif
>  	}
>  
>  	memset(&args, 0, sizeof(args));
> --
> 2.17.1