From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Xu, Ting"
To: "Xing, Beilei" , "dev@dpdk.org"
CC: "Zhang, Qi Z" , "Wu, Jingjing"
Subject: Re: [dpdk-dev] [PATCH v6 1/2] net/iavf: add IAVF request queues function
Date: Sun, 18 Oct 2020 10:29:40 +0000
References: <20200909072028.16726-1-ting.xu@intel.com>
 <20201016014329.54760-1-ting.xu@intel.com>
 <20201016014329.54760-2-ting.xu@intel.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
List-Id: DPDK patches and discussions
Sender: "dev"

> -----Original Message-----
> From: Xing, Beilei
> Sent: Friday, October 16, 2020 4:42 PM
> To: Xu, Ting ; dev@dpdk.org
> Cc: Zhang, Qi Z ; Wu, Jingjing
> Subject: RE: [PATCH v6 1/2] net/iavf: add IAVF request queues function
>
>
>
> > -----Original Message-----
> > From: Xu, Ting
> > Sent: Friday, October 16, 2020 9:43 AM
> > To: dev@dpdk.org
> > Cc: Zhang, Qi Z ; Xing, Beilei
> > ; Wu, Jingjing ; Xu, Ting
> > Subject: [PATCH v6 1/2] net/iavf: add IAVF request queues function
> >
> > Add new virtchnl function to request additional queues from PF.
> > Current default queue pairs number when creating a VF is 16. In order
> > to support up to 256 queue pairs, enable this request queues function.
> > Since request queues command may return event message, modify function
> > iavf_read_msg_from_pf to identify event opcode and mark VF reset status.
> >
> > Signed-off-by: Ting Xu
> > ---
> >  drivers/net/iavf/iavf.h        |   9 ++
> >  drivers/net/iavf/iavf_ethdev.c |  11 +-
> >  drivers/net/iavf/iavf_vchnl.c  | 226 +++++++++++++++++++++++++--------
> >  3 files changed, 192 insertions(+), 54 deletions(-)
> >
> > diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
> > index d56611608..93c165c62 100644
> > --- a/drivers/net/iavf/iavf.h
> > +++ b/drivers/net/iavf/iavf.h
> > @@ -107,6 +107,14 @@ struct iavf_fdir_info {
> >  /* TODO: is that correct to assume the max number to be 16 ?*/
> >  #define IAVF_MAX_MSIX_VECTORS   16
> >
> > +/* Message type read in admin queue from PF */
> > +enum iavf_aq_result {
> > +	IAVF_MSG_ERR = -1, /* Meet error when accessing admin queue */
> > +	IAVF_MSG_NON,      /* Read nothing from admin queue */
> > +	IAVF_MSG_SYS,      /* Read system msg from admin queue */
> > +	IAVF_MSG_CMD,      /* Read async command result */
> > +};
>
> Is there no such message type in shared code?
>
Yes, I did not find similar code in the shared code suitable for this case.

> > +
> >  /* Structure to store private data specific for VF instance. */
> >  struct iavf_info {
> >  	uint16_t num_queue_pairs;
> > @@ -301,4 +309,5 @@ int iavf_add_del_rss_cfg(struct iavf_adapter *adapter,
> >  int iavf_add_del_mc_addr_list(struct iavf_adapter *adapter,
> >  			struct rte_ether_addr *mc_addrs,
> >  			uint32_t mc_addrs_num, bool add);
> > +int iavf_request_queues(struct iavf_adapter *adapter, uint16_t num);
> >  #endif /* _IAVF_ETHDEV_H_ */
> > diff --git a/drivers/net/iavf/iavf_ethdev.c
> > b/drivers/net/iavf/iavf_ethdev.c
> > index 8b1cf8f1c..a4a28b885 100644
> > --- a/drivers/net/iavf/iavf_ethdev.c
> > +++ b/drivers/net/iavf/iavf_ethdev.c
> > @@ -1282,7 +1282,7 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
> >  }
> >
> >  static int
> > -iavf_check_vf_reset_done(struct iavf_hw *hw)
> > +iavf_check_vf_reset_done(struct iavf_hw *hw, struct iavf_info *vf)
> >  {
> >  	int i, reset;
> >
> > @@ -1299,6 +1299,9 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
> >  	if (i >= IAVF_RESET_WAIT_CNT)
> >  		return -1;
> >
> > +	/* VF is not in reset or reset is completed */
> > +	vf->vf_reset = false;
>
> Seems it's not related to the feature.
> Is the fix for commit 1eab95fe2e36e191ad85a9aacf82a44e7c8011fc?
> If yes, it's better to separate bug fix from the feature.
>
I missed the vf->vf_reset setting at the bottom of iavf_dev_init(). It is not needed here.

> > +
> >  	return 0;
> >  }
> >
> > @@ -1666,7 +1669,7 @@ iavf_init_vf(struct rte_eth_dev *dev)
> >  		goto err;
> >  	}
> >
> > -	err = iavf_check_vf_reset_done(hw);
> > +	err = iavf_check_vf_reset_done(hw, vf);
> >  	if (err) {
> >  		PMD_INIT_LOG(ERR, "VF is still resetting");
> >  		goto err;
> > @@ -1911,7 +1914,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
> >
> >  	iavf_dev_stop(dev);
> >  	iavf_flow_flush(dev, NULL);
> > -	iavf_flow_uninit(adapter);
> > +	/* if VF is in reset, adminq is disabled, skip the process via adminq */
> > +	if (!vf->vf_reset)
> > +		iavf_flow_uninit(adapter);
>
> Same as above.
>
> >
> >  	/*
> >  	 * disable promiscuous mode before reset vf
> > diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
> > index 5e7142893..11a1ff608 100644
> > --- a/drivers/net/iavf/iavf_vchnl.c
> > +++ b/drivers/net/iavf/iavf_vchnl.c
> > @@ -17,6 +17,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include
> >
> >  #include "iavf.h"
> > @@ -25,14 +26,54 @@
> >  #define MAX_TRY_TIMES    200
> >  #define ASQ_DELAY_MS     10
> >
> > +static uint32_t
> > +iavf_convert_link_speed(enum virtchnl_link_speed virt_link_speed)
> > +{
> > +	uint32_t speed;
> > +
> > +	switch (virt_link_speed) {
> > +	case VIRTCHNL_LINK_SPEED_100MB:
> > +		speed = 100;
> > +		break;
> > +	case VIRTCHNL_LINK_SPEED_1GB:
> > +		speed = 1000;
> > +		break;
> > +	case VIRTCHNL_LINK_SPEED_10GB:
> > +		speed = 10000;
> > +		break;
> > +	case VIRTCHNL_LINK_SPEED_40GB:
> > +		speed = 40000;
> > +		break;
> > +	case VIRTCHNL_LINK_SPEED_20GB:
> > +		speed = 20000;
> > +		break;
> > +	case VIRTCHNL_LINK_SPEED_25GB:
> > +		speed = 25000;
> > +		break;
> > +	case VIRTCHNL_LINK_SPEED_2_5GB:
> > +		speed = 2500;
> > +		break;
> > +	case VIRTCHNL_LINK_SPEED_5GB:
> > +		speed = 5000;
> > +		break;
> > +	default:
> > +		speed = 0;
> > +		break;
> > +	}
> > +
> > +	return speed;
> > +}
> > +
> >  /* Read data in admin queue to get msg from pf driver */
> > -static enum iavf_status
> > +static enum iavf_aq_result
> >  iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
> >  		     uint8_t *buf)
> >  {
> >  	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
> >  	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
> > +	struct rte_eth_dev *dev = adapter->eth_dev;
> >  	struct iavf_arq_event_info event;
> > +	enum iavf_aq_result result = IAVF_MSG_NON;
> >  	enum virtchnl_ops opcode;
> >  	int ret;
> >
> > @@ -42,7 +83,9 @@ iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
> >  	/* Can't read any msg from adminQ */
> >  	if (ret) {
> >  		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
> > -		return ret;
> > +		if (ret != IAVF_ERR_ADMIN_QUEUE_NO_WORK)
> > +			result = IAVF_MSG_ERR;
> > +		return result;
> >  	}
> >
> >  	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
> > @@ -52,16 +95,51 @@ iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
> >  	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
> >  		    opcode, vf->cmd_retval);
> >
> > -	if (opcode != vf->pend_cmd) {
> > -		if (opcode != VIRTCHNL_OP_EVENT) {
> > -			PMD_DRV_LOG(WARNING,
> > -				    "command mismatch, expect %u, get %u",
> > -				    vf->pend_cmd, opcode);
> > +	if (opcode == VIRTCHNL_OP_EVENT) {
> > +		struct virtchnl_pf_event *vpe =
> > +			(struct virtchnl_pf_event *)event.msg_buf;
> > +
> > +		result = IAVF_MSG_SYS;
> > +		switch (vpe->event) {
> > +		case VIRTCHNL_EVENT_LINK_CHANGE:
> > +			vf->link_up =
> > +				vpe->event_data.link_event.link_status;
> > +			if (vf->vf_res->vf_cap_flags &
> > +				VIRTCHNL_VF_CAP_ADV_LINK_SPEED) {
> > +				vf->link_speed =
> > +					vpe->event_data.link_event_adv.link_speed;
> > +			} else {
> > +				enum virtchnl_link_speed speed;
> > +				speed = vpe->event_data.link_event.link_speed;
> > +				vf->link_speed = iavf_convert_link_speed(speed);
> > +			}
> > +			iavf_dev_link_update(dev, 0);
> > +			PMD_DRV_LOG(INFO, "Link status update:%s",
> > +					vf->link_up ? "up" : "down");
> > +			break;
> > +		case VIRTCHNL_EVENT_RESET_IMPENDING:
> > +			vf->vf_reset = true;
> > +			PMD_DRV_LOG(INFO, "VF is resetting");
> > +			break;
> > +		case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
> > +			vf->dev_closed = true;
> > +			PMD_DRV_LOG(INFO, "PF driver closed");
> > +			break;
> > +		default:
> > +			PMD_DRV_LOG(ERR, "%s: Unknown event %d from pf",
> > +					__func__, vpe->event);
> > +		}
> > +	} else {
> > +		/* async reply msg on command issued by vf previously */
> > +		result = IAVF_MSG_CMD;
> > +		if (opcode != vf->pend_cmd) {
> > +			PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
> > +				    vf->pend_cmd, opcode);
> > +			result = IAVF_MSG_ERR;
> >  		}
> > -		return IAVF_ERR_OPCODE_MISMATCH;
> >  	}
> >
> > -	return IAVF_SUCCESS;
> > +	return result;
> >  }
>
> How about separate this part which is handling the msg from PF?
>
Done

> >
> >  static int
> > @@ -69,6 +147,7 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
> >  {
> >  	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
> >  	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
> > +	enum iavf_aq_result result;
> >  	enum iavf_status ret;
> >  	int err = 0;
> >  	int i = 0;
> > @@ -97,9 +176,9 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
> >  	case VIRTCHNL_OP_GET_SUPPORTED_RXDIDS:
> >  		/* for init virtchnl ops, need to poll the response */
> >  		do {
> > -			ret = iavf_read_msg_from_pf(adapter, args->out_size,
> > +			result = iavf_read_msg_from_pf(adapter, args->out_size,
> >  					args->out_buffer);
> > -			if (ret == IAVF_SUCCESS)
> > +			if (result == IAVF_MSG_CMD)
> >  				break;
> >  			rte_delay_ms(ASQ_DELAY_MS);
> >  		} while (i++ < MAX_TRY_TIMES);
> > @@ -111,7 +190,33 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
> >  		}
> >  		_clear_cmd(vf);
> >  		break;
> > -
> > +	case VIRTCHNL_OP_REQUEST_QUEUES:
> > +		/*
> > +		 * ignore async reply, only wait for system message,
> > +		 * vf_reset = true if get VIRTCHNL_EVENT_RESET_IMPENDING,
> > +		 * if not, means request queues failed.
> > +		 */
> > +		do {
> > +			result = iavf_read_msg_from_pf(adapter, args->out_size,
> > +					args->out_buffer);
> > +			if (result == IAVF_MSG_SYS && vf->vf_reset) {
> > +				break;
> > +			} else if (result == IAVF_MSG_CMD ||
> > +					result == IAVF_MSG_ERR) {
> > +				err = -1;
> > +				break;
> > +			}
> > +			rte_delay_ms(ASQ_DELAY_MS);
> > +			/* If don't read msg or read sys event, continue */
> > +		} while (i++ < MAX_TRY_TIMES);
> > +		if (i >= MAX_TRY_TIMES ||
> > +			vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> > +			err = -1;
> > +			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
> > +				    " for cmd %d", vf->cmd_retval, args->ops);
> > +		}
> > +		_clear_cmd(vf);
> > +		break;
> >  	default:
> >  		/* For other virtchnl ops in running time,
> >  		 * wait for the cmd done flag.
> > @@ -136,44 +241,6 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
> >  	return err;
> >  }
> >
> > -static uint32_t
> > -iavf_convert_link_speed(enum virtchnl_link_speed virt_link_speed)
> > -{
> > -	uint32_t speed;
> > -
> > -	switch (virt_link_speed) {
> > -	case VIRTCHNL_LINK_SPEED_100MB:
> > -		speed = 100;
> > -		break;
> > -	case VIRTCHNL_LINK_SPEED_1GB:
> > -		speed = 1000;
> > -		break;
> > -	case VIRTCHNL_LINK_SPEED_10GB:
> > -		speed = 10000;
> > -		break;
> > -	case VIRTCHNL_LINK_SPEED_40GB:
> > -		speed = 40000;
> > -		break;
> > -	case VIRTCHNL_LINK_SPEED_20GB:
> > -		speed = 20000;
> > -		break;
> > -	case VIRTCHNL_LINK_SPEED_25GB:
> > -		speed = 25000;
> > -		break;
> > -	case VIRTCHNL_LINK_SPEED_2_5GB:
> > -		speed = 2500;
> > -		break;
> > -	case VIRTCHNL_LINK_SPEED_5GB:
> > -		speed = 5000;
> > -		break;
> > -	default:
> > -		speed = 0;
> > -		break;
> > -	}
> > -
> > -	return speed;
> > -}
> > -
> >  static void
> >  iavf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
> >  			uint16_t msglen)
> > @@ -389,7 +456,8 @@ iavf_get_vf_resource(struct iavf_adapter *adapter)
> >  	caps = IAVF_BASIC_OFFLOAD_CAPS | VIRTCHNL_VF_CAP_ADV_LINK_SPEED |
> >  		VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
> >  		VIRTCHNL_VF_OFFLOAD_FDIR_PF |
> > -		VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF;
> > +		VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF |
> > +		VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
> >
> >  	args.in_args = (uint8_t *)&caps;
> >  	args.in_args_size = sizeof(caps);
> > @@ -1148,3 +1216,59 @@ iavf_add_del_mc_addr_list(struct iavf_adapter *adapter,
> >
> >  	return 0;
> >  }
> > +
> > +int
> > +iavf_request_queues(struct iavf_adapter *adapter, uint16_t num)
> > +{
> > +	struct rte_eth_dev *dev = adapter->eth_dev;
> > +	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
> > +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> > +	struct virtchnl_vf_res_request vfres;
> > +	struct iavf_cmd_info args;
> > +	uint16_t num_queue_pairs;
> > +	int err;
> > +
> > +	if (!(vf->vf_res->vf_cap_flags &
> > +		VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)) {
> > +		PMD_DRV_LOG(ERR, "request queues not supported");
> > +		return -1;
> > +	}
> > +
> > +	if (num == 0) {
> > +		PMD_DRV_LOG(ERR, "queue number cannot be zero");
> > +		return -1;
> > +	}
> > +	vfres.num_queue_pairs = num;
> > +
> > +	args.ops = VIRTCHNL_OP_REQUEST_QUEUES;
> > +	args.in_args = (u8 *)&vfres;
> > +	args.in_args_size = sizeof(vfres);
> > +	args.out_buffer = vf->aq_resp;
> > +	args.out_size = IAVF_AQ_BUF_SZ;
> > +
> > +	/*
> > +	 * disable interrupt to avoid the admin queue message to be read
> > +	 * before iavf_read_msg_from_pf.
> > +	 */
> > +	rte_intr_disable(&pci_dev->intr_handle);
> > +	err = iavf_execute_vf_cmd(adapter, &args);
> > +	rte_intr_enable(&pci_dev->intr_handle);
> > +	if (err) {
> > +		PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
> > +		return err;
> > +	}
> > +
> > +	/* request queues succeeded, vf is resetting */
> > +	if (vf->vf_reset) {
> > +		PMD_DRV_LOG(INFO, "vf is resetting");
> > +		return 0;
> > +	}
> > +
> > +	/* request additional queues failed, return available number */
> > +	num_queue_pairs =
> > +		((struct virtchnl_vf_res_request *)args.out_buffer)->num_queue_pairs;
> > +	PMD_DRV_LOG(ERR, "request queues failed, only %u queues "
> > +		    "available", num_queue_pairs);
> > +
> > +	return -1;
> > +}
> > --
> > 2.17.1