From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Xing, Beilei" <beilei.xing@intel.com>
To: "Xu, Ting" <ting.xu@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
CC: "Zhang, Qi Z" <qi.z.zhang@intel.com>, "Wu, Jingjing"
 <jingjing.wu@intel.com>
Date: Fri, 16 Oct 2020 08:41:38 +0000
Message-ID: <MN2PR11MB3807E4010BF68C7B5202AE88F7030@MN2PR11MB3807.namprd11.prod.outlook.com>
References: <20200909072028.16726-1-ting.xu@intel.com>
 <20201016014329.54760-1-ting.xu@intel.com>
 <20201016014329.54760-2-ting.xu@intel.com>
In-Reply-To: <20201016014329.54760-2-ting.xu@intel.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v6 1/2] net/iavf: add IAVF request queues function
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>



> -----Original Message-----
> From: Xu, Ting <ting.xu@intel.com>
> Sent: Friday, October 16, 2020 9:43 AM
> To: dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Wu, Jingjing <jingjing.wu@intel.com>; Xu, Ting <ting.xu@intel.com>
> Subject: [PATCH v6 1/2] net/iavf: add IAVF request queues function
>
> Add new virtchnl function to request additional queues from PF. Current
> default queue pairs number when creating a VF is 16. In order to support up to
> 256 queue pairs, enable this request queues function.
> Since request queues command may return event message, modify function
> iavf_read_msg_from_pf to identify event opcode and mark VF reset status.
>
> Signed-off-by: Ting Xu <ting.xu@intel.com>
> ---
>  drivers/net/iavf/iavf.h        |   9 ++
>  drivers/net/iavf/iavf_ethdev.c |  11 +-
>  drivers/net/iavf/iavf_vchnl.c  | 226 +++++++++++++++++++++++++--------
>  3 files changed, 192 insertions(+), 54 deletions(-)
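
Just to double-check the expected flow: the VF asks the PF for more
queue pairs, and if the PF grants them it signals
VIRTCHNL_EVENT_RESET_IMPENDING, so the driver has to go through a VF
reset before the new queues are usable. Roughly like below (my own
sketch, not from this patch; I assume patch 2/2 adds the real caller):

	/* hypothetical call site, e.g. during device configure */
	if (num_queue_pairs > vf->vf_res->num_queue_pairs) {
		ret = iavf_request_queues(adapter, num_queue_pairs);
		if (ret)
			return ret;
		/* PF has started a VF reset; re-init to pick up
		 * the new queue allocation.
		 */
		ret = iavf_dev_reset(dev);
	}

Is that the intended usage?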
>
> diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
> index d56611608..93c165c62 100644
> --- a/drivers/net/iavf/iavf.h
> +++ b/drivers/net/iavf/iavf.h
> @@ -107,6 +107,14 @@ struct iavf_fdir_info {
>  /* TODO: is that correct to assume the max number to be 16 ?*/
>  #define IAVF_MAX_MSIX_VECTORS   16
>
> +/* Message type read in admin queue from PF */
> +enum iavf_aq_result {
> +	IAVF_MSG_ERR = -1, /* Meet error when accessing admin queue */
> +	IAVF_MSG_NON,      /* Read nothing from admin queue */
> +	IAVF_MSG_SYS,      /* Read system msg from admin queue */
> +	IAVF_MSG_CMD,      /* Read async command result */
> +};

Is there no such message type in shared code?

> +
>  /* Structure to store private data specific for VF instance. */
>  struct iavf_info {
>  	uint16_t num_queue_pairs;
> @@ -301,4 +309,5 @@ int iavf_add_del_rss_cfg(struct iavf_adapter *adapter,
> int iavf_add_del_mc_addr_list(struct iavf_adapter *adapter,
>  			struct rte_ether_addr *mc_addrs,
>  			uint32_t mc_addrs_num, bool add);
> +int iavf_request_queues(struct iavf_adapter *adapter, uint16_t num);
>  #endif /* _IAVF_ETHDEV_H_ */
> diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
> index 8b1cf8f1c..a4a28b885 100644
> --- a/drivers/net/iavf/iavf_ethdev.c
> +++ b/drivers/net/iavf/iavf_ethdev.c
> @@ -1282,7 +1282,7 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
>  }
>
>  static int
> -iavf_check_vf_reset_done(struct iavf_hw *hw)
> +iavf_check_vf_reset_done(struct iavf_hw *hw, struct iavf_info *vf)
>  {
>  	int i, reset;
>
> @@ -1299,6 +1299,9 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
>  	if (i >= IAVF_RESET_WAIT_CNT)
>  		return -1;
>
> +	/* VF is not in reset or reset is completed */
> +	vf->vf_reset = false;

This seems unrelated to the feature itself.
Is it a fix for commit 1eab95fe2e36e191ad85a9aacf82a44e7c8011fc?
If yes, it's better to separate the bug fix from the feature patch.

> +
>  	return 0;
>  }
>
> @@ -1666,7 +1669,7 @@ iavf_init_vf(struct rte_eth_dev *dev)
>  		goto err;
>  	}
>=20
> -	err = iavf_check_vf_reset_done(hw);
> +	err = iavf_check_vf_reset_done(hw, vf);
>  	if (err) {
>  		PMD_INIT_LOG(ERR, "VF is still resetting");
>  		goto err;
> @@ -1911,7 +1914,9 @@ iavf_dev_close(struct rte_eth_dev *dev)
>
>  	iavf_dev_stop(dev);
>  	iavf_flow_flush(dev, NULL);
> -	iavf_flow_uninit(adapter);
> +	/* if VF is in reset, adminq is disabled, skip the process via adminq */
> +	if (!vf->vf_reset)
> +		iavf_flow_uninit(adapter);

Same as above.

>
>  	/*
>  	 * disable promiscuous mode before reset vf
> diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
> index 5e7142893..11a1ff608 100644
> --- a/drivers/net/iavf/iavf_vchnl.c
> +++ b/drivers/net/iavf/iavf_vchnl.c
> @@ -17,6 +17,7 @@
>  #include <rte_eal.h>
>  #include <rte_ether.h>
>  #include <rte_ethdev_driver.h>
> +#include <rte_ethdev_pci.h>
>  #include <rte_dev.h>
>
>  #include "iavf.h"
> @@ -25,14 +26,54 @@
>  #define MAX_TRY_TIMES 200
>  #define ASQ_DELAY_MS  10
>
> +static uint32_t
> +iavf_convert_link_speed(enum virtchnl_link_speed virt_link_speed)
> +{
> +	uint32_t speed;
> +
> +	switch (virt_link_speed) {
> +	case VIRTCHNL_LINK_SPEED_100MB:
> +		speed = 100;
> +		break;
> +	case VIRTCHNL_LINK_SPEED_1GB:
> +		speed = 1000;
> +		break;
> +	case VIRTCHNL_LINK_SPEED_10GB:
> +		speed = 10000;
> +		break;
> +	case VIRTCHNL_LINK_SPEED_40GB:
> +		speed = 40000;
> +		break;
> +	case VIRTCHNL_LINK_SPEED_20GB:
> +		speed = 20000;
> +		break;
> +	case VIRTCHNL_LINK_SPEED_25GB:
> +		speed = 25000;
> +		break;
> +	case VIRTCHNL_LINK_SPEED_2_5GB:
> +		speed = 2500;
> +		break;
> +	case VIRTCHNL_LINK_SPEED_5GB:
> +		speed = 5000;
> +		break;
> +	default:
> +		speed = 0;
> +		break;
> +	}
> +
> +	return speed;
> +}
> +
>  /* Read data in admin queue to get msg from pf driver */
> -static enum iavf_status
> +static enum iavf_aq_result
>  iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
>  		     uint8_t *buf)
>  {
>  	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
>  	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
> +	struct rte_eth_dev *dev = adapter->eth_dev;
>  	struct iavf_arq_event_info event;
> +	enum iavf_aq_result result = IAVF_MSG_NON;
>  	enum virtchnl_ops opcode;
>  	int ret;
>
> @@ -42,7 +83,9 @@ iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
>  	/* Can't read any msg from adminQ */
>  	if (ret) {
>  		PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
> -		return ret;
> +		if (ret != IAVF_ERR_ADMIN_QUEUE_NO_WORK)
> +			result = IAVF_MSG_ERR;
> +		return result;
>  	}
>
>  	opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
> @@ -52,16 +95,51 @@ iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
>  	PMD_DRV_LOG(DEBUG, "AQ from pf carries opcode %u, retval %d",
>  		    opcode, vf->cmd_retval);
>
> -	if (opcode != vf->pend_cmd) {
> -		if (opcode != VIRTCHNL_OP_EVENT) {
> -			PMD_DRV_LOG(WARNING,
> -				    "command mismatch, expect %u, get %u",
> -				    vf->pend_cmd, opcode);
> +	if (opcode == VIRTCHNL_OP_EVENT) {
> +		struct virtchnl_pf_event *vpe =
> +			(struct virtchnl_pf_event *)event.msg_buf;
> +
> +		result = IAVF_MSG_SYS;
> +		switch (vpe->event) {
> +		case VIRTCHNL_EVENT_LINK_CHANGE:
> +			vf->link_up =
> +				vpe->event_data.link_event.link_status;
> +			if (vf->vf_res->vf_cap_flags &
> +				VIRTCHNL_VF_CAP_ADV_LINK_SPEED) {
> +				vf->link_speed =
> +				    vpe->event_data.link_event_adv.link_speed;
> +			} else {
> +				enum virtchnl_link_speed speed;
> +				speed = vpe->event_data.link_event.link_speed;
> +				vf->link_speed = iavf_convert_link_speed(speed);
> +			}
> +			iavf_dev_link_update(dev, 0);
> +			PMD_DRV_LOG(INFO, "Link status update:%s",
> +					vf->link_up ? "up" : "down");
> +			break;
> +		case VIRTCHNL_EVENT_RESET_IMPENDING:
> +			vf->vf_reset = true;
> +			PMD_DRV_LOG(INFO, "VF is resetting");
> +			break;
> +		case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
> +			vf->dev_closed = true;
> +			PMD_DRV_LOG(INFO, "PF driver closed");
> +			break;
> +		default:
> +			PMD_DRV_LOG(ERR, "%s: Unknown event %d from pf",
> +					__func__, vpe->event);
> +		}
> +	}  else {
> +		/* async reply msg on command issued by vf previously */
> +		result = IAVF_MSG_CMD;
> +		if (opcode != vf->pend_cmd) {
> +			PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
> +					vf->pend_cmd, opcode);
> +			result = IAVF_MSG_ERR;
>  		}
> -		return IAVF_ERR_OPCODE_MISMATCH;
>  	}
>
> -	return IAVF_SUCCESS;
> +	return result;
>  }

How about separating the part that handles the msg from the PF into its own function?
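
Something like the below might work (rough, untested sketch; the helper
name is just a suggestion), so that iavf_read_msg_from_pf() only keeps
the adminq read and the opcode dispatch:

static void
iavf_handle_pf_event(struct iavf_adapter *adapter,
		     struct virtchnl_pf_event *vpe)
{
	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct rte_eth_dev *dev = adapter->eth_dev;

	switch (vpe->event) {
	case VIRTCHNL_EVENT_LINK_CHANGE:
		vf->link_up = vpe->event_data.link_event.link_status;
		if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
			vf->link_speed =
				vpe->event_data.link_event_adv.link_speed;
		else
			vf->link_speed = iavf_convert_link_speed(
				vpe->event_data.link_event.link_speed);
		iavf_dev_link_update(dev, 0);
		PMD_DRV_LOG(INFO, "Link status update:%s",
			    vf->link_up ? "up" : "down");
		break;
	case VIRTCHNL_EVENT_RESET_IMPENDING:
		vf->vf_reset = true;
		PMD_DRV_LOG(INFO, "VF is resetting");
		break;
	case VIRTCHNL_EVENT_PF_DRIVER_CLOSE:
		vf->dev_closed = true;
		PMD_DRV_LOG(INFO, "PF driver closed");
		break;
	default:
		PMD_DRV_LOG(ERR, "%s: Unknown event %d from pf",
			    __func__, vpe->event);
	}
}

It also looks very close to the existing iavf_handle_pf_event_msg(),
maybe the two can share code.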

>
>  static int
> @@ -69,6 +147,7 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
>  {
>  	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
>  	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
> +	enum iavf_aq_result result;
>  	enum iavf_status ret;
>  	int err = 0;
>  	int i = 0;
> @@ -97,9 +176,9 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
>  	case VIRTCHNL_OP_GET_SUPPORTED_RXDIDS:
>  		/* for init virtchnl ops, need to poll the response */
>  		do {
> -			ret = iavf_read_msg_from_pf(adapter, args->out_size,
> +			result = iavf_read_msg_from_pf(adapter, args->out_size,
>  						   args->out_buffer);
> -			if (ret == IAVF_SUCCESS)
> +			if (result == IAVF_MSG_CMD)
>  				break;
>  			rte_delay_ms(ASQ_DELAY_MS);
>  		} while (i++ < MAX_TRY_TIMES);
> @@ -111,7 +190,33 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
>  		}
>  		_clear_cmd(vf);
>  		break;
> -
> +	case VIRTCHNL_OP_REQUEST_QUEUES:
> +		/*
> +		 * ignore async reply, only wait for system message,
> +		 * vf_reset = true if get VIRTCHNL_EVENT_RESET_IMPENDING,
> +		 * if not, means request queues failed.
> +		 */
> +		do {
> +			result = iavf_read_msg_from_pf(adapter, args->out_size,
> +						   args->out_buffer);
> +			if (result == IAVF_MSG_SYS && vf->vf_reset) {
> +				break;
> +			} else if (result == IAVF_MSG_CMD ||
> +				result == IAVF_MSG_ERR) {
> +				err = -1;
> +				break;
> +			}
> +			rte_delay_ms(ASQ_DELAY_MS);
> +			/* If don't read msg or read sys event, continue */
> +		} while (i++ < MAX_TRY_TIMES);
> +		if (i >= MAX_TRY_TIMES ||
> +		    vf->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> +			err = -1;
> +			PMD_DRV_LOG(ERR, "No response or return failure (%d)"
> +				    " for cmd %d", vf->cmd_retval, args->ops);
> +		}
> +		_clear_cmd(vf);
> +		break;
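
One readability suggestion: the IAVF_MSG_CMD branch above means the PF
answered the command directly instead of triggering a VF reset, i.e.
the extra queues were not granted. A short comment would help, e.g.
(wording is only a suggestion):

			} else if (result == IAVF_MSG_CMD ||
				result == IAVF_MSG_ERR) {
				/* PF replied without resetting the VF:
				 * the request was rejected or failed.
				 */
				err = -1;
				break;
			}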
>  	default:
>  		/* For other virtchnl ops in running time,
>  		 * wait for the cmd done flag.
> @@ -136,44 +241,6 @@ iavf_execute_vf_cmd(struct iavf_adapter *adapter, struct iavf_cmd_info *args)
>  	return err;
>  }
>
> -static uint32_t
> -iavf_convert_link_speed(enum virtchnl_link_speed virt_link_speed)
> -{
> -	uint32_t speed;
> -
> -	switch (virt_link_speed) {
> -	case VIRTCHNL_LINK_SPEED_100MB:
> -		speed = 100;
> -		break;
> -	case VIRTCHNL_LINK_SPEED_1GB:
> -		speed = 1000;
> -		break;
> -	case VIRTCHNL_LINK_SPEED_10GB:
> -		speed = 10000;
> -		break;
> -	case VIRTCHNL_LINK_SPEED_40GB:
> -		speed = 40000;
> -		break;
> -	case VIRTCHNL_LINK_SPEED_20GB:
> -		speed = 20000;
> -		break;
> -	case VIRTCHNL_LINK_SPEED_25GB:
> -		speed = 25000;
> -		break;
> -	case VIRTCHNL_LINK_SPEED_2_5GB:
> -		speed = 2500;
> -		break;
> -	case VIRTCHNL_LINK_SPEED_5GB:
> -		speed = 5000;
> -		break;
> -	default:
> -		speed = 0;
> -		break;
> -	}
> -
> -	return speed;
> -}
> -
>  static void
>  iavf_handle_pf_event_msg(struct rte_eth_dev *dev, uint8_t *msg,
>  			uint16_t msglen)
> @@ -389,7 +456,8 @@ iavf_get_vf_resource(struct iavf_adapter *adapter)
>  	caps = IAVF_BASIC_OFFLOAD_CAPS | VIRTCHNL_VF_CAP_ADV_LINK_SPEED |
>  		VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC |
>  		VIRTCHNL_VF_OFFLOAD_FDIR_PF |
> -		VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF;
> +		VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF |
> +		VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
>
>  	args.in_args = (uint8_t *)&caps;
>  	args.in_args_size = sizeof(caps);
> @@ -1148,3 +1216,59 @@ iavf_add_del_mc_addr_list(struct iavf_adapter *adapter,
>
>  	return 0;
>  }
> +
> +int
> +iavf_request_queues(struct iavf_adapter *adapter, uint16_t num)
> +{
> +	struct rte_eth_dev *dev = adapter->eth_dev;
> +	struct iavf_info *vf =  IAVF_DEV_PRIVATE_TO_VF(adapter);
> +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> +	struct virtchnl_vf_res_request vfres;
> +	struct iavf_cmd_info args;
> +	uint16_t num_queue_pairs;
> +	int err;
> +
> +	if (!(vf->vf_res->vf_cap_flags &
> +		VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)) {
> +		PMD_DRV_LOG(ERR, "request queues not supported");
> +		return -1;
> +	}
> +
> +	if (num == 0) {
> +		PMD_DRV_LOG(ERR, "queue number cannot be zero");
> +		return -1;
> +	}
> +	vfres.num_queue_pairs = num;
> +
> +	args.ops = VIRTCHNL_OP_REQUEST_QUEUES;
> +	args.in_args = (u8 *)&vfres;
> +	args.in_args_size = sizeof(vfres);
> +	args.out_buffer = vf->aq_resp;
> +	args.out_size = IAVF_AQ_BUF_SZ;
> +
> +	/*
> +	 * disable interrupt to avoid the admin queue message to be read
> +	 * before iavf_read_msg_from_pf.
> +	 */
> +	rte_intr_disable(&pci_dev->intr_handle);
> +	err = iavf_execute_vf_cmd(adapter, &args);
> +	rte_intr_enable(&pci_dev->intr_handle);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
> +		return err;
> +	}
> +
> +	/* request queues succeeded, vf is resetting */
> +	if (vf->vf_reset) {
> +		PMD_DRV_LOG(INFO, "vf is resetting");
> +		return 0;
> +	}
> +
> +	/* request additional queues failed, return available number */
> +	num_queue_pairs =
> +	  ((struct virtchnl_vf_res_request *)args.out_buffer)->num_queue_pairs;
> +	PMD_DRV_LOG(ERR, "request queues failed, only %u queues "
> +		"available", num_queue_pairs);
> +
> +	return -1;
> +}
> --
> 2.17.1