From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Zhang, Qi Z"
To: "Xing, Beilei", "Wu, Jingjing"
CC: "dev@dpdk.org", "Wu, Wenjun1"
Subject: RE: [PATCH v4 03/15] common/idpf: add virtual channel functions
Date: Wed, 18 Jan 2023 04:00:14 +0000
References: <20230117080622.105657-1-beilei.xing@intel.com> <20230117080622.105657-4-beilei.xing@intel.com>
In-Reply-To: <20230117080622.105657-4-beilei.xing@intel.com>
=?us-ascii?Q?5N8dEq/tlE9DKXc6lAsRLuuiB7h0jPTH/q18U2OzHBjNIm4gX9r3LknOdvXv?= =?us-ascii?Q?+5wtQShfmUA6nqzabP4GBkY4IgejdfgYJAw67G+v36Yk33Pvg/zwMzOPqKPA?= =?us-ascii?Q?UwxZN0jZQFIKL3Zp9Ibj7ZR6UISvp36y4hoTXRzrf6+bqs/9xtmQOjki7He6?= =?us-ascii?Q?XFPi0nb9MtMXP7hw1CAH7uQO51FotnnnLsbHerQKjIEoa2UjwCYDPz1K7Qnw?= =?us-ascii?Q?PPmtlOsO/wxTWe5E2jPUNVkPfx4iVF3u8Buj1AOMZUiAGqNwsBQeHogTzeqz?= =?us-ascii?Q?R3mwxL34bDPyfh4AkfnFPN+reJ/33iaKGfHbPVgeEFfEDn+XNOoVOLLD7g9G?= =?us-ascii?Q?GcDRVsiL4Y+TBzCq+M8QKH5GJH3FkF8fbEt8v8tGU5fp1sJ9L9Sk9XCrSKG9?= =?us-ascii?Q?eG/+8eJTZiPNSXh3tqcNn/hEKcTk4E/F68+mwsDCprIofwcj9Bf491TeP6tM?= =?us-ascii?Q?ZzWrBhjrFpJQOMJqW/ITke+KY3F96JHvrZ3hW5PTm6J4aWg8MlcGdzaJ7xKd?= =?us-ascii?Q?Iocy7/tT2mq6/pFVXHnsD3kkHPKVBTqAQw/OeeRA4xiVk0wACR1b4PDO3+JD?= =?us-ascii?Q?cmEqjGVNq3FFEOsCWkJkCzETF7vckRFrfFywMJel?= Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-AuthSource: DM4PR11MB5994.namprd11.prod.outlook.com X-MS-Exchange-CrossTenant-Network-Message-Id: 77e6df93-f550-4822-4b70-08daf90880ce X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jan 2023 04:00:14.2248 (UTC) X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: +EgimckguF50CxrzlIjwat/igAT0U0ISUaK7i5CXPss7YZG9IsiYUSrfi0d8sfHE2BOCwlwTQkGbkgKrM5gA7g== X-MS-Exchange-Transport-CrossTenantHeadersStamped: CO6PR11MB5572 X-OriginatorOrg: intel.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org > -----Original Message----- > From: Xing, Beilei > Sent: Tuesday, January 17, 2023 4:06 PM > To: Wu, Jingjing > Cc: dev@dpdk.org; Zhang, Qi Z ; Xing, Beilei > ; Wu, Wenjun1 > Subject: 
[PATCH v4 03/15] common/idpf: add virtual channel functions >=20 > From: Beilei Xing >=20 > Move most of the virtual channel functions to idpf common module. >=20 > Signed-off-by: Wenjun Wu > Signed-off-by: Beilei Xing > --- > drivers/common/idpf/base/meson.build | 2 +- > drivers/common/idpf/idpf_common_device.c | 8 + > drivers/common/idpf/idpf_common_device.h | 61 ++ > drivers/common/idpf/idpf_common_logs.h | 23 + > drivers/common/idpf/idpf_common_virtchnl.c | 815 > +++++++++++++++++++++ > drivers/common/idpf/idpf_common_virtchnl.h | 48 ++ > drivers/common/idpf/meson.build | 5 + > drivers/common/idpf/version.map | 20 +- > drivers/net/idpf/idpf_ethdev.c | 9 +- > drivers/net/idpf/idpf_ethdev.h | 85 +-- > drivers/net/idpf/idpf_vchnl.c | 815 +-------------------- > 11 files changed, 983 insertions(+), 908 deletions(-) > create mode 100644 drivers/common/idpf/idpf_common_device.c > create mode 100644 drivers/common/idpf/idpf_common_logs.h > create mode 100644 drivers/common/idpf/idpf_common_virtchnl.c > create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h >=20 > diff --git a/drivers/common/idpf/base/meson.build > b/drivers/common/idpf/base/meson.build > index 183587b51a..dc4b93c198 100644 > --- a/drivers/common/idpf/base/meson.build > +++ b/drivers/common/idpf/base/meson.build > @@ -1,7 +1,7 @@ > # SPDX-License-Identifier: BSD-3-Clause > # Copyright(c) 2022 Intel Corporation >=20 > -sources =3D files( > +sources +=3D files( > 'idpf_common.c', > 'idpf_controlq.c', > 'idpf_controlq_setup.c', > diff --git a/drivers/common/idpf/idpf_common_device.c > b/drivers/common/idpf/idpf_common_device.c > new file mode 100644 > index 0000000000..5062780362 > --- /dev/null > +++ b/drivers/common/idpf/idpf_common_device.c > @@ -0,0 +1,8 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2022 Intel Corporation > + */ > + > +#include > +#include > + > +RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE); > diff --git 
a/drivers/common/idpf/idpf_common_device.h > b/drivers/common/idpf/idpf_common_device.h > index b7fff84b25..a7537281d1 100644 > --- a/drivers/common/idpf/idpf_common_device.h > +++ b/drivers/common/idpf/idpf_common_device.h > @@ -7,6 +7,12 @@ >=20 > #include > #include > +#include > + > +#define IDPF_CTLQ_LEN 64 > +#define IDPF_DFLT_MBX_BUF_SIZE 4096 > + > +#define IDPF_MAX_PKT_TYPE 1024 >=20 > struct idpf_adapter { > struct idpf_hw hw; > @@ -76,4 +82,59 @@ struct idpf_vport { > bool stopped; > }; >=20 > +/* Message type read in virtual channel from PF */ > +enum idpf_vc_result { > + IDPF_MSG_ERR =3D -1, /* Meet error when accessing admin queue */ > + IDPF_MSG_NON, /* Read nothing from admin queue */ > + IDPF_MSG_SYS, /* Read system msg from admin queue */ > + IDPF_MSG_CMD, /* Read async command result */ > +}; > + > +/* structure used for sending and checking response of virtchnl ops */ > +struct idpf_cmd_info { > + uint32_t ops; > + uint8_t *in_args; /* buffer for sending */ > + uint32_t in_args_size; /* buffer size for sending */ > + uint8_t *out_buffer; /* buffer for response */ > + uint32_t out_size; /* buffer size for response */ > +}; > + > +/* notify current command done. Only call in case execute > + * _atomic_set_cmd successfully. > + */ > +static inline void > +notify_cmd(struct idpf_adapter *adapter, int msg_ret) > +{ > + adapter->cmd_retval =3D msg_ret; > + /* Return value may be checked in anither thread, need to ensure > the coherence. */ > + rte_wmb(); > + adapter->pend_cmd =3D VIRTCHNL2_OP_UNKNOWN; > +} > + > +/* clear current command. Only call in case execute > + * _atomic_set_cmd successfully. > + */ > +static inline void > +clear_cmd(struct idpf_adapter *adapter) > +{ > + /* Return value may be checked in anither thread, need to ensure > the coherence. */ > + rte_wmb(); > + adapter->pend_cmd =3D VIRTCHNL2_OP_UNKNOWN; > + adapter->cmd_retval =3D VIRTCHNL_STATUS_SUCCESS; > +} > + > +/* Check there is pending cmd in execution. 
If none, set new command. */ > +static inline bool > +atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops) > +{ > + uint32_t op_unk =3D VIRTCHNL2_OP_UNKNOWN; > + bool ret =3D __atomic_compare_exchange(&adapter->pend_cmd, > &op_unk, &ops, > + 0, __ATOMIC_ACQUIRE, > __ATOMIC_ACQUIRE); > + > + if (!ret) > + DRV_LOG(ERR, "There is incomplete cmd %d", adapter- > >pend_cmd); > + > + return !ret; > +} > + > #endif /* _IDPF_COMMON_DEVICE_H_ */ > diff --git a/drivers/common/idpf/idpf_common_logs.h > b/drivers/common/idpf/idpf_common_logs.h > new file mode 100644 > index 0000000000..fe36562769 > --- /dev/null > +++ b/drivers/common/idpf/idpf_common_logs.h > @@ -0,0 +1,23 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2022 Intel Corporation > + */ > + > +#ifndef _IDPF_COMMON_LOGS_H_ > +#define _IDPF_COMMON_LOGS_H_ > + > +#include > + > +extern int idpf_common_logtype; > + > +#define DRV_LOG_RAW(level, ...) \ > + rte_log(RTE_LOG_ ## level, \ > + idpf_common_logtype, \ > + RTE_FMT("%s(): " \ > + RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ > + __func__, \ > + RTE_FMT_TAIL(__VA_ARGS__,))) > + > +#define DRV_LOG(level, fmt, args...) 
\ > + DRV_LOG_RAW(level, fmt "\n", ## args) > + > +#endif /* _IDPF_COMMON_LOGS_H_ */ > diff --git a/drivers/common/idpf/idpf_common_virtchnl.c > b/drivers/common/idpf/idpf_common_virtchnl.c > new file mode 100644 > index 0000000000..2e94a95876 > --- /dev/null > +++ b/drivers/common/idpf/idpf_common_virtchnl.c > @@ -0,0 +1,815 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2022 Intel Corporation > + */ > + > +#include > +#include > + > +static int > +idpf_vc_clean(struct idpf_adapter *adapter) > +{ > + struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN]; > + uint16_t num_q_msg =3D IDPF_CTLQ_LEN; > + struct idpf_dma_mem *dma_mem; > + int err; > + uint32_t i; > + > + for (i =3D 0; i < 10; i++) { > + err =3D idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, > q_msg); > + msleep(20); > + if (num_q_msg > 0) > + break; > + } > + if (err !=3D 0) > + return err; > + > + /* Empty queue is not an error */ > + for (i =3D 0; i < num_q_msg; i++) { > + dma_mem =3D q_msg[i]->ctx.indirect.payload; > + if (dma_mem !=3D NULL) { > + idpf_free_dma_mem(&adapter->hw, dma_mem); > + rte_free(dma_mem); > + } > + rte_free(q_msg[i]); > + } > + > + return 0; > +} > + > +static int > +idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op, > + uint16_t msg_size, uint8_t *msg) > +{ > + struct idpf_ctlq_msg *ctlq_msg; > + struct idpf_dma_mem *dma_mem; > + int err; > + > + err =3D idpf_vc_clean(adapter); > + if (err !=3D 0) > + goto err; > + > + ctlq_msg =3D rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0); > + if (ctlq_msg =3D=3D NULL) { > + err =3D -ENOMEM; > + goto err; > + } > + > + dma_mem =3D rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0); > + if (dma_mem =3D=3D NULL) { > + err =3D -ENOMEM; > + goto dma_mem_error; > + } > + > + dma_mem->size =3D IDPF_DFLT_MBX_BUF_SIZE; > + idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size); > + if (dma_mem->va =3D=3D NULL) { > + err =3D -ENOMEM; > + goto dma_alloc_error; > + } > + > + memcpy(dma_mem->va, msg, msg_size); > + > + 
ctlq_msg->opcode =3D idpf_mbq_opc_send_msg_to_pf; > + ctlq_msg->func_id =3D 0; > + ctlq_msg->data_len =3D msg_size; > + ctlq_msg->cookie.mbx.chnl_opcode =3D op; > + ctlq_msg->cookie.mbx.chnl_retval =3D VIRTCHNL_STATUS_SUCCESS; > + ctlq_msg->ctx.indirect.payload =3D dma_mem; > + > + err =3D idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg); > + if (err !=3D 0) > + goto send_error; > + > + return 0; > + > +send_error: > + idpf_free_dma_mem(&adapter->hw, dma_mem); > +dma_alloc_error: > + rte_free(dma_mem); > +dma_mem_error: > + rte_free(ctlq_msg); > +err: > + return err; > +} > + > +static enum idpf_vc_result > +idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len, > + uint8_t *buf) > +{ > + struct idpf_hw *hw =3D &adapter->hw; > + struct idpf_ctlq_msg ctlq_msg; > + struct idpf_dma_mem *dma_mem =3D NULL; > + enum idpf_vc_result result =3D IDPF_MSG_NON; > + uint32_t opcode; > + uint16_t pending =3D 1; > + int ret; > + > + ret =3D idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg); > + if (ret !=3D 0) { > + DRV_LOG(DEBUG, "Can't read msg from AQ"); > + if (ret !=3D -ENOMSG) > + result =3D IDPF_MSG_ERR; > + return result; > + } > + > + rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len); > + > + opcode =3D rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode); > + adapter->cmd_retval =3D > rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval); > + > + DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d", > + opcode, adapter->cmd_retval); > + > + if (opcode =3D=3D VIRTCHNL2_OP_EVENT) { > + struct virtchnl2_event *ve =3D ctlq_msg.ctx.indirect.payload- > >va; > + > + result =3D IDPF_MSG_SYS; > + switch (ve->event) { > + case VIRTCHNL2_EVENT_LINK_CHANGE: > + /* TBD */ > + break; > + default: > + DRV_LOG(ERR, "%s: Unknown event %d from CP", > + __func__, ve->event); > + break; > + } > + } else { > + /* async reply msg on command issued by pf previously */ > + result =3D IDPF_MSG_CMD; > + if (opcode !=3D adapter->pend_cmd) { > + DRV_LOG(WARNING, "command 
mismatch, > expect %u, get %u", > + adapter->pend_cmd, opcode); > + result =3D IDPF_MSG_ERR; > + } > + } > + > + if (ctlq_msg.data_len !=3D 0) > + dma_mem =3D ctlq_msg.ctx.indirect.payload; > + else > + pending =3D 0; > + > + ret =3D idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem); > + if (ret !=3D 0 && dma_mem !=3D NULL) > + idpf_free_dma_mem(hw, dma_mem); > + > + return result; > +} > + > +#define MAX_TRY_TIMES 200 > +#define ASQ_DELAY_MS 10 > + > +int > +idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t > buf_len, > + uint8_t *buf) > +{ > + int err =3D 0; > + int i =3D 0; > + int ret; > + > + do { > + ret =3D idpf_read_msg_from_cp(adapter, buf_len, buf); > + if (ret =3D=3D IDPF_MSG_CMD) > + break; > + rte_delay_ms(ASQ_DELAY_MS); > + } while (i++ < MAX_TRY_TIMES); > + if (i >=3D MAX_TRY_TIMES || > + adapter->cmd_retval !=3D VIRTCHNL_STATUS_SUCCESS) { > + err =3D -EBUSY; > + DRV_LOG(ERR, "No response or return failure (%d) for > cmd %d", > + adapter->cmd_retval, ops); > + } > + > + return err; > +} > + > +int > +idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info > *args) > +{ > + int err =3D 0; > + int i =3D 0; > + int ret; > + > + if (atomic_set_cmd(adapter, args->ops)) > + return -EINVAL; > + > + ret =3D idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args- > >in_args); > + if (ret !=3D 0) { > + DRV_LOG(ERR, "fail to send cmd %d", args->ops); > + clear_cmd(adapter); > + return ret; > + } > + > + switch (args->ops) { > + case VIRTCHNL_OP_VERSION: > + case VIRTCHNL2_OP_GET_CAPS: > + case VIRTCHNL2_OP_CREATE_VPORT: > + case VIRTCHNL2_OP_DESTROY_VPORT: > + case VIRTCHNL2_OP_SET_RSS_KEY: > + case VIRTCHNL2_OP_SET_RSS_LUT: > + case VIRTCHNL2_OP_SET_RSS_HASH: > + case VIRTCHNL2_OP_CONFIG_RX_QUEUES: > + case VIRTCHNL2_OP_CONFIG_TX_QUEUES: > + case VIRTCHNL2_OP_ENABLE_QUEUES: > + case VIRTCHNL2_OP_DISABLE_QUEUES: > + case VIRTCHNL2_OP_ENABLE_VPORT: > + case VIRTCHNL2_OP_DISABLE_VPORT: > + case 
VIRTCHNL2_OP_MAP_QUEUE_VECTOR: > + case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR: > + case VIRTCHNL2_OP_ALLOC_VECTORS: > + case VIRTCHNL2_OP_DEALLOC_VECTORS: > + /* for init virtchnl ops, need to poll the response */ > + err =3D idpf_read_one_msg(adapter, args->ops, args->out_size, > args->out_buffer); > + clear_cmd(adapter); > + break; > + case VIRTCHNL2_OP_GET_PTYPE_INFO: > + /* for multuple response message, > + * do not handle the response here. > + */ > + break; > + default: > + /* For other virtchnl ops in running time, > + * wait for the cmd done flag. > + */ > + do { > + if (adapter->pend_cmd =3D=3D VIRTCHNL_OP_UNKNOWN) > + break; > + rte_delay_ms(ASQ_DELAY_MS); > + /* If don't read msg or read sys event, continue */ > + } while (i++ < MAX_TRY_TIMES); > + /* If there's no response is received, clear command */ > + if (i >=3D MAX_TRY_TIMES || > + adapter->cmd_retval !=3D VIRTCHNL_STATUS_SUCCESS) { > + err =3D -EBUSY; > + DRV_LOG(ERR, "No response or return failure (%d) > for cmd %d", > + adapter->cmd_retval, args->ops); > + clear_cmd(adapter); > + } > + break; > + } > + > + return err; > +} > + > +int > +idpf_vc_check_api_version(struct idpf_adapter *adapter) > +{ > + struct virtchnl2_version_info version, *pver; > + struct idpf_cmd_info args; > + int err; > + > + memset(&version, 0, sizeof(struct virtchnl_version_info)); > + version.major =3D VIRTCHNL2_VERSION_MAJOR_2; > + version.minor =3D VIRTCHNL2_VERSION_MINOR_0; > + > + args.ops =3D VIRTCHNL_OP_VERSION; > + args.in_args =3D (uint8_t *)&version; > + args.in_args_size =3D sizeof(version); > + args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D IDPF_DFLT_MBX_BUF_SIZE; > + > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) { > + DRV_LOG(ERR, > + "Failed to execute command of > VIRTCHNL_OP_VERSION"); > + return err; > + } > + > + pver =3D (struct virtchnl2_version_info *)args.out_buffer; > + adapter->virtchnl_version =3D *pver; > + > + if (adapter->virtchnl_version.major !=3D > 
VIRTCHNL2_VERSION_MAJOR_2 || > + adapter->virtchnl_version.minor !=3D > VIRTCHNL2_VERSION_MINOR_0) { > + DRV_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)- > (%u.%u)", > + adapter->virtchnl_version.major, > + adapter->virtchnl_version.minor, > + VIRTCHNL2_VERSION_MAJOR_2, > + VIRTCHNL2_VERSION_MINOR_0); > + return -EINVAL; > + } > + > + return 0; > +} > + > +int > +idpf_vc_get_caps(struct idpf_adapter *adapter) > +{ > + struct virtchnl2_get_capabilities caps_msg; > + struct idpf_cmd_info args; > + int err; > + > + memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities)); > + > + caps_msg.csum_caps =3D > + VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 | > + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP | > + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP | > + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP | > + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP | > + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP | > + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP | > + VIRTCHNL2_CAP_TX_CSUM_GENERIC | > + VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 | > + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP | > + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP | > + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP | > + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP | > + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP | > + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP | > + VIRTCHNL2_CAP_RX_CSUM_GENERIC; > + > + caps_msg.rss_caps =3D > + VIRTCHNL2_CAP_RSS_IPV4_TCP | > + VIRTCHNL2_CAP_RSS_IPV4_UDP | > + VIRTCHNL2_CAP_RSS_IPV4_SCTP | > + VIRTCHNL2_CAP_RSS_IPV4_OTHER | > + VIRTCHNL2_CAP_RSS_IPV6_TCP | > + VIRTCHNL2_CAP_RSS_IPV6_UDP | > + VIRTCHNL2_CAP_RSS_IPV6_SCTP | > + VIRTCHNL2_CAP_RSS_IPV6_OTHER | > + VIRTCHNL2_CAP_RSS_IPV4_AH | > + VIRTCHNL2_CAP_RSS_IPV4_ESP | > + VIRTCHNL2_CAP_RSS_IPV4_AH_ESP | > + VIRTCHNL2_CAP_RSS_IPV6_AH | > + VIRTCHNL2_CAP_RSS_IPV6_ESP | > + VIRTCHNL2_CAP_RSS_IPV6_AH_ESP; > + > + caps_msg.other_caps =3D VIRTCHNL2_CAP_WB_ON_ITR; > + > + args.ops =3D VIRTCHNL2_OP_GET_CAPS; > + args.in_args =3D (uint8_t *)&caps_msg; > + args.in_args_size =3D sizeof(caps_msg); > + args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D 
IDPF_DFLT_MBX_BUF_SIZE; > + > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) { > + DRV_LOG(ERR, > + "Failed to execute command of > VIRTCHNL2_OP_GET_CAPS"); > + return err; > + } > + > + rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg)); > + > + return 0; > +} > + > +int > +idpf_vc_create_vport(struct idpf_vport *vport, > + struct virtchnl2_create_vport *vport_req_info) > +{ > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_create_vport vport_msg; > + struct idpf_cmd_info args; > + int err =3D -1; > + > + memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport)); > + vport_msg.vport_type =3D vport_req_info->vport_type; > + vport_msg.txq_model =3D vport_req_info->txq_model; > + vport_msg.rxq_model =3D vport_req_info->rxq_model; > + vport_msg.num_tx_q =3D vport_req_info->num_tx_q; > + vport_msg.num_tx_complq =3D vport_req_info->num_tx_complq; > + vport_msg.num_rx_q =3D vport_req_info->num_rx_q; > + vport_msg.num_rx_bufq =3D vport_req_info->num_rx_bufq; > + > + memset(&args, 0, sizeof(args)); > + args.ops =3D VIRTCHNL2_OP_CREATE_VPORT; > + args.in_args =3D (uint8_t *)&vport_msg; > + args.in_args_size =3D sizeof(vport_msg); > + args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D IDPF_DFLT_MBX_BUF_SIZE; > + > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) { > + DRV_LOG(ERR, > + "Failed to execute command of > VIRTCHNL2_OP_CREATE_VPORT"); > + return err; > + } > + > + rte_memcpy(vport->vport_info, args.out_buffer, > IDPF_DFLT_MBX_BUF_SIZE); > + return 0; > +} > + > +int > +idpf_vc_destroy_vport(struct idpf_vport *vport) > +{ > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_vport vc_vport; > + struct idpf_cmd_info args; > + int err; > + > + vc_vport.vport_id =3D vport->vport_id; > + > + memset(&args, 0, sizeof(args)); > + args.ops =3D VIRTCHNL2_OP_DESTROY_VPORT; > + args.in_args =3D (uint8_t *)&vc_vport; > + args.in_args_size =3D sizeof(vc_vport); > + 
args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D IDPF_DFLT_MBX_BUF_SIZE; > + > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) > + DRV_LOG(ERR, "Failed to execute command of > VIRTCHNL2_OP_DESTROY_VPORT"); > + > + return err; > +} > + > +int > +idpf_vc_set_rss_key(struct idpf_vport *vport) > +{ > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_rss_key *rss_key; > + struct idpf_cmd_info args; > + int len, err; > + > + len =3D sizeof(*rss_key) + sizeof(rss_key->key[0]) * > + (vport->rss_key_size - 1); > + rss_key =3D rte_zmalloc("rss_key", len, 0); > + if (rss_key =3D=3D NULL) > + return -ENOMEM; > + > + rss_key->vport_id =3D vport->vport_id; > + rss_key->key_len =3D vport->rss_key_size; > + rte_memcpy(rss_key->key, vport->rss_key, > + sizeof(rss_key->key[0]) * vport->rss_key_size); > + > + memset(&args, 0, sizeof(args)); > + args.ops =3D VIRTCHNL2_OP_SET_RSS_KEY; > + args.in_args =3D (uint8_t *)rss_key; > + args.in_args_size =3D len; > + args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D IDPF_DFLT_MBX_BUF_SIZE; > + > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) > + DRV_LOG(ERR, "Failed to execute command of > VIRTCHNL2_OP_SET_RSS_KEY"); > + > + rte_free(rss_key); > + return err; > +} > + > +int > +idpf_vc_set_rss_lut(struct idpf_vport *vport) > +{ > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_rss_lut *rss_lut; > + struct idpf_cmd_info args; > + int len, err; > + > + len =3D sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) * > + (vport->rss_lut_size - 1); > + rss_lut =3D rte_zmalloc("rss_lut", len, 0); > + if (rss_lut =3D=3D NULL) > + return -ENOMEM; > + > + rss_lut->vport_id =3D vport->vport_id; > + rss_lut->lut_entries =3D vport->rss_lut_size; > + rte_memcpy(rss_lut->lut, vport->rss_lut, > + sizeof(rss_lut->lut[0]) * vport->rss_lut_size); > + > + memset(&args, 0, sizeof(args)); > + args.ops =3D VIRTCHNL2_OP_SET_RSS_LUT; > + args.in_args =3D (uint8_t 
*)rss_lut; > + args.in_args_size =3D len; > + args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D IDPF_DFLT_MBX_BUF_SIZE; > + > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) > + DRV_LOG(ERR, "Failed to execute command of > VIRTCHNL2_OP_SET_RSS_LUT"); > + > + rte_free(rss_lut); > + return err; > +} > + > +int > +idpf_vc_set_rss_hash(struct idpf_vport *vport) > +{ > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_rss_hash rss_hash; > + struct idpf_cmd_info args; > + int err; > + > + memset(&rss_hash, 0, sizeof(rss_hash)); > + rss_hash.ptype_groups =3D vport->rss_hf; > + rss_hash.vport_id =3D vport->vport_id; > + > + memset(&args, 0, sizeof(args)); > + args.ops =3D VIRTCHNL2_OP_SET_RSS_HASH; > + args.in_args =3D (uint8_t *)&rss_hash; > + args.in_args_size =3D sizeof(rss_hash); > + args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D IDPF_DFLT_MBX_BUF_SIZE; > + > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) > + DRV_LOG(ERR, "Failed to execute command of > OP_SET_RSS_HASH"); > + > + return err; > +} > + > +int > +idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, > bool map) > +{ > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_queue_vector_maps *map_info; > + struct virtchnl2_queue_vector *vecmap; > + struct idpf_cmd_info args; > + int len, i, err =3D 0; > + > + len =3D sizeof(struct virtchnl2_queue_vector_maps) + > + (nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector); > + > + map_info =3D rte_zmalloc("map_info", len, 0); > + if (map_info =3D=3D NULL) > + return -ENOMEM; > + > + map_info->vport_id =3D vport->vport_id; > + map_info->num_qv_maps =3D nb_rxq; > + for (i =3D 0; i < nb_rxq; i++) { > + vecmap =3D &map_info->qv_maps[i]; > + vecmap->queue_id =3D vport->qv_map[i].queue_id; > + vecmap->vector_id =3D vport->qv_map[i].vector_id; > + vecmap->itr_idx =3D VIRTCHNL2_ITR_IDX_0; > + vecmap->queue_type =3D VIRTCHNL2_QUEUE_TYPE_RX; > + } > + > 
+ args.ops =3D map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR : > + VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR; > + args.in_args =3D (uint8_t *)map_info; > + args.in_args_size =3D len; > + args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D IDPF_DFLT_MBX_BUF_SIZE; > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) > + DRV_LOG(ERR, "Failed to execute command of > VIRTCHNL2_OP_%s_QUEUE_VECTOR", > + map ? "MAP" : "UNMAP"); > + > + rte_free(map_info); > + return err; > +} > + > +int > +idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors) > +{ > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_alloc_vectors *alloc_vec; > + struct idpf_cmd_info args; > + int err, len; > + > + len =3D sizeof(struct virtchnl2_alloc_vectors) + > + (num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk); > + alloc_vec =3D rte_zmalloc("alloc_vec", len, 0); > + if (alloc_vec =3D=3D NULL) > + return -ENOMEM; > + > + alloc_vec->num_vectors =3D num_vectors; > + > + args.ops =3D VIRTCHNL2_OP_ALLOC_VECTORS; > + args.in_args =3D (uint8_t *)alloc_vec; > + args.in_args_size =3D sizeof(struct virtchnl2_alloc_vectors); > + args.out_buffer =3D adapter->mbx_resp; > + args.out_size =3D IDPF_DFLT_MBX_BUF_SIZE; > + err =3D idpf_execute_vc_cmd(adapter, &args); > + if (err !=3D 0) > + DRV_LOG(ERR, "Failed to execute command > VIRTCHNL2_OP_ALLOC_VECTORS"); > + > + if (vport->recv_vectors =3D=3D NULL) { > + vport->recv_vectors =3D rte_zmalloc("recv_vectors", len, 0); > + if (vport->recv_vectors =3D=3D NULL) { > + rte_free(alloc_vec); > + return -ENOMEM; > + } > + } > + > + rte_memcpy(vport->recv_vectors, args.out_buffer, len); > + rte_free(alloc_vec); > + return err; > +} > + > +int > +idpf_vc_dealloc_vectors(struct idpf_vport *vport) > +{ > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_alloc_vectors *alloc_vec; > + struct virtchnl2_vector_chunks *vcs; > + struct idpf_cmd_info args; > + int err, len; > + > + alloc_vec =3D 
> vport->recv_vectors;
> + vcs = &alloc_vec->vchunks;
> +
> + len = sizeof(struct virtchnl2_vector_chunks) +
> + (vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
> +
> + args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
> + args.in_args = (uint8_t *)vcs;
> + args.in_args_size = len;
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err != 0)
> + DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
> +
> + return err;
> +}
> +
> +static int
> +idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
> + uint32_t type, bool on)
> +{
> + struct idpf_adapter *adapter = vport->adapter;
> + struct virtchnl2_del_ena_dis_queues *queue_select;
> + struct virtchnl2_queue_chunk *queue_chunk;
> + struct idpf_cmd_info args;
> + int err, len;
> +
> + len = sizeof(struct virtchnl2_del_ena_dis_queues);
> + queue_select = rte_zmalloc("queue_select", len, 0);
> + if (queue_select == NULL)
> + return -ENOMEM;
> +
> + queue_chunk = queue_select->chunks.chunks;
> + queue_select->chunks.num_chunks = 1;
> + queue_select->vport_id = vport->vport_id;
> +
> + queue_chunk->type = type;
> + queue_chunk->start_queue_id = qid;
> + queue_chunk->num_queues = 1;
> +
> + args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
> + VIRTCHNL2_OP_DISABLE_QUEUES;
> + args.in_args = (uint8_t *)queue_select;
> + args.in_args_size = len;
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err != 0)
> + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
> + on ? "ENABLE" : "DISABLE");
> +
> + rte_free(queue_select);
> + return err;
> +}
> +
> +int
> +idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
> + bool rx, bool on)
> +{
> + uint32_t type;
> + int err, queue_id;
> +
> + /* switch txq/rxq */
> + type = rx ?
> VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
> +
> + if (type == VIRTCHNL2_QUEUE_TYPE_RX)
> + queue_id = vport->chunks_info.rx_start_qid + qid;
> + else
> + queue_id = vport->chunks_info.tx_start_qid + qid;
> + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> + if (err != 0)
> + return err;
> +
> + /* switch tx completion queue */
> + if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> + type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> + queue_id = vport->chunks_info.tx_compl_start_qid + qid;
> + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> + if (err != 0)
> + return err;
> + }
> +
> + /* switch rx buffer queue */
> + if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> + type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> + queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
> + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> + if (err != 0)
> + return err;
> + queue_id++;
> + err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> + if (err != 0)
> + return err;
> + }
> +
> + return err;
> +}
> +
> +#define IDPF_RXTX_QUEUE_CHUNKS_NUM 2
> +int
> +idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
> +{
> + struct idpf_adapter *adapter = vport->adapter;
> + struct virtchnl2_del_ena_dis_queues *queue_select;
> + struct virtchnl2_queue_chunk *queue_chunk;
> + uint32_t type;
> + struct idpf_cmd_info args;
> + uint16_t num_chunks;
> + int err, len;
> +
> + num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
> + if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
> + num_chunks++;
> + if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
> + num_chunks++;
> +
> + len = sizeof(struct virtchnl2_del_ena_dis_queues) +
> + sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
> + queue_select = rte_zmalloc("queue_select", len, 0);
> + if (queue_select == NULL)
> + return -ENOMEM;
> +
> + queue_chunk = queue_select->chunks.chunks;
> + queue_select->chunks.num_chunks = num_chunks;
> + queue_select->vport_id = vport->vport_id;
> +
> + type = VIRTCHNL2_QUEUE_TYPE_RX;
> + queue_chunk[type].type = type;
> + queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
> + queue_chunk[type].num_queues = vport->num_rx_q;
> +
> + type = VIRTCHNL2_QUEUE_TYPE_TX;
> + queue_chunk[type].type = type;
> + queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
> + queue_chunk[type].num_queues = vport->num_tx_q;
> +
> + if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> + type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> + queue_chunk[type].type = type;
> + queue_chunk[type].start_queue_id =
> + vport->chunks_info.rx_buf_start_qid;
> + queue_chunk[type].num_queues = vport->num_rx_bufq;
> + }
> +
> + if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> + type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> + queue_chunk[type].type = type;
> + queue_chunk[type].start_queue_id =
> + vport->chunks_info.tx_compl_start_qid;
> + queue_chunk[type].num_queues = vport->num_tx_complq;
> + }
> +
> + args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
> + VIRTCHNL2_OP_DISABLE_QUEUES;
> + args.in_args = (uint8_t *)queue_select;
> + args.in_args_size = len;
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err != 0)
> + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
> + enable ? "ENABLE" : "DISABLE");
> +
> + rte_free(queue_select);
> + return err;
> +}
> +
> +int
> +idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
> +{
> + struct idpf_adapter *adapter = vport->adapter;
> + struct virtchnl2_vport vc_vport;
> + struct idpf_cmd_info args;
> + int err;
> +
> + vc_vport.vport_id = vport->vport_id;
> + args.ops = enable ?
> VIRTCHNL2_OP_ENABLE_VPORT :
> + VIRTCHNL2_OP_DISABLE_VPORT;
> + args.in_args = (uint8_t *)&vc_vport;
> + args.in_args_size = sizeof(vc_vport);
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err != 0) {
> + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
> + enable ? "ENABLE" : "DISABLE");
> + }
> +
> + return err;
> +}
> +
> +int
> +idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
> +{
> + struct virtchnl2_get_ptype_info *ptype_info;
> + struct idpf_cmd_info args;
> + int len, err;
> +
> + len = sizeof(struct virtchnl2_get_ptype_info);
> + ptype_info = rte_zmalloc("ptype_info", len, 0);
> + if (ptype_info == NULL)
> + return -ENOMEM;
> +
> + ptype_info->start_ptype_id = 0;
> + ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
> + args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
> + args.in_args = (uint8_t *)ptype_info;
> + args.in_args_size = len;
> +
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err != 0)
> + DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
> +
> + rte_free(ptype_info);
> + return err;
> +}
> diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
> new file mode 100644
> index 0000000000..bbc66d63c4
> --- /dev/null
> +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _IDPF_COMMON_VIRTCHNL_H_
> +#define _IDPF_COMMON_VIRTCHNL_H_
> +
> +#include
> +
> +__rte_internal
> +int idpf_vc_check_api_version(struct idpf_adapter *adapter);
> +__rte_internal
> +int idpf_vc_get_caps(struct idpf_adapter *adapter);
> +__rte_internal
> +int idpf_vc_create_vport(struct idpf_vport *vport,
> + struct virtchnl2_create_vport *vport_info);
> +__rte_internal
> +int idpf_vc_destroy_vport(struct idpf_vport *vport);
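Stepping back to the idpf_switch_queue() flow quoted earlier: the relative rxq/txq index is translated into absolute hardware queue ids via `chunks_info`, and in the split queue model each rxq additionally owns two buffer queues at `rx_buf_start_qid + 2 * qid` and the next id. A hedged standalone sketch of that arithmetic (the `demo_*` names and values are invented for illustration, not the real `chunks_info` layout):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for vport->chunks_info; the field names mirror
 * the quoted patch but the struct itself is made up for the example. */
struct demo_chunks_info {
	uint32_t rx_start_qid;
	uint32_t tx_start_qid;
	uint32_t rx_buf_start_qid;
};

/* Relative data-queue index -> absolute queue id, as computed at the
 * top of the quoted idpf_switch_queue(). */
static uint32_t demo_data_qid(const struct demo_chunks_info *ci,
			      uint16_t qid, bool rx)
{
	return (rx ? ci->rx_start_qid : ci->tx_start_qid) + qid;
}

/* In the split queue model each rxq owns two buffer queues; the first
 * sits at rx_buf_start_qid + 2 * qid and the second at the next id --
 * exactly the queue_id++ step in the quoted code. */
static uint32_t demo_first_bufq_qid(const struct demo_chunks_info *ci,
				    uint16_t qid)
{
	return ci->rx_buf_start_qid + 2 * qid;
}
```

With, say, `rx_start_qid = 64` and `rx_buf_start_qid = 128`, rxq 3 maps to data queue 67 and buffer queues 134/135, which is why the rx path issues three enable/disable messages per queue in split mode.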
> +__rte_internal
> +int idpf_vc_set_rss_key(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_vc_set_rss_lut(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_vc_set_rss_hash(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
> + bool rx, bool on);
> +__rte_internal
> +int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
> +__rte_internal
> +int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
> +__rte_internal
> +int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
> + uint16_t nb_rxq, bool map);
> +__rte_internal
> +int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
> +__rte_internal
> +int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
> +__rte_internal
> +int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
> +__rte_internal
> +int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
> + uint16_t buf_len, uint8_t *buf);
> +__rte_internal
> +int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
> + struct idpf_cmd_info *args);
> +
> +#endif /* _IDPF_COMMON_VIRTCHNL_H_ */
> diff --git a/drivers/common/idpf/meson.build b/drivers/common/idpf/meson.build
> index 77d997b4a7..d1578641ba 100644
> --- a/drivers/common/idpf/meson.build
> +++ b/drivers/common/idpf/meson.build
> @@ -1,4 +1,9 @@
> # SPDX-License-Identifier: BSD-3-Clause
> # Copyright(c) 2022 Intel Corporation
>
> +sources = files(
> + 'idpf_common_device.c',
> + 'idpf_common_virtchnl.c',
> +)
> +
> subdir('base')
> diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
> index bfb246c752..a2b8780780 100644
> --- a/drivers/common/idpf/version.map
> +++ b/drivers/common/idpf/version.map
> @@ -1,12 +1,28 @@
> INTERNAL {
> global:
>
> + idpf_ctlq_clean_sq;
> idpf_ctlq_deinit;
> idpf_ctlq_init;
> - idpf_ctlq_clean_sq;
> + idpf_ctlq_post_rx_buffs;
> idpf_ctlq_recv;
> idpf_ctlq_send;
> - idpf_ctlq_post_rx_buffs;
> + idpf_execute_vc_cmd;
> + idpf_read_one_msg;
> + idpf_switch_queue;

I think all the APIs exposed from idpf_common_virtchnl.h can follow the same naming rule "idpf_vc_*".
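One more observation on idpf_vc_ena_dis_queues() above: the chunk array is indexed directly by the queue-type constant (`queue_chunk[type]`), which silently relies on those constants being small consecutive integers. A minimal sketch of the pattern, with invented `DEMO_QT_*` ids standing in for the real `VIRTCHNL2_QUEUE_TYPE_*` values from virtchnl2.h:

```c
#include <stdint.h>

/* Illustrative type ids only; the real VIRTCHNL2_QUEUE_TYPE_* values
 * must be checked against virtchnl2.h. Indexing the chunk array by the
 * type id only works while the ids stay small and consecutive. */
enum demo_queue_type {
	DEMO_QT_TX = 0,
	DEMO_QT_RX = 1,
	DEMO_QT_TX_COMPLETION = 2,
	DEMO_QT_RX_BUFFER = 3,
};

struct demo_queue_chunk {
	uint32_t type;
	uint32_t start_queue_id;
	uint32_t num_queues;
};

/* Mirrors the repeated queue_chunk[type] assignments in the quoted
 * idpf_vc_ena_dis_queues(): one chunk per queue type, addressed by the
 * type id itself. */
static void demo_fill_chunk(struct demo_queue_chunk *chunks,
			    enum demo_queue_type type,
			    uint32_t start_qid, uint32_t nb)
{
	chunks[type].type = (uint32_t)type;
	chunks[type].start_queue_id = start_qid;
	chunks[type].num_queues = nb;
}
```

If the type ids ever stopped being dense (or a chunk were skipped while `num_chunks` still counted it), the array would be sparse and the message length computed from `num_chunks` would no longer match the populated entries, so a defensive version might keep its own running chunk index instead.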