From: "Wu, Wenjun1"
To: "Guo, Junfeng", "Zhang, Qi Z", "Wu, Jingjing", "Xing, Beilei"
Cc: "dev@dpdk.org", "Wang, Xiao W", "Guo, Junfeng", "Li, Xiaoyun"
Subject: RE: [PATCH v2 03/14] net/idpf: add support for device initialization
Date: Mon, 10 Oct 2022 07:48:32 +0000
References: <20220803113104.1184059-1-junfeng.guo@intel.com>
 <20220905105828.3190335-1-junfeng.guo@intel.com>
 <20220905105828.3190335-4-junfeng.guo@intel.com>
In-Reply-To: <20220905105828.3190335-4-junfeng.guo@intel.com>
X-MS-Exchange-CrossTenant-fromentityheader: Hosted X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d X-MS-Exchange-CrossTenant-mailboxtype: HOSTED X-MS-Exchange-CrossTenant-userprincipalname: Y3s8z74ArYh1phhDcAZ6riBLFQ2f0uuU+YeHIgrHbFvukSTANhkzoBg5eIVYtEqoa8wCKd4baLk4VuXbb2cEwQ== X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN0PR11MB5962 X-OriginatorOrg: intel.com X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org > -----Original Message----- > From: Junfeng Guo > Sent: Monday, September 5, 2022 6:58 PM > To: Zhang, Qi Z ; Wu, Jingjing > ; Xing, Beilei > Cc: dev@dpdk.org; Wang, Xiao W ; Guo, Junfeng > ; Li, Xiaoyun > Subject: [PATCH v2 03/14] net/idpf: add support for device initialization >=20 > Support device init and the following dev ops: > - dev_configure > - dev_start > - dev_stop > - dev_close >=20 > Signed-off-by: Beilei Xing > Signed-off-by: Xiaoyun Li > Signed-off-by: Xiao Wang > Signed-off-by: Junfeng Guo > --- > drivers/net/idpf/idpf_ethdev.c | 810 > +++++++++++++++++++++++++++++++++ drivers/net/idpf/idpf_ethdev.h | > 229 ++++++++++ drivers/net/idpf/idpf_vchnl.c | 495 > ++++++++++++++++++++ > drivers/net/idpf/meson.build | 18 + > drivers/net/idpf/version.map | 3 + > drivers/net/meson.build | 1 + > 6 files changed, 1556 insertions(+) > create mode 100644 drivers/net/idpf/idpf_ethdev.c create mode 100644 > drivers/net/idpf/idpf_ethdev.h create mode 100644 > drivers/net/idpf/idpf_vchnl.c create mode 100644 > drivers/net/idpf/meson.build create mode 100644 > drivers/net/idpf/version.map >=20 > diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethde= v.c > new file mode 100644 index 0000000000..f0452de09e > --- /dev/null > +++ b/drivers/net/idpf/idpf_ethdev.c > @@ -0,0 +1,810 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2022 Intel 
Corporation > + */ > + > +#include > +#include > +#include > +#include > +#include > +#include > + > +#include "idpf_ethdev.h" > + > +#define IDPF_VPORT "vport" > + > +struct idpf_adapter_list adapter_list; > +bool adapter_list_init; > + > +uint64_t idpf_timestamp_dynflag; > + > +static const char * const idpf_valid_args[] =3D { > + IDPF_VPORT, > + NULL > +}; > + > +static int idpf_dev_configure(struct rte_eth_dev *dev); static int > +idpf_dev_start(struct rte_eth_dev *dev); static int > +idpf_dev_stop(struct rte_eth_dev *dev); static int > +idpf_dev_close(struct rte_eth_dev *dev); > + > +static const struct eth_dev_ops idpf_eth_dev_ops =3D { > + .dev_configure =3D idpf_dev_configure, > + .dev_start =3D idpf_dev_start, > + .dev_stop =3D idpf_dev_stop, > + .dev_close =3D idpf_dev_close, > +}; > + > +static int > +idpf_init_vport_req_info(struct rte_eth_dev *dev) { > + struct idpf_vport *vport =3D dev->data->dev_private; > + struct idpf_adapter *adapter =3D vport->adapter; > + struct virtchnl2_create_vport *vport_info; > + uint16_t idx =3D adapter->next_vport_idx; > + > + if (!adapter->vport_req_info[idx]) { > + adapter->vport_req_info[idx] =3D rte_zmalloc(NULL, > + sizeof(struct virtchnl2_create_vport), 0); > + if (!adapter->vport_req_info[idx]) { > + PMD_INIT_LOG(ERR, "Failed to allocate > vport_req_info"); > + return -1; > + } > + } > + > + vport_info =3D > + (struct virtchnl2_create_vport *)adapter- > >vport_req_info[idx]; > + > + vport_info->vport_type =3D > rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT); > + if (!adapter->txq_model) { > + vport_info->txq_model =3D > + rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT); > + vport_info->num_tx_q =3D IDPF_DEFAULT_TXQ_NUM; > + vport_info->num_tx_complq =3D > + IDPF_DEFAULT_TXQ_NUM * > IDPF_TX_COMPLQ_PER_GRP; > + } else { > + vport_info->txq_model =3D > + > rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE); > + vport_info->num_tx_q =3D IDPF_DEFAULT_TXQ_NUM; > + vport_info->num_tx_complq =3D 0; > + } > + if 
(!adapter->rxq_model) { > + vport_info->rxq_model =3D > + rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT); > + vport_info->num_rx_q =3D IDPF_DEFAULT_RXQ_NUM; > + vport_info->num_rx_bufq =3D > + IDPF_DEFAULT_RXQ_NUM * > IDPF_RX_BUFQ_PER_GRP; > + } else { > + vport_info->rxq_model =3D > + > rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE); > + vport_info->num_rx_q =3D IDPF_DEFAULT_RXQ_NUM; > + vport_info->num_rx_bufq =3D 0; > + } > + > + return 0; > +} > + > +static uint16_t > +idpf_get_next_vport_idx(struct idpf_vport **vports, uint16_t > max_vport_nb, > + uint16_t cur_vport_idx) > +{ > + uint16_t vport_idx; > + uint16_t i; > + > + if (cur_vport_idx < max_vport_nb && !vports[cur_vport_idx + 1]) { > + vport_idx =3D cur_vport_idx + 1; > + return vport_idx; > + } > + > + for (i =3D 0; i < max_vport_nb; i++) { > + if (!vports[i]) > + break; > + } > + > + if (i =3D=3D max_vport_nb) > + vport_idx =3D IDPF_INVALID_VPORT_IDX; > + else > + vport_idx =3D i; > + > + return vport_idx; > +} > + > +static uint16_t > +idpf_parse_devarg_id(char *name) > +{ > + uint16_t val; > + char *p; > + > + p =3D strstr(name, "vport_"); > + p +=3D sizeof("vport_") - 1; > + > + val =3D strtoul(p, NULL, 10); > + > + return val; > +} > + > +#ifndef IDPF_RSS_KEY_LEN > +#define IDPF_RSS_KEY_LEN 52 > +#endif > + > +static int > +idpf_init_vport(struct rte_eth_dev *dev) { > + struct idpf_vport *vport =3D dev->data->dev_private; > + struct idpf_adapter *adapter =3D vport->adapter; > + uint16_t idx =3D adapter->next_vport_idx; > + struct virtchnl2_create_vport *vport_info =3D > + (struct virtchnl2_create_vport *)adapter- > >vport_recv_info[idx]; > + int i; > + > + vport->vport_id =3D vport_info->vport_id; > + vport->txq_model =3D vport_info->txq_model; > + vport->rxq_model =3D vport_info->rxq_model; > + vport->num_tx_q =3D vport_info->num_tx_q; > + vport->num_tx_complq =3D vport_info->num_tx_complq; > + vport->num_rx_q =3D vport_info->num_rx_q; > + vport->num_rx_bufq =3D vport_info->num_rx_bufq; > + 
vport->max_mtu =3D vport_info->max_mtu; > + rte_memcpy(vport->default_mac_addr, > + vport_info->default_mac_addr, ETH_ALEN); > + vport->rss_algorithm =3D vport_info->rss_algorithm; > + vport->rss_key_size =3D RTE_MIN(IDPF_RSS_KEY_LEN, > + vport_info->rss_key_size); > + vport->rss_lut_size =3D vport_info->rss_lut_size; > + vport->sw_idx =3D idx; > + > + for (i =3D 0; i < vport_info->chunks.num_chunks; i++) { > + if (vport_info->chunks.chunks[i].type =3D=3D > + VIRTCHNL2_QUEUE_TYPE_TX) { > + vport->chunks_info.tx_start_qid =3D > + vport_info->chunks.chunks[i].start_queue_id; > + vport->chunks_info.tx_qtail_start =3D > + vport_info->chunks.chunks[i].qtail_reg_start; > + vport->chunks_info.tx_qtail_spacing =3D > + vport_info- > >chunks.chunks[i].qtail_reg_spacing; > + } else if (vport_info->chunks.chunks[i].type =3D=3D > + VIRTCHNL2_QUEUE_TYPE_RX) { > + vport->chunks_info.rx_start_qid =3D > + vport_info->chunks.chunks[i].start_queue_id; > + vport->chunks_info.rx_qtail_start =3D > + vport_info->chunks.chunks[i].qtail_reg_start; > + vport->chunks_info.rx_qtail_spacing =3D > + vport_info- > >chunks.chunks[i].qtail_reg_spacing; > + } else if (vport_info->chunks.chunks[i].type =3D=3D > + VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION) { > + vport->chunks_info.tx_compl_start_qid =3D > + vport_info->chunks.chunks[i].start_queue_id; > + vport->chunks_info.tx_compl_qtail_start =3D > + vport_info->chunks.chunks[i].qtail_reg_start; > + vport->chunks_info.tx_compl_qtail_spacing =3D > + vport_info- > >chunks.chunks[i].qtail_reg_spacing; > + } else if (vport_info->chunks.chunks[i].type =3D=3D > + VIRTCHNL2_QUEUE_TYPE_RX_BUFFER) { > + vport->chunks_info.rx_buf_start_qid =3D > + vport_info->chunks.chunks[i].start_queue_id; > + vport->chunks_info.rx_buf_qtail_start =3D > + vport_info->chunks.chunks[i].qtail_reg_start; > + vport->chunks_info.rx_buf_qtail_spacing =3D > + vport_info- > >chunks.chunks[i].qtail_reg_spacing; > + } > + } > + > + adapter->vports[idx] =3D vport; > + adapter->next_vport_idx 
=3D idpf_get_next_vport_idx(adapter->vports, > + adapter->max_vport_nb, > idx); > + if (adapter->next_vport_idx =3D=3D IDPF_INVALID_VPORT_IDX) { > + PMD_INIT_LOG(ERR, "Failed to get next vport id"); > + return -1; > + } > + > + vport->devarg_id =3D idpf_parse_devarg_id(dev->data->name); > + vport->dev_data =3D dev->data; > + > + return 0; > +} > + > +static int > +idpf_dev_configure(__rte_unused struct rte_eth_dev *dev) { > + return 0; > +} > + > +static int > +idpf_dev_start(struct rte_eth_dev *dev) { > + struct idpf_vport *vport =3D dev->data->dev_private; > + > + PMD_INIT_FUNC_TRACE(); > + > + vport->stopped =3D 0; > + > + if (idpf_ena_dis_vport(vport, true)) { > + PMD_DRV_LOG(ERR, "Failed to enable vport"); > + goto err_vport; > + } > + > + return 0; > + > +err_vport: > + return -1; > +} > + > +static int > +idpf_dev_stop(struct rte_eth_dev *dev) > +{ > + struct idpf_vport *vport =3D dev->data->dev_private; > + > + PMD_INIT_FUNC_TRACE(); > + > + if (vport->stopped =3D=3D 1) > + return 0; > + > + if (idpf_ena_dis_vport(vport, false)) > + PMD_DRV_LOG(ERR, "disable vport failed"); > + > + vport->stopped =3D 1; > + dev->data->dev_started =3D 0; > + > + return 0; > +} > + > +static int > +idpf_dev_close(struct rte_eth_dev *dev) { > + struct idpf_vport *vport =3D dev->data->dev_private; > + struct idpf_adapter *adapter =3D vport->adapter; > + > + if (rte_eal_process_type() !=3D RTE_PROC_PRIMARY) > + return 0; > + > + idpf_dev_stop(dev); > + idpf_destroy_vport(vport); > + > + adapter->cur_vports &=3D ~BIT(vport->devarg_id); > + > + rte_free(vport); > + dev->data->dev_private =3D NULL; > + > + return 0; > +} > + > +static int > +insert_value(struct idpf_adapter *adapter, uint16_t id) { > + uint16_t i; > + > + for (i =3D 0; i < adapter->req_vport_nb; i++) { > + if (adapter->req_vports[i] =3D=3D id) > + return 0; > + } > + > + if (adapter->req_vport_nb >=3D RTE_DIM(adapter->req_vports)) { > + PMD_INIT_LOG(ERR, "Total vport number can't be > %d", > + IDPF_MAX_VPORT_NUM); 
> + return -1; > + } > + > + adapter->req_vports[adapter->req_vport_nb] =3D id; > + adapter->req_vport_nb++; > + > + return 0; > +} > + > +static const char * > +parse_range(const char *value, struct idpf_adapter *adapter) { > + uint16_t lo, hi, i; > + int n =3D 0; > + int result; > + const char *pos =3D value; > + > + result =3D sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n); > + if (result =3D=3D 1) { > + if (lo >=3D IDPF_MAX_VPORT_NUM) > + return NULL; > + if (insert_value(adapter, lo)) > + return NULL; > + } else if (result =3D=3D 2) { > + if (lo > hi || hi >=3D IDPF_MAX_VPORT_NUM) > + return NULL; > + for (i =3D lo; i <=3D hi; i++) { > + if (insert_value(adapter, i)) > + return NULL; > + } > + } else { > + return NULL; > + } > + > + return pos + n; > +} > + > +static int > +parse_vport(const char *key, const char *value, void *args) { > + struct idpf_adapter *adapter =3D (struct idpf_adapter *)args; > + const char *pos =3D value; > + int i; > + > + adapter->req_vport_nb =3D 0; > + > + if (*pos =3D=3D '[') > + pos++; > + > + while (1) { > + pos =3D parse_range(pos, adapter); > + if (pos =3D=3D NULL) { > + PMD_INIT_LOG(ERR, "invalid value:\"%s\" for > key:\"%s\", ", > + value, key); > + return -1; > + } > + if (*pos !=3D ',') > + break; > + pos++; > + } > + > + if (*value =3D=3D '[' && *pos !=3D ']') { > + PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ", > + value, key); > + return -1; > + } > + > + if (adapter->cur_vport_nb + adapter->req_vport_nb > > + IDPF_MAX_VPORT_NUM) { > + PMD_INIT_LOG(ERR, "Total vport number can't be > %d", > + IDPF_MAX_VPORT_NUM); > + return -1; > + } > + > + for (i =3D 0; i < adapter->req_vport_nb; i++) { > + if (!(adapter->cur_vports & BIT(adapter->req_vports[i]))) { > + adapter->cur_vports |=3D BIT(adapter->req_vports[i]); > + adapter->cur_vport_nb++; > + } else { > + PMD_INIT_LOG(ERR, "Vport %d has been created", > adapter->req_vports[i]); > + return -1; > + } > + } > + > + return 0; > +} > + > +static int > 
+idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter > +*adapter) { > + struct rte_devargs *devargs =3D pci_dev->device.devargs; > + struct rte_kvargs *kvlist; > + int ret; > + > + if (!devargs) > + return 0; > + > + kvlist =3D rte_kvargs_parse(devargs->args, idpf_valid_args); > + if (!kvlist) { > + PMD_INIT_LOG(ERR, "invalid kvargs key"); > + return -EINVAL; > + } > + > + ret =3D rte_kvargs_process(kvlist, IDPF_VPORT, &parse_vport, > + adapter); > + > + rte_kvargs_free(kvlist); > + return ret; > +} > + > +static void > +idpf_reset_pf(struct iecm_hw *hw) > +{ > + uint32_t reg; > + > + reg =3D IECM_READ_REG(hw, PFGEN_CTRL); > + IECM_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR)); } > + > +#define IDPF_RESET_WAIT_CNT 100 > +static int > +idpf_check_pf_reset_done(struct iecm_hw *hw) { > + uint32_t reg; > + int i; > + > + for (i =3D 0; i < IDPF_RESET_WAIT_CNT; i++) { > + reg =3D IECM_READ_REG(hw, PFGEN_RSTAT); > + if (reg !=3D 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M)) > + return 0; > + rte_delay_ms(1000); > + } > + > + PMD_INIT_LOG(ERR, "IDPF reset timeout"); > + return -EBUSY; > +} > + > +#define CTLQ_NUM 2 > +static int > +idpf_init_mbx(struct iecm_hw *hw) > +{ > + struct iecm_ctlq_create_info ctlq_info[CTLQ_NUM] =3D { > + { > + .type =3D IECM_CTLQ_TYPE_MAILBOX_TX, > + .id =3D IDPF_CTLQ_ID, > + .len =3D IDPF_CTLQ_LEN, > + .buf_size =3D IDPF_DFLT_MBX_BUF_SIZE, > + .reg =3D { > + .head =3D PF_FW_ATQH, > + .tail =3D PF_FW_ATQT, > + .len =3D PF_FW_ATQLEN, > + .bah =3D PF_FW_ATQBAH, > + .bal =3D PF_FW_ATQBAL, > + .len_mask =3D PF_FW_ATQLEN_ATQLEN_M, > + .len_ena_mask =3D > PF_FW_ATQLEN_ATQENABLE_M, > + .head_mask =3D PF_FW_ATQH_ATQH_M, > + } > + }, > + { > + .type =3D IECM_CTLQ_TYPE_MAILBOX_RX, > + .id =3D IDPF_CTLQ_ID, > + .len =3D IDPF_CTLQ_LEN, > + .buf_size =3D IDPF_DFLT_MBX_BUF_SIZE, > + .reg =3D { > + .head =3D PF_FW_ARQH, > + .tail =3D PF_FW_ARQT, > + .len =3D PF_FW_ARQLEN, > + .bah =3D PF_FW_ARQBAH, > + .bal =3D PF_FW_ARQBAL, > + 
.len_mask =3D PF_FW_ARQLEN_ARQLEN_M, > + .len_ena_mask =3D > PF_FW_ARQLEN_ARQENABLE_M, > + .head_mask =3D PF_FW_ARQH_ARQH_M, > + } > + } > + }; > + struct iecm_ctlq_info *ctlq; > + int ret; > + > + ret =3D iecm_ctlq_init(hw, CTLQ_NUM, ctlq_info); > + if (ret) > + return ret; > + > + LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head, > + struct iecm_ctlq_info, cq_list) { > + if (ctlq->q_id =3D=3D IDPF_CTLQ_ID && ctlq->cq_type =3D=3D > IECM_CTLQ_TYPE_MAILBOX_TX) > + hw->asq =3D ctlq; > + if (ctlq->q_id =3D=3D IDPF_CTLQ_ID && ctlq->cq_type =3D=3D > IECM_CTLQ_TYPE_MAILBOX_RX) > + hw->arq =3D ctlq; > + } > + > + if (!hw->asq || !hw->arq) { > + iecm_ctlq_deinit(hw); > + ret =3D -ENOENT; > + } > + > + return ret; > +} > + > +static int > +idpf_adapter_init(struct rte_pci_device *pci_dev, struct idpf_adapter > +*adapter) { > + struct iecm_hw *hw =3D &adapter->hw; > + int ret =3D 0; > + > + hw->hw_addr =3D (void *)pci_dev->mem_resource[0].addr; > + hw->hw_addr_len =3D pci_dev->mem_resource[0].len; > + hw->back =3D adapter; > + hw->vendor_id =3D pci_dev->id.vendor_id; > + hw->device_id =3D pci_dev->id.device_id; > + hw->subsystem_vendor_id =3D pci_dev->id.subsystem_vendor_id; > + > + strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE); > + > + idpf_reset_pf(hw); > + ret =3D idpf_check_pf_reset_done(hw); > + if (ret) { > + PMD_INIT_LOG(ERR, "IDPF is still resetting"); > + goto err; > + } > + > + ret =3D idpf_init_mbx(hw); > + if (ret) { > + PMD_INIT_LOG(ERR, "Failed to init mailbox"); > + goto err; > + } > + > + adapter->mbx_resp =3D rte_zmalloc("idpf_adapter_mbx_resp", > IDPF_DFLT_MBX_BUF_SIZE, 0); > + if (!adapter->mbx_resp) { > + PMD_INIT_LOG(ERR, "Failed to allocate > idpf_adapter_mbx_resp memory"); > + goto err_mbx; > + } > + > + if (idpf_check_api_version(adapter)) { > + PMD_INIT_LOG(ERR, "Failed to check api version"); > + goto err_api; > + } > + > + adapter->caps =3D rte_zmalloc("idpf_caps", > + sizeof(struct virtchnl2_get_capabilities), 0); > + if 
(!adapter->caps) { > + PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory"); > + goto err_api; > + } > + > + if (idpf_get_caps(adapter)) { > + PMD_INIT_LOG(ERR, "Failed to get capabilities"); > + goto err_caps; > + } > + > + adapter->max_vport_nb =3D adapter->caps->max_vports; > + > + adapter->vport_req_info =3D rte_zmalloc("vport_req_info", > + adapter->max_vport_nb * > + sizeof(*adapter->vport_req_info), > + 0); > + if (!adapter->vport_req_info) { > + PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info > memory"); > + goto err_caps; > + } > + > + adapter->vport_recv_info =3D rte_zmalloc("vport_recv_info", > + adapter->max_vport_nb * > + sizeof(*adapter- > >vport_recv_info), > + 0); > + if (!adapter->vport_recv_info) { > + PMD_INIT_LOG(ERR, "Failed to allocate vport_recv_info > memory"); > + goto err_vport_recv_info; > + } > + > + adapter->vports =3D rte_zmalloc("vports", > + adapter->max_vport_nb * > + sizeof(*adapter->vports), > + 0); > + if (!adapter->vports) { > + PMD_INIT_LOG(ERR, "Failed to allocate vports memory"); > + goto err_vports; > + } > + > + adapter->max_rxq_per_msg =3D (IDPF_DFLT_MBX_BUF_SIZE - > + sizeof(struct virtchnl2_config_rx_queues)) / > + sizeof(struct virtchnl2_rxq_info); > + adapter->max_txq_per_msg =3D (IDPF_DFLT_MBX_BUF_SIZE - > + sizeof(struct virtchnl2_config_tx_queues)) / > + sizeof(struct virtchnl2_txq_info); > + > + adapter->cur_vports =3D 0; > + adapter->cur_vport_nb =3D 0; > + adapter->next_vport_idx =3D 0; > + > + return ret; > + > +err_vports: > + rte_free(adapter->vports); > + adapter->vports =3D NULL; > +err_vport_recv_info: > + rte_free(adapter->vport_req_info); > + adapter->vport_req_info =3D NULL; > +err_caps: > + rte_free(adapter->caps); > + adapter->caps =3D NULL; > +err_api: > + rte_free(adapter->mbx_resp); > + adapter->mbx_resp =3D NULL; > +err_mbx: > + iecm_ctlq_deinit(hw); > +err: > + return -1; > +} > + > + > +static int > +idpf_dev_init(struct rte_eth_dev *dev, void *init_params) { > + struct idpf_vport *vport 
=3D dev->data->dev_private; > + struct idpf_adapter *adapter =3D init_params; > + int ret =3D 0; > + > + PMD_INIT_FUNC_TRACE(); > + > + dev->dev_ops =3D &idpf_eth_dev_ops; > + vport->adapter =3D adapter; > + > + /* for secondary processes, we don't initialise any further as primary > + * has already done this work. > + */ > + if (rte_eal_process_type() !=3D RTE_PROC_PRIMARY) > + return ret; > + > + dev->data->dev_flags |=3D RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; > + > + ret =3D idpf_init_vport_req_info(dev); > + if (ret) { > + PMD_INIT_LOG(ERR, "Failed to init vport req_info."); > + goto err; > + } > + > + ret =3D idpf_create_vport(dev); > + if (ret) { > + PMD_INIT_LOG(ERR, "Failed to create vport."); > + goto err_create_vport; > + } > + > + ret =3D idpf_init_vport(dev); > + if (ret) { > + PMD_INIT_LOG(ERR, "Failed to init vports."); > + goto err_init_vport; > + } > + > + dev->data->mac_addrs =3D rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, > 0); > + if (dev->data->mac_addrs =3D=3D NULL) { > + PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory."); > + ret =3D -ENOMEM; > + goto err_init_vport; > + } > + > + rte_ether_addr_copy((struct rte_ether_addr *)vport- > >default_mac_addr, > + &dev->data->mac_addrs[0]); > + > + return 0; > + > +err_init_vport: > + idpf_destroy_vport(vport); > +err_create_vport: > + > +rte_free(vport->adapter->vport_req_info[vport->adapter->next_vport_idx] > +); > +err: > + return ret; > +} > + > +static int > +idpf_dev_uninit(struct rte_eth_dev *dev) { > + if (rte_eal_process_type() !=3D RTE_PROC_PRIMARY) > + return -EPERM; > + > + idpf_dev_close(dev); > + > + return 0; > +} > + > +static const struct rte_pci_id pci_id_idpf_map[] =3D { > + { RTE_PCI_DEVICE(IECM_INTEL_VENDOR_ID, IECM_DEV_ID_PF) }, > + { .vendor_id =3D 0, /* sentinel */ }, > +}; > + > +struct idpf_adapter * > +idpf_find_adapter(struct rte_pci_device *pci_dev) { > + struct idpf_adapter *adapter; > + > + TAILQ_FOREACH(adapter, &adapter_list, next) { > + if (!strncmp(adapter->name, 
pci_dev->device.name, > PCI_PRI_STR_SIZE)) > + return adapter; > + } > + > + return NULL; > +} > + > +static int > +idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, > + struct rte_pci_device *pci_dev) { > + struct idpf_adapter *adapter; > + char name[RTE_ETH_NAME_MAX_LEN]; > + int i, retval; > + > + if (!adapter_list_init) { > + TAILQ_INIT(&adapter_list); > + adapter_list_init =3D true; > + } > + > + adapter =3D idpf_find_adapter(pci_dev); > + if (!adapter) { > + adapter =3D (struct idpf_adapter *)rte_zmalloc("idpf_adapter", > + sizeof(struct idpf_adapter), 0); > + if (!adapter) { > + PMD_INIT_LOG(ERR, "Failed to allocate adapter."); > + return -1; > + } > + > + retval =3D idpf_adapter_init(pci_dev, adapter); > + if (retval) { > + PMD_INIT_LOG(ERR, "Failed to init adapter."); > + return retval; > + } > + > + TAILQ_INSERT_TAIL(&adapter_list, adapter, next); > + } > + > + retval =3D idpf_parse_devargs(pci_dev, adapter); > + if (retval) { > + PMD_INIT_LOG(ERR, "Failed to parse devargs"); > + return retval; > + } > + > + for (i =3D 0; i < adapter->req_vport_nb; i++) { > + snprintf(name, sizeof(name), "idpf_%s_vport_%d", > + pci_dev->device.name, > + adapter->req_vports[i]); > + retval =3D rte_eth_dev_create(&pci_dev->device, name, > + sizeof(struct idpf_vport), > + NULL, NULL, idpf_dev_init, > + adapter); > + if (retval) > + PMD_DRV_LOG(ERR, "failed to creat vport %d", > + adapter->req_vports[i]); > + } > + > + return 0; > +} > + > +static void > +idpf_adapter_rel(struct idpf_adapter *adapter) { > + struct iecm_hw *hw =3D &adapter->hw; > + int i; > + > + iecm_ctlq_deinit(hw); > + > + rte_free(adapter->caps); > + adapter->caps =3D NULL; > + > + rte_free(adapter->mbx_resp); > + adapter->mbx_resp =3D NULL; > + > + if (adapter->vport_req_info) { > + for (i =3D 0; i < adapter->max_vport_nb; i++) { > + rte_free(adapter->vport_req_info[i]); > + adapter->vport_req_info[i] =3D NULL; > + } > + rte_free(adapter->vport_req_info); > + adapter->vport_req_info =3D NULL; > 
+ } > + > + if (adapter->vport_recv_info) { > + for (i =3D 0; i < adapter->max_vport_nb; i++) { > + rte_free(adapter->vport_recv_info[i]); > + adapter->vport_recv_info[i] =3D NULL; > + } > + } > + > + rte_free(adapter->vports); > + adapter->vports =3D NULL; > +} > + > +static int > +idpf_pci_remove(struct rte_pci_device *pci_dev) { > + struct idpf_adapter *adapter =3D idpf_find_adapter(pci_dev); > + char name[RTE_ETH_NAME_MAX_LEN]; > + struct rte_eth_dev *eth_dev; > + int i; > + > + for (i =3D 0; i < IDPF_MAX_VPORT_NUM; i++) { > + if (adapter->cur_vports & BIT(i)) { > + snprintf(name, sizeof(name), "idpf_%s_vport_%d", > + pci_dev->device.name, i); > + eth_dev =3D rte_eth_dev_allocated(name); > + if (eth_dev) > + rte_eth_dev_destroy(eth_dev, > idpf_dev_uninit); > + } > + } > + > + TAILQ_REMOVE(&adapter_list, adapter, next); > + idpf_adapter_rel(adapter); > + rte_free(adapter); > + > + return 0; > +} > + > +static struct rte_pci_driver rte_idpf_pmd =3D { > + .id_table =3D pci_id_idpf_map, > + .drv_flags =3D RTE_PCI_DRV_NEED_MAPPING | > RTE_PCI_DRV_PROBE_AGAIN, > + .probe =3D idpf_pci_probe, > + .remove =3D idpf_pci_remove, > +}; > + > +/** > + * Driver initialization routine. > + * Invoked once at EAL init time. > + * Register itself as the [Poll Mode] Driver of PCI devices. 
> + */
> +RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | uio_pci_generic | vfio-pci");
> +
> +RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
> +RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
> diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
> new file mode 100644 index 0000000000..e60ba73ec1
> --- /dev/null
> +++ b/drivers/net/idpf/idpf_ethdev.h
> @@ -0,0 +1,229 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _IDPF_ETHDEV_H_
> +#define _IDPF_ETHDEV_H_
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include "base/iecm_osdep.h"
> +#include "base/iecm_type.h"
> +#include "base/iecm_devids.h"
> +#include "base/iecm_lan_txrx.h"
> +#include "base/iecm_lan_pf_regs.h"
> +#include "base/virtchnl.h"
> +#include "base/virtchnl2.h"
> +
> +#define IDPF_MAX_VPORT_NUM 8
> +
> +#define IDPF_DEFAULT_RXQ_NUM 16
> +#define IDPF_DEFAULT_TXQ_NUM 16
> +
> +#define IDPF_INVALID_VPORT_IDX 0xffff
> +#define IDPF_TXQ_PER_GRP 1
> +#define IDPF_TX_COMPLQ_PER_GRP 1
> +#define IDPF_RXQ_PER_GRP 1
> +#define IDPF_RX_BUFQ_PER_GRP 2
> +
> +#define IDPF_CTLQ_ID -1
> +#define IDPF_CTLQ_LEN 64
> +#define IDPF_DFLT_MBX_BUF_SIZE 4096
> +
> +#define IDPF_DFLT_Q_VEC_NUM 1
> +#define IDPF_DFLT_INTERVAL 16
> +
> +#define IDPF_MAX_NUM_QUEUES 256
> +#define IDPF_MIN_BUF_SIZE 1024
> +#define IDPF_MAX_FRAME_SIZE 9728
> +
> +#define IDPF_NUM_MACADDR_MAX 64
> +
> +#define IDPF_MAX_PKT_TYPE 1024
> +
> +#define IDPF_VLAN_TAG_SIZE 4
> +#define IDPF_ETH_OVERHEAD \
> + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
> +
> +#ifndef ETH_ADDR_LEN
> +#define ETH_ADDR_LEN 6
> +#endif
> +
> +#define IDPF_ADAPTER_NAME_LEN (PCI_PRI_STR_SIZE + 1)
> +
> +/* Message type read in virtual channel from PF */
> +enum
idpf_vc_result
> +{
> + IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
> + IDPF_MSG_NON, /* Read nothing from admin queue */
> + IDPF_MSG_SYS, /* Read system msg from admin queue */
> + IDPF_MSG_CMD, /* Read async command result */
> +};
> +
> +struct idpf_chunks_info {
> + uint32_t tx_start_qid;
> + uint32_t rx_start_qid;
> + /* Valid only if split queue model */
> + uint32_t tx_compl_start_qid;
> + uint32_t rx_buf_start_qid;
> +
> + uint64_t tx_qtail_start;
> + uint32_t tx_qtail_spacing;
> + uint64_t rx_qtail_start;
> + uint32_t rx_qtail_spacing;
> + uint64_t tx_compl_qtail_start;
> + uint32_t tx_compl_qtail_spacing;
> + uint64_t rx_buf_qtail_start;
> + uint32_t rx_buf_qtail_spacing;
> +};
> +
> +struct idpf_vport {
> + struct idpf_adapter *adapter; /* Backreference to associated adapter */
> + uint16_t vport_id;
> + uint32_t txq_model;
> + uint32_t rxq_model;
> + uint16_t num_tx_q;
> + /* valid only if txq_model is split Q */
> + uint16_t num_tx_complq;
> + uint16_t num_rx_q;
> + /* valid only if rxq_model is split Q */
> + uint16_t num_rx_bufq;
> +
> + uint16_t max_mtu;
> + uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
> +
> + enum virtchnl_rss_algorithm rss_algorithm;
> + uint16_t rss_key_size;
> + uint16_t rss_lut_size;
> +
> + uint16_t sw_idx; /* SW idx */
> +
> + struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
> + uint16_t max_pkt_len; /* Maximum packet length */
> +
> + /* RSS info */
> + uint32_t *rss_lut;
> + uint8_t *rss_key;
> + uint64_t rss_hf;
> +
> + /* MSIX info */
> + struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
> + uint16_t max_vectors;
> + struct virtchnl2_alloc_vectors *recv_vectors;
> +
> + /* Chunk info */
> + struct idpf_chunks_info chunks_info;
> +
> + /* Event from ipf */
> + bool link_up;
> + uint32_t link_speed;
> +
> + uint16_t devarg_id;
> + bool stopped;
> + struct virtchnl2_vport_stats eth_stats_offset;
> +};
> +
> +struct idpf_adapter {
> + TAILQ_ENTRY(idpf_adapter) next;
> + struct iecm_hw hw;
> + char name[IDPF_ADAPTER_NAME_LEN];
> +
> + struct virtchnl_version_info virtchnl_version;
> + struct virtchnl2_get_capabilities *caps;
> +
> + volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
> + uint32_t cmd_retval; /* return value of the cmd response from ipf */
> + uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
> +
> + uint32_t txq_model;
> + uint32_t rxq_model;
> +
> + /* Vport info */
> + uint8_t **vport_req_info;
> + uint8_t **vport_recv_info;
> + struct idpf_vport **vports;
> + uint16_t max_vport_nb;
> + uint16_t req_vports[IDPF_MAX_VPORT_NUM];
> + uint16_t req_vport_nb;
> + uint16_t cur_vports;
> + uint16_t cur_vport_nb;
> + uint16_t next_vport_idx;
> +
> + uint16_t used_vecs_num;
> +
> + /* Max config queue number per VC message */
> + uint32_t max_rxq_per_msg;
> + uint32_t max_txq_per_msg;
> +
> + uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
> +
> + bool stopped;
> +};
> +
> +TAILQ_HEAD(idpf_adapter_list, idpf_adapter);
> +extern struct idpf_adapter_list adapter_list;
> +
> +#define IDPF_DEV_TO_PCI(eth_dev) \
> + RTE_DEV_TO_PCI((eth_dev)->device)
> +
> +/* structure used for sending and checking response of virtchnl ops */
> +struct idpf_cmd_info {
> + uint32_t ops;
> + uint8_t *in_args; /* buffer for sending */
> + uint32_t in_args_size; /* buffer size for sending */
> + uint8_t *out_buffer; /* buffer for response */
> + uint32_t out_size; /* buffer size for response */
> +};
> +
> +/* notify current command done. Only call in case execute
> + * _atomic_set_cmd successfully.
> + */
> +static inline void
> +_notify_cmd(struct idpf_adapter *adapter, int msg_ret)
> +{
> + adapter->cmd_retval = msg_ret;
> + rte_wmb();
> + adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
> +}
> +
> +/* clear current command. Only call in case execute
> + * _atomic_set_cmd successfully.
> + */
> +static inline void
> +_clear_cmd(struct idpf_adapter *adapter)
> +{
> + rte_wmb();
> + adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
> + adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
> +}
> +
> +/* Check if there is a pending cmd in execution. If none, set the new command. */
> +static inline int
> +_atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
> +{
> + int ret = rte_atomic32_cmpset(&adapter->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
> +
> + if (!ret)
> + PMD_DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
> +
> + return !ret;
> +}
> +
> +struct idpf_adapter *idpf_find_adapter(struct rte_pci_device *pci_dev);
> +void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
> +int idpf_check_api_version(struct idpf_adapter *adapter);
> +int idpf_get_caps(struct idpf_adapter *adapter);
> +int idpf_create_vport(struct rte_eth_dev *dev);
> +int idpf_destroy_vport(struct idpf_vport *vport);
> +int idpf_ena_dis_vport(struct idpf_vport *vport, bool enable);
> +int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
> + uint16_t buf_len, uint8_t *buf);
> +
> +#endif /* _IDPF_ETHDEV_H_ */
> diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
> new file mode 100644 index 0000000000..4708fb0666
> --- /dev/null
> +++ b/drivers/net/idpf/idpf_vchnl.c
> @@ -0,0 +1,495 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include "idpf_ethdev.h"
> +
> +#include "base/iecm_prototype.h"
> +
> +#define IDPF_CTLQ_LEN 64
> +
> +static int
> +idpf_vc_clean(struct idpf_adapter *adapter)
> +{
> + struct iecm_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
> + uint16_t num_q_msg = IDPF_CTLQ_LEN;
> + struct iecm_dma_mem *dma_mem;
> + int err = 0;
> + uint32_t i;
> +
> + for (i = 0; i < 10; i++)
{
> + err = iecm_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
> + msleep(20);
> + if (num_q_msg)
> + break;
> + }
> + if (err)
> + goto error;
> +
> + /* Empty queue is not an error */
> + for (i = 0; i < num_q_msg; i++) {
> + dma_mem = q_msg[i]->ctx.indirect.payload;
> + if (dma_mem) {
> + iecm_free_dma_mem(&adapter->hw, dma_mem);
> + rte_free(dma_mem);
> + }
> + rte_free(q_msg[i]);
> + }
> +
> +error:
> + return err;
> +}
> +
> +static int
> +idpf_send_vc_msg(struct idpf_adapter *adapter, enum virtchnl_ops op,
> + uint16_t msg_size, uint8_t *msg)
> +{
> + struct iecm_ctlq_msg *ctlq_msg;
> + struct iecm_dma_mem *dma_mem;
> + int err = 0;
> +
> + err = idpf_vc_clean(adapter);
> + if (err)
> + goto err;
> +
> + ctlq_msg = (struct iecm_ctlq_msg *)rte_zmalloc(NULL,
> + sizeof(struct iecm_ctlq_msg), 0);
> + if (!ctlq_msg) {
> + err = -ENOMEM;
> + goto err;
> + }
> +
> + dma_mem = (struct iecm_dma_mem *)rte_zmalloc(NULL,
> + sizeof(struct iecm_dma_mem), 0);
> + if (!dma_mem) {
> + err = -ENOMEM;
> + goto dma_mem_error;
> + }
> +
> + dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
> + iecm_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
> + if (!dma_mem->va) {
> + err = -ENOMEM;
> + goto dma_alloc_error;
> + }
> +
> + memcpy(dma_mem->va, msg, msg_size);
> +
> + ctlq_msg->opcode = iecm_mbq_opc_send_msg_to_pf;
> + ctlq_msg->func_id = 0;
> + ctlq_msg->data_len = msg_size;
> + ctlq_msg->cookie.mbx.chnl_opcode = op;
> + ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
> + ctlq_msg->ctx.indirect.payload = dma_mem;
> +
> + err = iecm_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
> + if (err)
> + goto send_error;
> +
> + return err;
> +
> +send_error:
> + iecm_free_dma_mem(&adapter->hw, dma_mem);
> +dma_alloc_error:
> + rte_free(dma_mem);
> +dma_mem_error:
> + rte_free(ctlq_msg);
> +err:
> + return err;
> +}
> +
> +static enum idpf_vc_result
> +idpf_read_msg_from_ipf(struct idpf_adapter *adapter, uint16_t buf_len,
> + uint8_t *buf)
> +{
> + struct iecm_hw *hw = &adapter->hw;
> + struct iecm_ctlq_msg ctlq_msg;
> + struct iecm_dma_mem *dma_mem = NULL;
> + enum idpf_vc_result result = IDPF_MSG_NON;
> + enum virtchnl_ops opcode;
> + uint16_t pending = 1;
> + int ret;
> +
> + ret = iecm_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> + if (ret) {
> + PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
> + if (ret != IECM_ERR_CTLQ_NO_WORK)
> + result = IDPF_MSG_ERR;
> + return result;
> + }
> +
> + rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
> +
> + opcode = (enum virtchnl_ops)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> + adapter->cmd_retval =
> + (enum virtchnl_status_code)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> +
> + PMD_DRV_LOG(DEBUG, "CQ from ipf carries opcode %u, retval %d",
> + opcode, adapter->cmd_retval);
> +
> + if (opcode == VIRTCHNL2_OP_EVENT) {
> + struct virtchnl2_event *ve =
> + (struct virtchnl2_event *)ctlq_msg.ctx.indirect.payload->va;
> +
> + result = IDPF_MSG_SYS;
> + switch (ve->event) {
> + case VIRTCHNL2_EVENT_LINK_CHANGE:
> + /* TBD */
> + break;
> + default:
> + PMD_DRV_LOG(ERR, "%s: Unknown event %d from ipf",
> + __func__, ve->event);
> + break;
> + }
> + } else {
> + /* async reply msg on command issued by pf previously */
> + result = IDPF_MSG_CMD;
> + if (opcode != adapter->pend_cmd) {
> + PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
> + adapter->pend_cmd, opcode);
> + result = IDPF_MSG_ERR;
> + }
> + }
> +
> + if (ctlq_msg.data_len)
> + dma_mem = ctlq_msg.ctx.indirect.payload;
> + else
> + pending = 0;
> +
> + ret = iecm_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
> + if (ret && dma_mem)
> + iecm_free_dma_mem(hw, dma_mem);
> +
> + return result;
> +}
> +
> +#define MAX_TRY_TIMES 200
> +#define ASQ_DELAY_MS 10
> +
> +int
> +idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
> + uint8_t *buf)
> +{
> + int ret, err = 0, i =
0;
> +
> + do {
> + ret = idpf_read_msg_from_ipf(adapter, buf_len, buf);
> + if (ret == IDPF_MSG_CMD)
> + break;
> + rte_delay_ms(ASQ_DELAY_MS);
> + } while (i++ < MAX_TRY_TIMES);
> + if (i >= MAX_TRY_TIMES ||
> + adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> + err = -1;
> + PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
> + adapter->cmd_retval, ops);
> + }
> +
> + return err;
> +}
> +
> +static int
> +idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
> +{
> + int err = 0;
> + int i = 0;
> + int ret;
> +
> + if (_atomic_set_cmd(adapter, args->ops))
> + return -1;
> +
> + ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
> + if (ret) {
> + PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
> + _clear_cmd(adapter);
> + return ret;
> + }
> +
> + switch (args->ops) {
> + case VIRTCHNL_OP_VERSION:
> + case VIRTCHNL2_OP_GET_CAPS:
> + case VIRTCHNL2_OP_CREATE_VPORT:
> + case VIRTCHNL2_OP_DESTROY_VPORT:
> + case VIRTCHNL2_OP_SET_RSS_KEY:
> + case VIRTCHNL2_OP_SET_RSS_LUT:
> + case VIRTCHNL2_OP_SET_RSS_HASH:
> + case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
> + case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
> + case VIRTCHNL2_OP_ENABLE_QUEUES:
> + case VIRTCHNL2_OP_DISABLE_QUEUES:
> + case VIRTCHNL2_OP_ENABLE_VPORT:
> + case VIRTCHNL2_OP_DISABLE_VPORT:
> + case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
> + case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
> + case VIRTCHNL2_OP_ALLOC_VECTORS:
> + case VIRTCHNL2_OP_DEALLOC_VECTORS:
> + case VIRTCHNL2_OP_GET_STATS:
> + /* for init virtchnl ops, need to poll the response */
> + err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
> + _clear_cmd(adapter);
> + break;
> + case VIRTCHNL2_OP_GET_PTYPE_INFO:
> + /* for multiple response messages,
> + * do not handle the response here.
> + */
> + break;
> + default:
> + /* For other virtchnl ops in running time,
> + * wait for the cmd done flag.
> + */
> + do {
> + if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
> + break;
> + rte_delay_ms(ASQ_DELAY_MS);
> + /* If don't read msg or read sys event, continue */
> + } while (i++ < MAX_TRY_TIMES);
> + /* If no response is received, clear command */
> + if (i >= MAX_TRY_TIMES ||
> + adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> + err = -1;
> + PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
> + adapter->cmd_retval, args->ops);
> + _clear_cmd(adapter);
> + }
> + break;
> + }
> +
> + return err;
> +}
> +
> +int
> +idpf_check_api_version(struct idpf_adapter *adapter)
> +{
> + struct virtchnl_version_info version;
> + struct idpf_cmd_info args;
> + int err;
> +
> + memset(&version, 0, sizeof(struct virtchnl_version_info));
> + version.major = VIRTCHNL_VERSION_MAJOR_2;
> + version.minor = VIRTCHNL_VERSION_MINOR_0;
> +
> + args.ops = VIRTCHNL_OP_VERSION;
> + args.in_args = (uint8_t *)&version;
> + args.in_args_size = sizeof(version);
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err) {
> + PMD_DRV_LOG(ERR,
> + "Failed to execute command of VIRTCHNL_OP_VERSION");
> + return err;
> + }
> +
> + return err;
> +}
> +
> +int
> +idpf_get_caps(struct idpf_adapter *adapter)
> +{
> + struct virtchnl2_get_capabilities caps_msg;
> + struct idpf_cmd_info args;
> + int err;
> +
> + memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
> + caps_msg.csum_caps =
> + VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |
> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP |
> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |
> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |
> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP |
> + VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |
> + VIRTCHNL2_CAP_TX_CSUM_GENERIC |
> + VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |
> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |
> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |
> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP
|
> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |
> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP |
> + VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP |
> + VIRTCHNL2_CAP_RX_CSUM_GENERIC;
> +
> + caps_msg.seg_caps =
> + VIRTCHNL2_CAP_SEG_IPV4_TCP |
> + VIRTCHNL2_CAP_SEG_IPV4_UDP |
> + VIRTCHNL2_CAP_SEG_IPV4_SCTP |
> + VIRTCHNL2_CAP_SEG_IPV6_TCP |
> + VIRTCHNL2_CAP_SEG_IPV6_UDP |
> + VIRTCHNL2_CAP_SEG_IPV6_SCTP |
> + VIRTCHNL2_CAP_SEG_GENERIC;
> +
> + caps_msg.rss_caps =
> + VIRTCHNL2_CAP_RSS_IPV4_TCP |
> + VIRTCHNL2_CAP_RSS_IPV4_UDP |
> + VIRTCHNL2_CAP_RSS_IPV4_SCTP |
> + VIRTCHNL2_CAP_RSS_IPV4_OTHER |
> + VIRTCHNL2_CAP_RSS_IPV6_TCP |
> + VIRTCHNL2_CAP_RSS_IPV6_UDP |
> + VIRTCHNL2_CAP_RSS_IPV6_SCTP |
> + VIRTCHNL2_CAP_RSS_IPV6_OTHER |
> + VIRTCHNL2_CAP_RSS_IPV4_AH |
> + VIRTCHNL2_CAP_RSS_IPV4_ESP |
> + VIRTCHNL2_CAP_RSS_IPV4_AH_ESP |
> + VIRTCHNL2_CAP_RSS_IPV6_AH |
> + VIRTCHNL2_CAP_RSS_IPV6_ESP |
> + VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
> +
> + caps_msg.hsplit_caps =
> + VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 |
> + VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 |
> + VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |
> + VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6;
> +
> + caps_msg.rsc_caps =
> + VIRTCHNL2_CAP_RSC_IPV4_TCP |
> + VIRTCHNL2_CAP_RSC_IPV4_SCTP |
> + VIRTCHNL2_CAP_RSC_IPV6_TCP |
> + VIRTCHNL2_CAP_RSC_IPV6_SCTP;
> +
> + caps_msg.other_caps =
> + VIRTCHNL2_CAP_RDMA |
> + VIRTCHNL2_CAP_SRIOV |
> + VIRTCHNL2_CAP_MACFILTER |
> + VIRTCHNL2_CAP_FLOW_DIRECTOR |
> + VIRTCHNL2_CAP_SPLITQ_QSCHED |
> + VIRTCHNL2_CAP_CRC |
> + VIRTCHNL2_CAP_WB_ON_ITR |
> + VIRTCHNL2_CAP_PROMISC |
> + VIRTCHNL2_CAP_LINK_SPEED |
> + VIRTCHNL2_CAP_VLAN;
> +
> + args.ops = VIRTCHNL2_OP_GET_CAPS;
> + args.in_args = (uint8_t *)&caps_msg;
> + args.in_args_size = sizeof(caps_msg);
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err) {
> + PMD_DRV_LOG(ERR,
> + "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
> + return err;
> + }
> +
> +
rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
> +
> + return err;
> +}
> +
> +int
> +idpf_create_vport(struct rte_eth_dev *dev)
> +{
> + struct rte_pci_device *pci_dev = IDPF_DEV_TO_PCI(dev);
> + struct idpf_adapter *adapter = idpf_find_adapter(pci_dev);
> + uint16_t idx = adapter->next_vport_idx;
> + struct virtchnl2_create_vport *vport_req_info =
> + (struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
> + struct virtchnl2_create_vport vport_msg;
> + struct idpf_cmd_info args;
> + int err = -1;
> +
> + memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
> + vport_msg.vport_type = vport_req_info->vport_type;
> + vport_msg.txq_model = vport_req_info->txq_model;
> + vport_msg.rxq_model = vport_req_info->rxq_model;
> + vport_msg.num_tx_q = vport_req_info->num_tx_q;
> + vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
> + vport_msg.num_rx_q = vport_req_info->num_rx_q;
> + vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
> +
> + memset(&args, 0, sizeof(args));
> + args.ops = VIRTCHNL2_OP_CREATE_VPORT;
> + args.in_args = (uint8_t *)&vport_msg;
> + args.in_args_size = sizeof(vport_msg);
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err) {
> + PMD_DRV_LOG(ERR,
> + "Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
> + return err;
> + }
> +
> + if (!adapter->vport_recv_info[idx]) {
> + adapter->vport_recv_info[idx] = rte_zmalloc(NULL,
> + IDPF_DFLT_MBX_BUF_SIZE, 0);
> + if (!adapter->vport_recv_info[idx]) {
> + PMD_INIT_LOG(ERR, "Failed to alloc vport_recv_info.");
> + return err;
> + }
> + }
> + rte_memcpy(adapter->vport_recv_info[idx], args.out_buffer,
> + IDPF_DFLT_MBX_BUF_SIZE);
> + return err;
> +}
> +
> +int
> +idpf_destroy_vport(struct idpf_vport *vport)
> +{
> + struct idpf_adapter *adapter = vport->adapter;
> + struct virtchnl2_vport vc_vport;
> + struct idpf_cmd_info
args;
> + int err;
> +
> + vc_vport.vport_id = vport->vport_id;
> +
> + memset(&args, 0, sizeof(args));
> + args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
> + args.in_args = (uint8_t *)&vc_vport;
> + args.in_args_size = sizeof(vc_vport);
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err) {
> + PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
> + return err;
> + }
> +
> + return err;
> +}
> +
> +int
> +idpf_ena_dis_vport(struct idpf_vport *vport, bool enable)
> +{
> + struct idpf_adapter *adapter = vport->adapter;
> + struct virtchnl2_vport vc_vport;
> + struct idpf_cmd_info args;
> + int err;
> +
> + vc_vport.vport_id = vport->vport_id;
> + args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
> + VIRTCHNL2_OP_DISABLE_VPORT;
> + args.in_args = (u8 *)&vc_vport;
> + args.in_args_size = sizeof(vc_vport);
> + args.out_buffer = adapter->mbx_resp;
> + args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> + err = idpf_execute_vc_cmd(adapter, &args);
> + if (err) {
> + PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
> + enable ? "ENABLE" : "DISABLE");
> + }
> +
> + return err;
> +}
> diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
> new file mode 100644 index 0000000000..7d776d3a15
> --- /dev/null
> +++ b/drivers/net/idpf/meson.build
> @@ -0,0 +1,18 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 Intel Corporation
> +
> +if is_windows
> + build = false
> + reason = 'not supported on Windows'
> + subdir_done()

We should use 4 spaces instead of a tab in meson files.

> +endif
> +
> +subdir('base')
> +objs = [base_objs]
> +
> +sources = files(
> + 'idpf_ethdev.c',
> + 'idpf_vchnl.c',

Ditto.
> +) > + > +includes +=3D include_directories('base') > \ No newline at end of file > diff --git a/drivers/net/idpf/version.map b/drivers/net/idpf/version.map > new file mode 100644 index 0000000000..b7da224860 > --- /dev/null > +++ b/drivers/net/idpf/version.map > @@ -0,0 +1,3 @@ > +DPDK_22 { > + local: *; > +}; > \ No newline at end of file > diff --git a/drivers/net/meson.build b/drivers/net/meson.build index > e35652fe63..8faf8120c2 100644 > --- a/drivers/net/meson.build > +++ b/drivers/net/meson.build > @@ -28,6 +28,7 @@ drivers =3D [ > 'i40e', > 'iavf', > 'ice', > + 'idpf', Ditto. > 'igc', > 'ionic', > 'ipn3ke', > -- > 2.25.1