From: "Zhang, Qi Z" <qi.z.zhang@intel.com>
To: "Xing, Beilei" <beilei.xing@intel.com>, "Wu, Jingjing"
 <jingjing.wu@intel.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "Wu, Wenjun1" <wenjun1.wu@intel.com>
Subject: RE: [PATCH v4 03/15] common/idpf: add virtual channel functions
Date: Wed, 18 Jan 2023 04:10:20 +0000
Message-ID: <DM4PR11MB5994DB0D1A14506388DE12EED7C79@DM4PR11MB5994.namprd11.prod.outlook.com>
References: <https://patches.dpdk.org/project/dpdk/cover/20230117072626.93796-1-beilei.xing@intel.com/>
 <20230117080622.105657-1-beilei.xing@intel.com>
 <20230117080622.105657-4-beilei.xing@intel.com>
 <DM4PR11MB59947C7C9B63D921E1334366D7C79@DM4PR11MB5994.namprd11.prod.outlook.com>
In-Reply-To: <DM4PR11MB59947C7C9B63D921E1334366D7C79@DM4PR11MB5994.namprd11.prod.outlook.com>



> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Wednesday, January 18, 2023 12:00 PM
> To: Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>
> Cc: dev@dpdk.org; Wu, Wenjun1 <Wenjun1.Wu@intel.com>
> Subject: RE: [PATCH v4 03/15] common/idpf: add virtual channel functions
>
>
>
> > -----Original Message-----
> > From: Xing, Beilei <beilei.xing@intel.com>
> > Sent: Tuesday, January 17, 2023 4:06 PM
> > To: Wu, Jingjing <jingjing.wu@intel.com>
> > Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> > <beilei.xing@intel.com>; Wu, Wenjun1 <wenjun1.wu@intel.com>
> > Subject: [PATCH v4 03/15] common/idpf: add virtual channel functions
> >
> > From: Beilei Xing <beilei.xing@intel.com>
> >
> > Move most of the virtual channel functions to idpf common module.
> >
> > Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
> > Signed-off-by: Beilei Xing <beilei.xing@intel.com>
> > ---
> >  drivers/common/idpf/base/meson.build       |   2 +-
> >  drivers/common/idpf/idpf_common_device.c   |   8 +
> >  drivers/common/idpf/idpf_common_device.h   |  61 ++
> >  drivers/common/idpf/idpf_common_logs.h     |  23 +
> >  drivers/common/idpf/idpf_common_virtchnl.c | 815
> > +++++++++++++++++++++
> >  drivers/common/idpf/idpf_common_virtchnl.h |  48 ++
> >  drivers/common/idpf/meson.build            |   5 +
> >  drivers/common/idpf/version.map            |  20 +-
> >  drivers/net/idpf/idpf_ethdev.c             |   9 +-
> >  drivers/net/idpf/idpf_ethdev.h             |  85 +--
> >  drivers/net/idpf/idpf_vchnl.c              | 815 +--------------------
> >  11 files changed, 983 insertions(+), 908 deletions(-)  create mode
> > 100644 drivers/common/idpf/idpf_common_device.c
> >  create mode 100644 drivers/common/idpf/idpf_common_logs.h
> >  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.c
> >  create mode 100644 drivers/common/idpf/idpf_common_virtchnl.h
> >
> > diff --git a/drivers/common/idpf/base/meson.build
> > b/drivers/common/idpf/base/meson.build
> > index 183587b51a..dc4b93c198 100644
> > --- a/drivers/common/idpf/base/meson.build
> > +++ b/drivers/common/idpf/base/meson.build
> > @@ -1,7 +1,7 @@
> >  # SPDX-License-Identifier: BSD-3-Clause
> >  # Copyright(c) 2022 Intel Corporation
> >
> > -sources = files(
> > +sources += files(
> >          'idpf_common.c',
> >          'idpf_controlq.c',
> >          'idpf_controlq_setup.c',
> > diff --git a/drivers/common/idpf/idpf_common_device.c
> > b/drivers/common/idpf/idpf_common_device.c
> > new file mode 100644
> > index 0000000000..5062780362
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_common_device.c
> > @@ -0,0 +1,8 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2022 Intel Corporation
> > + */
> > +
> > +#include <rte_log.h>
> > +#include <idpf_common_device.h>
> > +
> > +RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
> > diff --git a/drivers/common/idpf/idpf_common_device.h
> > b/drivers/common/idpf/idpf_common_device.h
> > index b7fff84b25..a7537281d1 100644
> > --- a/drivers/common/idpf/idpf_common_device.h
> > +++ b/drivers/common/idpf/idpf_common_device.h
> > @@ -7,6 +7,12 @@
> >
> >  #include <base/idpf_prototype.h>
> >  #include <base/virtchnl2.h>
> > +#include <idpf_common_logs.h>
> > +
> > +#define IDPF_CTLQ_LEN		64
> > +#define IDPF_DFLT_MBX_BUF_SIZE	4096
> > +
> > +#define IDPF_MAX_PKT_TYPE	1024
> >
> >  struct idpf_adapter {
> >  	struct idpf_hw hw;
> > @@ -76,4 +82,59 @@ struct idpf_vport {
> >  	bool stopped;
> >  };
> >
> > +/* Message type read in virtual channel from PF */
> > +enum idpf_vc_result {
> > +	IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
> > +	IDPF_MSG_NON,      /* Read nothing from admin queue */
> > +	IDPF_MSG_SYS,      /* Read system msg from admin queue */
> > +	IDPF_MSG_CMD,      /* Read async command result */
> > +};
> > +
> > +/* structure used for sending and checking response of virtchnl ops */
> > +struct idpf_cmd_info {
> > +	uint32_t ops;
> > +	uint8_t *in_args;       /* buffer for sending */
> > +	uint32_t in_args_size;  /* buffer size for sending */
> > +	uint8_t *out_buffer;    /* buffer for response */
> > +	uint32_t out_size;      /* buffer size for response */
> > +};
> > +
> > +/* notify current command done. Only call in case execute
> > + * _atomic_set_cmd successfully.
> > + */
> > +static inline void
> > +notify_cmd(struct idpf_adapter *adapter, int msg_ret)
> > +{
> > +	adapter->cmd_retval = msg_ret;
> > +	/* Return value may be checked in another thread, need to ensure the coherence. */
> > +	rte_wmb();
> > +	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
> > +}
> > +
> > +/* clear current command. Only call in case execute
> > + * _atomic_set_cmd successfully.
> > + */
> > +static inline void
> > +clear_cmd(struct idpf_adapter *adapter)
> > +{
> > +	/* Return value may be checked in another thread, need to ensure the coherence. */
> > +	rte_wmb();
> > +	adapter->pend_cmd = VIRTCHNL2_OP_UNKNOWN;
> > +	adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
> > +}
> > +
> > +/* Check there is pending cmd in execution. If none, set new command. */
> > +static inline bool
> > +atomic_set_cmd(struct idpf_adapter *adapter, uint32_t ops)
> > +{
> > +	uint32_t op_unk = VIRTCHNL2_OP_UNKNOWN;
> > +	bool ret = __atomic_compare_exchange(&adapter->pend_cmd, &op_unk, &ops,
> > +					    0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE);
> > +
> > +	if (!ret)
> > +		DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
> > +
> > +	return !ret;
> > +}
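
Side note for readers of the thread: as far as I can tell, the contract of these helpers is that atomic_set_cmd() claims the single pending-command slot, notify_cmd() publishes the result and releases the slot, and clear_cmd() releases it on error paths. A minimal sketch of a caller (hypothetical, simplified from idpf_execute_vc_cmd() further down in this patch):

	/* sketch only: simplified command round trip */
	static int vc_cmd_sketch(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
	{
		if (atomic_set_cmd(adapter, args->ops))	/* fails if a cmd is still pending */
			return -EINVAL;
		if (idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args) != 0) {
			clear_cmd(adapter);		/* release the slot on failure */
			return -EIO;
		}
		/* the mailbox reader later calls notify_cmd(adapter, retval):
		 * cmd_retval is written first, rte_wmb() orders the stores, then
		 * pend_cmd is reset so pollers observe a coherent result.
		 */
		return 0;
	}
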
> > +
> >  #endif /* _IDPF_COMMON_DEVICE_H_ */
> > diff --git a/drivers/common/idpf/idpf_common_logs.h
> > b/drivers/common/idpf/idpf_common_logs.h
> > new file mode 100644
> > index 0000000000..fe36562769
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_common_logs.h
> > @@ -0,0 +1,23 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2022 Intel Corporation
> > + */
> > +
> > +#ifndef _IDPF_COMMON_LOGS_H_
> > +#define _IDPF_COMMON_LOGS_H_
> > +
> > +#include <rte_log.h>
> > +
> > +extern int idpf_common_logtype;
> > +
> > +#define DRV_LOG_RAW(level, ...)					\
> > +	rte_log(RTE_LOG_ ## level,				\
> > +		idpf_common_logtype,				\
> > +		RTE_FMT("%s(): "				\
> > +			RTE_FMT_HEAD(__VA_ARGS__,) "\n",	\
> > +			__func__,				\
> > +			RTE_FMT_TAIL(__VA_ARGS__,)))
> > +
> > +#define DRV_LOG(level, fmt, args...)		\
> > +	DRV_LOG_RAW(level, fmt "\n", ## args)
> > +
> > +#endif /* _IDPF_COMMON_LOGS_H_ */
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> > b/drivers/common/idpf/idpf_common_virtchnl.c
> > new file mode 100644
> > index 0000000000..2e94a95876
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> > @@ -0,0 +1,815 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2022 Intel Corporation
> > + */
> > +
> > +#include <idpf_common_virtchnl.h>
> > +#include <idpf_common_logs.h>
> > +
> > +static int
> > +idpf_vc_clean(struct idpf_adapter *adapter)
> > +{
> > +	struct idpf_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
> > +	uint16_t num_q_msg = IDPF_CTLQ_LEN;
> > +	struct idpf_dma_mem *dma_mem;
> > +	int err;
> > +	uint32_t i;
> > +
> > +	for (i = 0; i < 10; i++) {
> > +		err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
> > +		msleep(20);
> > +		if (num_q_msg > 0)
> > +			break;
> > +	}
> > +	if (err != 0)
> > +		return err;
> > +
> > +	/* Empty queue is not an error */
> > +	for (i = 0; i < num_q_msg; i++) {
> > +		dma_mem = q_msg[i]->ctx.indirect.payload;
> > +		if (dma_mem != NULL) {
> > +			idpf_free_dma_mem(&adapter->hw, dma_mem);
> > +			rte_free(dma_mem);
> > +		}
> > +		rte_free(q_msg[i]);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int
> > +idpf_send_vc_msg(struct idpf_adapter *adapter, uint32_t op,
> > +		 uint16_t msg_size, uint8_t *msg)
> > +{
> > +	struct idpf_ctlq_msg *ctlq_msg;
> > +	struct idpf_dma_mem *dma_mem;
> > +	int err;
> > +
> > +	err = idpf_vc_clean(adapter);
> > +	if (err != 0)
> > +		goto err;
> > +
> > +	ctlq_msg = rte_zmalloc(NULL, sizeof(struct idpf_ctlq_msg), 0);
> > +	if (ctlq_msg == NULL) {
> > +		err = -ENOMEM;
> > +		goto err;
> > +	}
> > +
> > +	dma_mem = rte_zmalloc(NULL, sizeof(struct idpf_dma_mem), 0);
> > +	if (dma_mem == NULL) {
> > +		err = -ENOMEM;
> > +		goto dma_mem_error;
> > +	}
> > +
> > +	dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	idpf_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
> > +	if (dma_mem->va == NULL) {
> > +		err = -ENOMEM;
> > +		goto dma_alloc_error;
> > +	}
> > +
> > +	memcpy(dma_mem->va, msg, msg_size);
> > +
> > +	ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_pf;
> > +	ctlq_msg->func_id = 0;
> > +	ctlq_msg->data_len = msg_size;
> > +	ctlq_msg->cookie.mbx.chnl_opcode = op;
> > +	ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
> > +	ctlq_msg->ctx.indirect.payload = dma_mem;
> > +
> > +	err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
> > +	if (err != 0)
> > +		goto send_error;
> > +
> > +	return 0;
> > +
> > +send_error:
> > +	idpf_free_dma_mem(&adapter->hw, dma_mem);
> > +dma_alloc_error:
> > +	rte_free(dma_mem);
> > +dma_mem_error:
> > +	rte_free(ctlq_msg);
> > +err:
> > +	return err;
> > +}
> > +
> > +static enum idpf_vc_result
> > +idpf_read_msg_from_cp(struct idpf_adapter *adapter, uint16_t buf_len,
> > +		      uint8_t *buf)
> > +{
> > +	struct idpf_hw *hw = &adapter->hw;
> > +	struct idpf_ctlq_msg ctlq_msg;
> > +	struct idpf_dma_mem *dma_mem = NULL;
> > +	enum idpf_vc_result result = IDPF_MSG_NON;
> > +	uint32_t opcode;
> > +	uint16_t pending = 1;
> > +	int ret;
> > +
> > +	ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> > +	if (ret != 0) {
> > +		DRV_LOG(DEBUG, "Can't read msg from AQ");
> > +		if (ret != -ENOMSG)
> > +			result = IDPF_MSG_ERR;
> > +		return result;
> > +	}
> > +
> > +	rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
> > +
> > +	opcode = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> > +	adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> > +
> > +	DRV_LOG(DEBUG, "CQ from CP carries opcode %u, retval %d",
> > +		opcode, adapter->cmd_retval);
> > +
> > +	if (opcode == VIRTCHNL2_OP_EVENT) {
> > +		struct virtchnl2_event *ve = ctlq_msg.ctx.indirect.payload->va;
> > +
> > +		result = IDPF_MSG_SYS;
> > +		switch (ve->event) {
> > +		case VIRTCHNL2_EVENT_LINK_CHANGE:
> > +			/* TBD */
> > +			break;
> > +		default:
> > +			DRV_LOG(ERR, "%s: Unknown event %d from CP",
> > +				__func__, ve->event);
> > +			break;
> > +		}
> > +	} else {
> > +		/* async reply msg on command issued by pf previously */
> > +		result = IDPF_MSG_CMD;
> > +		if (opcode != adapter->pend_cmd) {
> > +			DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
> > +				adapter->pend_cmd, opcode);
> > +			result = IDPF_MSG_ERR;
> > +		}
> > +	}
> > +
> > +	if (ctlq_msg.data_len != 0)
> > +		dma_mem = ctlq_msg.ctx.indirect.payload;
> > +	else
> > +		pending = 0;
> > +
> > +	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
> > +	if (ret != 0 && dma_mem != NULL)
> > +		idpf_free_dma_mem(hw, dma_mem);
> > +
> > +	return result;
> > +}
> > +
> > +#define MAX_TRY_TIMES 200
> > +#define ASQ_DELAY_MS  10
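
(For reference: the retry loops below give up after MAX_TRY_TIMES tries with an ASQ_DELAY_MS wait per try, i.e. 200 * 10 ms, so each command times out after roughly two seconds.)
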
> > +
> > +int
> > +idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops, uint16_t buf_len,
> > +		  uint8_t *buf)
> > +{
> > +	int err = 0;
> > +	int i = 0;
> > +	int ret;
> > +
> > +	do {
> > +		ret = idpf_read_msg_from_cp(adapter, buf_len, buf);
> > +		if (ret == IDPF_MSG_CMD)
> > +			break;
> > +		rte_delay_ms(ASQ_DELAY_MS);
> > +	} while (i++ < MAX_TRY_TIMES);
> > +	if (i >= MAX_TRY_TIMES ||
> > +	    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> > +		err = -EBUSY;
> > +		DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
> > +			adapter->cmd_retval, ops);
> > +	}
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
> > +{
> > +	int err = 0;
> > +	int i = 0;
> > +	int ret;
> > +
> > +	if (atomic_set_cmd(adapter, args->ops))
> > +		return -EINVAL;
> > +
> > +	ret = idpf_send_vc_msg(adapter, args->ops, args->in_args_size, args->in_args);
> > +	if (ret != 0) {
> > +		DRV_LOG(ERR, "fail to send cmd %d", args->ops);
> > +		clear_cmd(adapter);
> > +		return ret;
> > +	}
> > +
> > +	switch (args->ops) {
> > +	case VIRTCHNL_OP_VERSION:
> > +	case VIRTCHNL2_OP_GET_CAPS:
> > +	case VIRTCHNL2_OP_CREATE_VPORT:
> > +	case VIRTCHNL2_OP_DESTROY_VPORT:
> > +	case VIRTCHNL2_OP_SET_RSS_KEY:
> > +	case VIRTCHNL2_OP_SET_RSS_LUT:
> > +	case VIRTCHNL2_OP_SET_RSS_HASH:
> > +	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
> > +	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
> > +	case VIRTCHNL2_OP_ENABLE_QUEUES:
> > +	case VIRTCHNL2_OP_DISABLE_QUEUES:
> > +	case VIRTCHNL2_OP_ENABLE_VPORT:
> > +	case VIRTCHNL2_OP_DISABLE_VPORT:
> > +	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
> > +	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
> > +	case VIRTCHNL2_OP_ALLOC_VECTORS:
> > +	case VIRTCHNL2_OP_DEALLOC_VECTORS:
> > +		/* for init virtchnl ops, need to poll the response */
> > +		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
> > +		clear_cmd(adapter);
> > +		break;
> > +	case VIRTCHNL2_OP_GET_PTYPE_INFO:
> > +		/* for multiple response message,
> > +		 * do not handle the response here.
> > +		 */
> > +		break;
> > +	default:
> > +		/* For other virtchnl ops in running time,
> > +		 * wait for the cmd done flag.
> > +		 */
> > +		do {
> > +			if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
> > +				break;
> > +			rte_delay_ms(ASQ_DELAY_MS);
> > +			/* If don't read msg or read sys event, continue */
> > +		} while (i++ < MAX_TRY_TIMES);
> > +		/* If no response is received, clear command */
> > +		if (i >= MAX_TRY_TIMES ||
> > +		    adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
> > +			err = -EBUSY;
> > +			DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
> > +				adapter->cmd_retval, args->ops);
> > +			clear_cmd(adapter);
> > +		}
> > +		break;
> > +	}
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_check_api_version(struct idpf_adapter *adapter)
> > +{
> > +	struct virtchnl2_version_info version, *pver;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	memset(&version, 0, sizeof(struct virtchnl2_version_info));
> > +	version.major = VIRTCHNL2_VERSION_MAJOR_2;
> > +	version.minor = VIRTCHNL2_VERSION_MINOR_0;
> > +
> > +	args.ops = VIRTCHNL_OP_VERSION;
> > +	args.in_args = (uint8_t *)&version;
> > +	args.in_args_size = sizeof(version);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0) {
> > +		DRV_LOG(ERR,
> > +			"Failed to execute command of VIRTCHNL_OP_VERSION");
> > +		return err;
> > +	}
> > +
> > +	pver = (struct virtchnl2_version_info *)args.out_buffer;
> > +	adapter->virtchnl_version = *pver;
> > +
> > +	if (adapter->virtchnl_version.major != VIRTCHNL2_VERSION_MAJOR_2 ||
> > +	    adapter->virtchnl_version.minor != VIRTCHNL2_VERSION_MINOR_0) {
> > +		DRV_LOG(ERR, "VIRTCHNL API version mismatch:(%u.%u)-(%u.%u)",
> > +			adapter->virtchnl_version.major,
> > +			adapter->virtchnl_version.minor,
> > +			VIRTCHNL2_VERSION_MAJOR_2,
> > +			VIRTCHNL2_VERSION_MINOR_0);
> > +		return -EINVAL;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +int
> > +idpf_vc_get_caps(struct idpf_adapter *adapter)
> > +{
> > +	struct virtchnl2_get_capabilities caps_msg;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
> > +
> > +	caps_msg.csum_caps =
> > +		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4          |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP      |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP      |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP     |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP      |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP      |
> > +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP     |
> > +		VIRTCHNL2_CAP_TX_CSUM_GENERIC          |
> > +		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4          |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP      |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP      |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP     |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP      |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP      |
> > +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP     |
> > +		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
> > +
> > +	caps_msg.rss_caps =
> > +		VIRTCHNL2_CAP_RSS_IPV4_TCP             |
> > +		VIRTCHNL2_CAP_RSS_IPV4_UDP             |
> > +		VIRTCHNL2_CAP_RSS_IPV4_SCTP            |
> > +		VIRTCHNL2_CAP_RSS_IPV4_OTHER           |
> > +		VIRTCHNL2_CAP_RSS_IPV6_TCP             |
> > +		VIRTCHNL2_CAP_RSS_IPV6_UDP             |
> > +		VIRTCHNL2_CAP_RSS_IPV6_SCTP            |
> > +		VIRTCHNL2_CAP_RSS_IPV6_OTHER           |
> > +		VIRTCHNL2_CAP_RSS_IPV4_AH              |
> > +		VIRTCHNL2_CAP_RSS_IPV4_ESP             |
> > +		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP          |
> > +		VIRTCHNL2_CAP_RSS_IPV6_AH              |
> > +		VIRTCHNL2_CAP_RSS_IPV6_ESP             |
> > +		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
> > +
> > +	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR;
> > +
> > +	args.ops = VIRTCHNL2_OP_GET_CAPS;
> > +	args.in_args = (uint8_t *)&caps_msg;
> > +	args.in_args_size = sizeof(caps_msg);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0) {
> > +		DRV_LOG(ERR,
> > +			"Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
> > +		return err;
> > +	}
> > +
> > +	rte_memcpy(&adapter->caps, args.out_buffer, sizeof(caps_msg));
> > +
> > +	return 0;
> > +}
> > +
> > +int
> > +idpf_vc_create_vport(struct idpf_vport *vport,
> > +		     struct virtchnl2_create_vport *vport_req_info)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_create_vport vport_msg;
> > +	struct idpf_cmd_info args;
> > +	int err = -1;
> > +
> > +	memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
> > +	vport_msg.vport_type = vport_req_info->vport_type;
> > +	vport_msg.txq_model = vport_req_info->txq_model;
> > +	vport_msg.rxq_model = vport_req_info->rxq_model;
> > +	vport_msg.num_tx_q = vport_req_info->num_tx_q;
> > +	vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
> > +	vport_msg.num_rx_q = vport_req_info->num_rx_q;
> > +	vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_CREATE_VPORT;
> > +	args.in_args = (uint8_t *)&vport_msg;
> > +	args.in_args_size = sizeof(vport_msg);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0) {
> > +		DRV_LOG(ERR,
> > +			"Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
> > +		return err;
> > +	}
> > +
> > +	rte_memcpy(vport->vport_info, args.out_buffer, IDPF_DFLT_MBX_BUF_SIZE);
> > +	return 0;
> > +}
> > +
> > +int
> > +idpf_vc_destroy_vport(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_vport vc_vport;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	vc_vport.vport_id = vport->vport_id;
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
> > +	args.in_args = (uint8_t *)&vc_vport;
> > +	args.in_args_size = sizeof(vc_vport);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_set_rss_key(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_rss_key *rss_key;
> > +	struct idpf_cmd_info args;
> > +	int len, err;
> > +
> > +	len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
> > +		(vport->rss_key_size - 1);
> > +	rss_key = rte_zmalloc("rss_key", len, 0);
> > +	if (rss_key == NULL)
> > +		return -ENOMEM;
> > +
> > +	rss_key->vport_id = vport->vport_id;
> > +	rss_key->key_len = vport->rss_key_size;
> > +	rte_memcpy(rss_key->key, vport->rss_key,
> > +		   sizeof(rss_key->key[0]) * vport->rss_key_size);
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
> > +	args.in_args = (uint8_t *)rss_key;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
> > +
> > +	rte_free(rss_key);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_set_rss_lut(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_rss_lut *rss_lut;
> > +	struct idpf_cmd_info args;
> > +	int len, err;
> > +
> > +	len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
> > +		(vport->rss_lut_size - 1);
> > +	rss_lut = rte_zmalloc("rss_lut", len, 0);
> > +	if (rss_lut == NULL)
> > +		return -ENOMEM;
> > +
> > +	rss_lut->vport_id = vport->vport_id;
> > +	rss_lut->lut_entries = vport->rss_lut_size;
> > +	rte_memcpy(rss_lut->lut, vport->rss_lut,
> > +		   sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
> > +	args.in_args = (uint8_t *)rss_lut;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
> > +
> > +	rte_free(rss_lut);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_set_rss_hash(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_rss_hash rss_hash;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	memset(&rss_hash, 0, sizeof(rss_hash));
> > +	rss_hash.ptype_groups = vport->rss_hf;
> > +	rss_hash.vport_id = vport->vport_id;
> > +
> > +	memset(&args, 0, sizeof(args));
> > +	args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
> > +	args.in_args = (uint8_t *)&rss_hash;
> > +	args.in_args_size = sizeof(rss_hash);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_queue_vector_maps *map_info;
> > +	struct virtchnl2_queue_vector *vecmap;
> > +	struct idpf_cmd_info args;
> > +	int len, i, err = 0;
> > +
> > +	len = sizeof(struct virtchnl2_queue_vector_maps) +
> > +		(nb_rxq - 1) * sizeof(struct virtchnl2_queue_vector);
> > +
> > +	map_info = rte_zmalloc("map_info", len, 0);
> > +	if (map_info == NULL)
> > +		return -ENOMEM;
> > +
> > +	map_info->vport_id = vport->vport_id;
> > +	map_info->num_qv_maps = nb_rxq;
> > +	for (i = 0; i < nb_rxq; i++) {
> > +		vecmap = &map_info->qv_maps[i];
> > +		vecmap->queue_id = vport->qv_map[i].queue_id;
> > +		vecmap->vector_id = vport->qv_map[i].vector_id;
> > +		vecmap->itr_idx = VIRTCHNL2_ITR_IDX_0;
> > +		vecmap->queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
> > +	}
> > +
> > +	args.ops = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR :
> > +		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR;
> > +	args.in_args = (uint8_t *)map_info;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUE_VECTOR",
> > +			map ? "MAP" : "UNMAP");
> > +
> > +	rte_free(map_info);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_alloc_vectors *alloc_vec;
> > +	struct idpf_cmd_info args;
> > +	int err, len;
> > +
> > +	len = sizeof(struct virtchnl2_alloc_vectors) +
> > +		(num_vectors - 1) * sizeof(struct virtchnl2_vector_chunk);
> > +	alloc_vec = rte_zmalloc("alloc_vec", len, 0);
> > +	if (alloc_vec == NULL)
> > +		return -ENOMEM;
> > +
> > +	alloc_vec->num_vectors = num_vectors;
> > +
> > +	args.ops = VIRTCHNL2_OP_ALLOC_VECTORS;
> > +	args.in_args = (uint8_t *)alloc_vec;
> > +	args.in_args_size = sizeof(struct virtchnl2_alloc_vectors);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_ALLOC_VECTORS");
> > +
> > +	if (vport->recv_vectors == NULL) {
> > +		vport->recv_vectors = rte_zmalloc("recv_vectors", len, 0);
> > +		if (vport->recv_vectors == NULL) {
> > +			rte_free(alloc_vec);
> > +			return -ENOMEM;
> > +		}
> > +	}
> > +
> > +	rte_memcpy(vport->recv_vectors, args.out_buffer, len);
> > +	rte_free(alloc_vec);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_dealloc_vectors(struct idpf_vport *vport)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_alloc_vectors *alloc_vec;
> > +	struct virtchnl2_vector_chunks *vcs;
> > +	struct idpf_cmd_info args;
> > +	int err, len;
> > +
> > +	alloc_vec = vport->recv_vectors;
> > +	vcs = &alloc_vec->vchunks;
> > +
> > +	len = sizeof(struct virtchnl2_vector_chunks) +
> > +		(vcs->num_vchunks - 1) * sizeof(struct virtchnl2_vector_chunk);
> > +
> > +	args.ops = VIRTCHNL2_OP_DEALLOC_VECTORS;
> > +	args.in_args = (uint8_t *)vcs;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command VIRTCHNL2_OP_DEALLOC_VECTORS");
> > +
> > +	return err;
> > +}
> > +
> > +static int
> > +idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
> > +			  uint32_t type, bool on)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_del_ena_dis_queues *queue_select;
> > +	struct virtchnl2_queue_chunk *queue_chunk;
> > +	struct idpf_cmd_info args;
> > +	int err, len;
> > +
> > +	len = sizeof(struct virtchnl2_del_ena_dis_queues);
> > +	queue_select = rte_zmalloc("queue_select", len, 0);
> > +	if (queue_select == NULL)
> > +		return -ENOMEM;
> > +
> > +	queue_chunk = queue_select->chunks.chunks;
> > +	queue_select->chunks.num_chunks = 1;
> > +	queue_select->vport_id = vport->vport_id;
> > +
> > +	queue_chunk->type = type;
> > +	queue_chunk->start_queue_id = qid;
> > +	queue_chunk->num_queues = 1;
> > +
> > +	args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
> > +		VIRTCHNL2_OP_DISABLE_QUEUES;
> > +	args.in_args = (uint8_t *)queue_select;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
> > +			on ? "ENABLE" : "DISABLE");
> > +
> > +	rte_free(queue_select);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
> > +		  bool rx, bool on)
> > +{
> > +	uint32_t type;
> > +	int err, queue_id;
> > +
> > +	/* switch txq/rxq */
> > +	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
> > +
> > +	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
> > +		queue_id = vport->chunks_info.rx_start_qid + qid;
> > +	else
> > +		queue_id = vport->chunks_info.tx_start_qid + qid;
> > +	err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +	if (err != 0)
> > +		return err;
> > +
> > +	/* switch tx completion queue */
> > +	if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> > +		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> > +		queue_id = vport->chunks_info.tx_compl_start_qid + qid;
> > +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +		if (err != 0)
> > +			return err;
> > +	}
> > +
> > +	/* switch rx buffer queue */
> > +	if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> > +		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> > +		queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
> > +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +		if (err != 0)
> > +			return err;
> > +		queue_id++;
> > +		err = idpf_vc_ena_dis_one_queue(vport, queue_id, type, on);
> > +		if (err != 0)
> > +			return err;
> > +	}
> > +
> > +	return err;
> > +}
> > +
> > +#define IDPF_RXTX_QUEUE_CHUNKS_NUM	2
> > +int
> > +idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_del_ena_dis_queues *queue_select;
> > +	struct virtchnl2_queue_chunk *queue_chunk;
> > +	uint32_t type;
> > +	struct idpf_cmd_info args;
> > +	uint16_t num_chunks;
> > +	int err, len;
> > +
> > +	num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
> > +	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
> > +		num_chunks++;
> > +	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
> > +		num_chunks++;
> > +
> > +	len = sizeof(struct virtchnl2_del_ena_dis_queues) +
> > +		sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
> > +	queue_select = rte_zmalloc("queue_select", len, 0);
> > +	if (queue_select == NULL)
> > +		return -ENOMEM;
> > +
> > +	queue_chunk = queue_select->chunks.chunks;
> > +	queue_select->chunks.num_chunks = num_chunks;
> > +	queue_select->vport_id = vport->vport_id;
> > +
> > +	type = VIRTCHNL2_QUEUE_TYPE_RX;
> > +	queue_chunk[type].type = type;
> > +	queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
> > +	queue_chunk[type].num_queues = vport->num_rx_q;
> > +
> > +	type = VIRTCHNL2_QUEUE_TYPE_TX;
> > +	queue_chunk[type].type = type;
> > +	queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
> > +	queue_chunk[type].num_queues = vport->num_tx_q;
> > +
> > +	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> > +		type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> > +		queue_chunk[type].type = type;
> > +		queue_chunk[type].start_queue_id =
> > +			vport->chunks_info.rx_buf_start_qid;
> > +		queue_chunk[type].num_queues = vport->num_rx_bufq;
> > +	}
> > +
> > +	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
> > +		type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> > +		queue_chunk[type].type = type;
> > +		queue_chunk[type].start_queue_id =
> > +			vport->chunks_info.tx_compl_start_qid;
> > +		queue_chunk[type].num_queues = vport->num_tx_complq;
> > +	}
> > +
> > +	args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
> > +		VIRTCHNL2_OP_DISABLE_QUEUES;
> > +	args.in_args = (uint8_t *)queue_select;
> > +	args.in_args_size = len;
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
> > +			enable ? "ENABLE" : "DISABLE");
> > +
> > +	rte_free(queue_select);
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable)
> > +{
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_vport vc_vport;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	vc_vport.vport_id = vport->vport_id;
> > +	args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
> > +		VIRTCHNL2_OP_DISABLE_VPORT;
> > +	args.in_args = (uint8_t *)&vc_vport;
> > +	args.in_args_size = sizeof(vc_vport);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0) {
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
> > +			enable ? "ENABLE" : "DISABLE");
> > +	}
> > +
> > +	return err;
> > +}
> > +
> > +int
> > +idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
> > +{
> > +	struct virtchnl2_get_ptype_info *ptype_info;
> > +	struct idpf_cmd_info args;
> > +	int len, err;
> > +
> > +	len = sizeof(struct virtchnl2_get_ptype_info);
> > +	ptype_info = rte_zmalloc("ptype_info", len, 0);
> > +	if (ptype_info == NULL)
> > +		return -ENOMEM;
> > +
> > +	ptype_info->start_ptype_id = 0;
> > +	ptype_info->num_ptypes = IDPF_MAX_PKT_TYPE;
> > +	args.ops = VIRTCHNL2_OP_GET_PTYPE_INFO;
> > +	args.in_args = (uint8_t *)ptype_info;
> > +	args.in_args_size = len;
> > +
> > +	err = idpf_execute_vc_cmd(adapter, &args);
> > +	if (err != 0)
> > +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_PTYPE_INFO");
> > +
> > +	rte_free(ptype_info);
> > +	return err;
> > +}
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.h
> > b/drivers/common/idpf/idpf_common_virtchnl.h
> > new file mode 100644
> > index 0000000000..bbc66d63c4
> > --- /dev/null
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> > @@ -0,0 +1,48 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2022 Intel Corporation
> > + */
> > +
> > +#ifndef _IDPF_COMMON_VIRTCHNL_H_
> > +#define _IDPF_COMMON_VIRTCHNL_H_
> > +
> > +#include <idpf_common_device.h>
> > +
> > +__rte_internal
> > +int idpf_vc_check_api_version(struct idpf_adapter *adapter);
> > +__rte_internal
> > +int idpf_vc_get_caps(struct idpf_adapter *adapter);
> > +__rte_internal
> > +int idpf_vc_create_vport(struct idpf_vport *vport,
> > +			 struct virtchnl2_create_vport *vport_info);
> > +__rte_internal
> > +int idpf_vc_destroy_vport(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_vc_set_rss_key(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_vc_set_rss_lut(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_vc_set_rss_hash(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
> > +		      bool rx, bool on);
> > +__rte_internal
> > +int idpf_vc_ena_dis_queues(struct idpf_vport *vport, bool enable);
> > +__rte_internal
> > +int idpf_vc_ena_dis_vport(struct idpf_vport *vport, bool enable);
> > +__rte_internal
> > +int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
> > +				 uint16_t nb_rxq, bool map);
> > +__rte_internal
> > +int idpf_vc_alloc_vectors(struct idpf_vport *vport, uint16_t num_vectors);
> > +__rte_internal
> > +int idpf_vc_dealloc_vectors(struct idpf_vport *vport);
> > +__rte_internal
> > +int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
> > +__rte_internal
> > +int idpf_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
> > +		      uint16_t buf_len, uint8_t *buf);
> > +__rte_internal
> > +int idpf_execute_vc_cmd(struct idpf_adapter *adapter,
> > +			struct idpf_cmd_info *args);
> > +
> > +#endif /* _IDPF_COMMON_VIRTCHNL_H_ */
> > diff --git a/drivers/common/idpf/meson.build
> > b/drivers/common/idpf/meson.build
> > index 77d997b4a7..d1578641ba 100644
> > --- a/drivers/common/idpf/meson.build
> > +++ b/drivers/common/idpf/meson.build
> > @@ -1,4 +1,9 @@
> >  # SPDX-License-Identifier: BSD-3-Clause
> >  # Copyright(c) 2022 Intel Corporation
> >
> > +sources = files(
> > +    'idpf_common_device.c',
> > +    'idpf_common_virtchnl.c',
> > +)
> > +
> >  subdir('base')
> > diff --git a/drivers/common/idpf/version.map
> > b/drivers/common/idpf/version.map
> > index bfb246c752..a2b8780780 100644
> > --- a/drivers/common/idpf/version.map
> > +++ b/drivers/common/idpf/version.map
> > @@ -1,12 +1,28 @@
> >  INTERNAL {
> >  	global:
> >
> > +	idpf_ctlq_clean_sq;
> >  	idpf_ctlq_deinit;
> >  	idpf_ctlq_init;
> > -	idpf_ctlq_clean_sq;
> > +	idpf_ctlq_post_rx_buffs;
> >  	idpf_ctlq_recv;
> >  	idpf_ctlq_send;
> > -	idpf_ctlq_post_rx_buffs;

And do we really need to expose all the ctlq APIs? Ideally, the APIs in the
drivers/common/idpf/base folder should only be consumed inside the idpf
common module; we should wrap them at the upper layer.
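
For example (just a sketch of the idea; the wrapper name idpf_vc_recv_one()
is made up), the common module could keep idpf_ctlq_recv() internal behind a
small idpf_vc_* wrapper, so only the wrapper needs an entry in version.map:

	/* sketch only: hypothetical wrapper keeping ctlq APIs private */
	int
	idpf_vc_recv_one(struct idpf_adapter *adapter, struct idpf_ctlq_msg *msg)
	{
		uint16_t pending = 1;

		return idpf_ctlq_recv(adapter->hw.arq, &pending, msg);
	}
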

> > +	idpf_execute_vc_cmd;
> > +	idpf_read_one_msg;
> > +	idpf_switch_queue;
>
> I think all the APIs exposed from idpf_common_virtchnl.h can follow the same
> naming rule "idpf_vc_*".
>
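
Concretely (suggested names only, not a hard requirement), the three exports
above that do not follow the rule could become:

	/* idpf_switch_queue   -> idpf_vc_switch_queue */
	int idpf_vc_switch_queue(struct idpf_vport *vport, uint16_t qid,
				 bool rx, bool on);
	/* idpf_read_one_msg   -> idpf_vc_read_one_msg */
	int idpf_vc_read_one_msg(struct idpf_adapter *adapter, uint32_t ops,
				 uint16_t buf_len, uint8_t *buf);
	/* idpf_execute_vc_cmd -> idpf_vc_execute_cmd  */
	int idpf_vc_execute_cmd(struct idpf_adapter *adapter,
				struct idpf_cmd_info *args);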