From: "Xing, Beilei"
To: "Liu, Mingxia", "dev@dpdk.org", "Zhang, Qi Z", "Wu, Jingjing"
CC: "Wang, Xiao W", "Guo, Junfeng"
Subject: RE: [PATCH 1/1] net/cpfl: add port to port feature.
Date: Mon, 13 Feb 2023 03:30:14 +0000
In-Reply-To: <20230118130659.976873-2-mingxia.liu@intel.com>
References: <20230118130659.976873-1-mingxia.liu@intel.com> <20230118130659.976873-2-mingxia.liu@intel.com>
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Liu, Mingxia
> Sent: Wednesday, January 18, 2023 9:07 PM
> To: dev@dpdk.org; Zhang, Qi Z; Wu, Jingjing; Xing, Beilei
> Cc: Liu, Mingxia; Wang, Xiao W; Guo, Junfeng
> Subject: [PATCH 1/1] net/cpfl: add port to port feature.

No need for '.' at the end of the title.

>
> - Implement hairpin queue setup/confige/enable/disable.

Confige -> configure

> - Cross-vport hairpin queue implemented via hairpin_bind/unbind API.

Better to split the features into different patches.

>
> Test step:
> 1. Make sure no bug on CP side.
> 2. Add rule on IMC.
>    - devmem 0x202920C100 64 0x804
>    - opcode=0x1303 prof_id=0x34 sub_prof_id=0x0 cookie=0xa2b87 key=0x18,\
>      0x0,00,00,00,00,de,0xad,0xbe,0xef,0x20,0x24,0x0,0x0,0x0,0x0,00,00,\
>      00,00,00,00,0xa,0x2,0x1d,0x64,00,00,00,00,00,00,00,00,00,00,00,00,\
>      0xa,0x2,0x1d,0x2,00,00,00,00,00,00,00,00,00,00,00,00 act=set_vsi{\
>      act_val=0 val_type=2 dst_pe=0 slot=0x0} act=set_q{\
>      qnum=0x142 no_implicit_vsi=1 prec=5}
> 3. Send packets on ixia side
>    UDP packets with dmac=de:ad:be:ef:20:24 sip=10.2.29.100
>    dip=10.2.29.2

The steps should be refined with an example. Step 1 can be removed.
>
> Signed-off-by: Beilei Xing
> Signed-off-by: Xiao Wang
> Signed-off-by: Junfeng Guo
> Signed-off-by: Mingxia Liu
> ---
>  drivers/common/idpf/idpf_common_device.c   |  50 ++
>  drivers/common/idpf/idpf_common_device.h   |   2 +
>  drivers/common/idpf/idpf_common_virtchnl.c | 100 ++-
>  drivers/common/idpf/idpf_common_virtchnl.h |  12 +
>  drivers/common/idpf/version.map            |   5 +
>  drivers/net/cpfl/cpfl_ethdev.c             | 374 +++++++--
>  drivers/net/cpfl/cpfl_ethdev.h             |   8 +-
>  drivers/net/cpfl/cpfl_logs.h               |   2 +
>  drivers/net/cpfl/cpfl_rxtx.c               | 851 +++++++++++++++++++--
>  drivers/net/cpfl/cpfl_rxtx.h               |  58 ++
>  drivers/net/cpfl/cpfl_rxtx_vec_common.h    |  18 +-
>  11 files changed, 1347 insertions(+), 133 deletions(-)
>
> diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
> index b90b20d0f2..be2ec19650 100644
> --- a/drivers/common/idpf/idpf_common_device.c
> +++ b/drivers/common/idpf/idpf_common_device.c
> @@ -362,6 +362,56 @@ idpf_adapter_init(struct idpf_adapter *adapter)
> 	return ret;
> }
>
> +int
> +idpf_adapter_common_init(struct idpf_adapter *adapter)

It's quite similar to idpf_adapter_init. Can be refined.
> +{
> +	struct idpf_hw *hw = &adapter->hw;
> +	int ret;
> +
> +	idpf_reset_pf(hw);
> +	ret = idpf_check_pf_reset_done(hw);
> +	if (ret != 0) {
> +		DRV_LOG(ERR, "IDPF is still resetting");
> +		goto err_check_reset;
> +	}
> +
> +	ret = idpf_init_mbx(hw);
> +	if (ret != 0) {
> +		DRV_LOG(ERR, "Failed to init mailbox");
> +		goto err_check_reset;
> +	}
> +
> +	adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp",
> +					IDPF_DFLT_MBX_BUF_SIZE, 0);
> +	if (adapter->mbx_resp == NULL) {
> +		DRV_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
> +		ret = -ENOMEM;
> +		goto err_mbx_resp;
> +	}
> +
> +	ret = idpf_vc_check_api_version(adapter);
> +	if (ret != 0) {
> +		DRV_LOG(ERR, "Failed to check api version");
> +		goto err_check_api;
> +	}
> +
> +	ret = idpf_get_pkt_type(adapter);
> +	if (ret != 0) {
> +		DRV_LOG(ERR, "Failed to set ptype table");
> +		goto err_check_api;
> +	}
> +
> +	return 0;
> +
> +err_check_api:
> +	rte_free(adapter->mbx_resp);
> +	adapter->mbx_resp = NULL;
> +err_mbx_resp:
> +	idpf_ctlq_deinit(hw);
> +err_check_reset:
> +	return ret;
> +}
> +

<...>

> --- a/drivers/common/idpf/version.map
> +++ b/drivers/common/idpf/version.map
> @@ -67,6 +67,11 @@ INTERNAL {
> 	idpf_vc_get_rss_key;
> 	idpf_vc_get_rss_lut;
> 	idpf_vc_get_rss_hash;
> +	idpf_vc_ena_dis_one_queue;
> +	idpf_vc_config_rxq_by_info;
> +	idpf_vc_config_txq_by_info;
> +	idpf_vc_get_caps_by_caps_info;
> +	idpf_adapter_common_init;

Order alphabetically.
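For clarity, the five new exports sorted as requested would read (fragment of the `version.map` additions only, ordering is the only change):

```
> +	idpf_adapter_common_init;
> +	idpf_vc_config_rxq_by_info;
> +	idpf_vc_config_txq_by_info;
> +	idpf_vc_ena_dis_one_queue;
> +	idpf_vc_get_caps_by_caps_info;
```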
>
> 	local: *;
> };
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index f178f3fbb8..e464d76b60 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -108,7 +108,9 @@ static int
> cpfl_dev_link_update(struct rte_eth_dev *dev,
> 		__rte_unused int wait_to_complete)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct rte_eth_link new_link;
>
> 	memset(&new_link, 0, sizeof(new_link));
> @@ -157,10 +159,24 @@ cpfl_dev_link_update(struct rte_eth_dev *dev,
> 	return rte_eth_linkstatus_set(dev, &new_link);
> }
>
> +static int
> +cpfl_hairpin_cap_get(__rte_unused struct rte_eth_dev *dev,
> +		     struct rte_eth_hairpin_cap *cap)
> +{
> +	cap->max_nb_queues = 1;
> +	cap->max_rx_2_tx = 1;
> +	cap->max_tx_2_rx = 1;
> +	cap->max_nb_desc = 1024;

Better to use macro.
> +
> +	return 0;
> +}
> +
> static int
> cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct idpf_adapter *adapter = vport->adapter;
>
> 	dev_info->max_rx_queues = adapter->caps.max_rx_q;
> @@ -274,8 +290,9 @@ cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
> static int
> cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> {
> -	struct idpf_vport *vport =
> -		(struct idpf_vport *)dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct virtchnl2_vport_stats *pstats = NULL;
> 	int ret;
>
> @@ -319,8 +336,9 @@ cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
> static int
> cpfl_dev_stats_reset(struct rte_eth_dev *dev)
> {
> -	struct idpf_vport *vport =
> -		(struct idpf_vport *)dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct virtchnl2_vport_stats *pstats = NULL;
> 	int ret;
>
> @@ -345,8 +363,9 @@ static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
> static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
> 			       struct rte_eth_xstat *xstats, unsigned int n)
> {
> -	struct idpf_vport *vport =
> -		(struct idpf_vport *)dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct virtchnl2_vport_stats *pstats = NULL;
> 	unsigned int i;
> 	int ret;
> @@ -442,7 +461,8 @@ cpfl_init_rss(struct idpf_vport *vport)
> {
> 	struct rte_eth_rss_conf *rss_conf;
> 	struct rte_eth_dev_data *dev_data;
> -	uint16_t i, nb_q;
> +	struct cpfl_rx_queue *cpfl_rxq;
> +	uint16_t i, nb_q, max_nb_data_q;
> 	int ret = 0;
>
> 	dev_data = vport->dev_data;
> @@ -461,8 +481,16 @@ cpfl_init_rss(struct idpf_vport *vport)
> 					 vport->rss_key_size);
> 	}
>
> +	/* RSS only to the data queues */
> +	max_nb_data_q = nb_q;
> +	if (nb_q > 1) {
> +		cpfl_rxq = dev_data->rx_queues[nb_q - 1];
> +		if (cpfl_rxq && cpfl_rxq->hairpin_info.hairpin_q)
> +			max_nb_data_q = nb_q - 1;
> +	}
> +
> 	for (i = 0; i < vport->rss_lut_size; i++)
> -		vport->rss_lut[i] = i % nb_q;
> +		vport->rss_lut[i] = i % max_nb_data_q;
>
> 	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
>
> @@ -478,7 +506,9 @@ cpfl_rss_reta_update(struct rte_eth_dev *dev,
> 		     struct rte_eth_rss_reta_entry64 *reta_conf,
> 		     uint16_t reta_size)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct idpf_adapter *adapter = vport->adapter;
> 	uint16_t idx, shift;
> 	uint32_t *lut;
> @@ -534,7 +564,9 @@ cpfl_rss_reta_query(struct rte_eth_dev *dev,
> 		    struct rte_eth_rss_reta_entry64 *reta_conf,
> 		    uint16_t reta_size)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct idpf_adapter *adapter = vport->adapter;
> 	uint16_t idx, shift;
> 	int ret = 0;
> @@ -572,7 +604,9 @@ static int
> cpfl_rss_hash_update(struct rte_eth_dev *dev,
> 		     struct rte_eth_rss_conf *rss_conf)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct idpf_adapter *adapter = vport->adapter;
> 	int ret = 0;
>
> @@ -637,7 +671,9 @@ static int
> cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
> 		       struct rte_eth_rss_conf *rss_conf)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct idpf_adapter *adapter = vport->adapter;
> 	int ret = 0;
>
> @@ -674,10 +710,10 @@ cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
> static int
> cpfl_dev_configure(struct rte_eth_dev *dev)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct rte_eth_conf *conf = &dev->data->dev_conf;
> -	struct idpf_adapter *adapter = vport->adapter;
> -	int ret;
>
> 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> @@ -716,17 +752,6 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
> 		return -ENOTSUP;
> 	}
>
> -	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
> -		ret = cpfl_init_rss(vport);
> -		if (ret != 0) {
> -			PMD_INIT_LOG(ERR, "Failed to init rss");
> -			return ret;
> -		}
> -	} else {
> -		PMD_INIT_LOG(ERR, "RSS is not supported.");
> -		return -1;
> -	}
> -
> 	vport->max_pkt_len =
> 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
> 		CPFL_ETH_OVERHEAD;
> @@ -737,7 +762,9 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
> static int
> cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
>
> 	return idpf_config_irq_map(vport, nb_rx_queues);
> @@ -746,30 +773,92 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
> static int
> cpfl_start_queues(struct rte_eth_dev *dev)
> {
> -	struct idpf_rx_queue *rxq;
> -	struct idpf_tx_queue *txq;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct cpfl_rx_queue *cpfl_rxq;
> +	struct cpfl_tx_queue *cpfl_txq;
> 	int err = 0;
> 	int i;
>
> -	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> -		txq = dev->data->tx_queues[i];
> -		if (txq == NULL || txq->tx_deferred_start)
> -			continue;
> -		err = cpfl_tx_queue_start(dev, i);
> +	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
> +		err = cpfl_init_rss(vport);
> 		if (err != 0) {
> -			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
> +			PMD_INIT_LOG(ERR, "Failed to init rss");
> 			return err;
> 		}
> +	} else {
> +		PMD_INIT_LOG(ERR, "RSS is not supported.");
> +		return -1;
> +	}
> +
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		cpfl_txq = dev->data->tx_queues[i];
> +		if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start)
> +			continue;
> +
> +		if (!cpfl_txq->hairpin_info.hairpin_q) {
> +			err = cpfl_tx_queue_start(dev, i);
> +			if (err != 0) {
> +				PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
> +				return err;
> +			}
> +		} else if (!cpfl_txq->hairpin_info.hairpin_cv) {
> +			err = cpfl_set_hairpin_txqinfo(vport, cpfl_txq);
> +			if (err) {
> +				PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i);
> +				return err;
> +			}
> +		}
> 	}
>
> 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> -		rxq = dev->data->rx_queues[i];
> -		if (rxq == NULL || rxq->rx_deferred_start)
> +		cpfl_rxq = dev->data->rx_queues[i];
> +		if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start)
> 			continue;
> -		err = cpfl_rx_queue_start(dev, i);
> -		if (err != 0) {
> -			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
> -			return err;
> +		if (!cpfl_rxq->hairpin_info.hairpin_q) {
> +			err = cpfl_rx_queue_start(dev, i);
> +			if (err != 0) {
> +				PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
> +				return err;
> +			}
> +		} else if (!cpfl_rxq->hairpin_info.hairpin_cv) {
> +			err = cpfl_set_hairpin_rxqinfo(vport, cpfl_rxq);
> +			if (err) {
> +				PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i);
> +				return err;
> +			}
> +			err = cpfl_rx_queue_init(dev, i);
> +			if (err) {
> +				PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i);
> +				return err;
> +			}
> +		}
> +	}
> +
> +	/* For non-cross vport hairpin queues, enable Txq and Rxq at last. */
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		cpfl_txq = dev->data->tx_queues[i];
> +		if (cpfl_txq->hairpin_info.hairpin_q && !cpfl_txq->hairpin_info.hairpin_cv) {
> +			err = cpfl_switch_hairpin_queue(vport, i, false, true);
> +			if (err)
> +				PMD_DRV_LOG(ERR, "Failed to switch hairpin TX queue %u on",
> +					    i);
> +			else
> +				cpfl_txq->base.q_started = true;
> +		}
> +	}
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		cpfl_rxq = dev->data->rx_queues[i];
> +		if (cpfl_rxq->hairpin_info.hairpin_q && !cpfl_rxq->hairpin_info.hairpin_cv) {
> +			err = cpfl_switch_hairpin_queue(vport, i, true, true);
> +			if (err)
> +				PMD_DRV_LOG(ERR, "Failed to switch hairpin RX queue %u on",
> +					    i);
> +			else
> +				cpfl_rxq->base.q_started = true;
> 		}
> 	}
>
> @@ -779,7 +868,9 @@ cpfl_start_queues(struct rte_eth_dev *dev)
> static int
> cpfl_dev_start(struct rte_eth_dev *dev)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct idpf_adapter *base = vport->adapter;
> 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
> 	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
> @@ -841,10 +932,106 @@ cpfl_dev_start(struct rte_eth_dev *dev)
> 	return ret;
> }
>
> +static int
> +cpfl_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
> +			    __rte_unused size_t len, uint32_t tx)
> +{
> +	/* Assume the last queue is used by app as hairpin */
> +	int qid = dev->data->nb_tx_queues - 1;
> +	struct cpfl_txq_hairpin_info *txq_hairpin_info;
> +	struct cpfl_rxq_hairpin_info *rxq_hairpin_info;
> +	struct cpfl_tx_queue *cpfl_txq = dev->data->tx_queues[qid];
> +	struct cpfl_rx_queue *cpfl_rxq = dev->data->rx_queues[qid];
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	txq_hairpin_info = &(cpfl_txq->hairpin_info);
> +	rxq_hairpin_info = &(cpfl_rxq->hairpin_info);
> +
> +	if (tx && txq_hairpin_info->hairpin_cv) {
> +		peer_ports[0] = txq_hairpin_info->peer_rxp;
> +		return 1;
> +	} else if (!tx && rxq_hairpin_info->hairpin_cv) {
> +		peer_ports[0] = rxq_hairpin_info->peer_txp;
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +cpfl_hairpin_bind(struct rte_eth_dev *dev, uint16_t rx_port)
> +{
> +	struct cpfl_vport *cpfl_vport, *peer_cpfl_vport;
> +	struct idpf_vport *vport, *peer_vport;
> +	/* Assume the last queue is used by app as hairpin */
> +	int qid = dev->data->nb_tx_queues - 1;
> +	struct cpfl_tx_queue *cpfl_txq = dev->data->tx_queues[qid];
> +	struct cpfl_rx_queue *cpfl_rxq;
> +	struct rte_eth_dev *peer_dev;
> +	int err;
> +
> +	PMD_INIT_FUNC_TRACE();
> +	if (rx_port >= RTE_MAX_ETHPORTS)
> +		return 0;
> +
> +	if (cpfl_txq->hairpin_info.bound) {
> +		PMD_DRV_LOG(INFO, "port %u already hairpin bound",
> +			    dev->data->port_id);
> +		return 0;
> +	}
> +
> +	cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	vport = &(cpfl_vport->base);
> +	err = cpfl_set_hairpin_txqinfo(vport, cpfl_txq);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u of port %u",
> +			    qid, dev->data->port_id);
> +		return err;
> +	}
> +
> +	peer_dev = &rte_eth_devices[rx_port];
> +	peer_cpfl_vport = (struct cpfl_vport *)peer_dev->data->dev_private;
> +	peer_vport = &(peer_cpfl_vport->base);
> +	cpfl_rxq = peer_dev->data->rx_queues[qid];
> +	err = cpfl_set_hairpin_rxqinfo(peer_vport, cpfl_rxq);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u of port %u",
> +			    qid, peer_dev->data->port_id);
> +		return err;
> +	}
> +	err = cpfl_rx_queue_init(peer_dev, qid);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u of port %u",
> +			    qid, peer_dev->data->port_id);
> +		return err;
> +	}
> +
> +	err = cpfl_switch_hairpin_queue(vport, qid, false, true);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "Fail to enable hairpin Tx queue %u of port %u",
> +			    qid, dev->data->port_id);
> +		return err;
> +	}
> +
> +	err = cpfl_switch_hairpin_queue(peer_vport, qid, true, true);
> +	if (err) {
> +		PMD_DRV_LOG(ERR, "Fail to enable hairpin Rx queue %u of port %u",
> +			    qid, peer_dev->data->port_id);
> +		return err;
> +	}
> +
> +	cpfl_txq->hairpin_info.bound = true;
> +	return 0;
> +}
> +
> static int
> cpfl_dev_stop(struct rte_eth_dev *dev)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
>
> 	if (vport->stopped == 1)
> 		return 0;
> @@ -865,17 +1052,23 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
> static int
> cpfl_dev_close(struct rte_eth_dev *dev)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
>
> 	cpfl_dev_stop(dev);
> +	if (cpfl_vport->p2p_mp) {
> +		rte_mempool_free(cpfl_vport->p2p_mp);
> +		cpfl_vport->p2p_mp = NULL;
> +	}
> 	idpf_vport_deinit(vport);
>
> 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
> 	adapter->cur_vport_nb--;
> 	dev->data->dev_private = NULL;
> 	adapter->vports[vport->sw_idx] = NULL;
> -	rte_free(vport);
> +	rte_free(cpfl_vport);
>
> 	return 0;
> }
> @@ -1048,7 +1241,7 @@ cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
> 	int i;
>
> 	for (i = 0; i < adapter->cur_vport_nb; i++) {
> -		vport = adapter->vports[i];
> +		vport = &(adapter->vports[i]->base);
> 		if (vport->vport_id != vport_id)
> 			continue;
> 		else
> @@ -1162,6 +1355,72 @@ cpfl_dev_alarm_handler(void *param)
> 	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
> }
>
> +static int
> +cpfl_get_caps(struct idpf_adapter *adapter)
> +{
> +	struct virtchnl2_get_capabilities caps_msg = {0};
> +
> +	caps_msg.csum_caps =
> +		VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP |
> +		VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |
> +		VIRTCHNL2_CAP_TX_CSUM_GENERIC |
> +		VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP |
> +		VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP |
> +		VIRTCHNL2_CAP_RX_CSUM_GENERIC;
> +
> +	caps_msg.rss_caps =
> +		VIRTCHNL2_CAP_RSS_IPV4_TCP |
> +		VIRTCHNL2_CAP_RSS_IPV4_UDP |
> +		VIRTCHNL2_CAP_RSS_IPV4_SCTP |
> +		VIRTCHNL2_CAP_RSS_IPV4_OTHER |
> +		VIRTCHNL2_CAP_RSS_IPV6_TCP |
> +		VIRTCHNL2_CAP_RSS_IPV6_UDP |
> +		VIRTCHNL2_CAP_RSS_IPV6_SCTP |
> +		VIRTCHNL2_CAP_RSS_IPV6_OTHER |
> +		VIRTCHNL2_CAP_RSS_IPV4_AH |
> +		VIRTCHNL2_CAP_RSS_IPV4_ESP |
> +		VIRTCHNL2_CAP_RSS_IPV4_AH_ESP |
> +		VIRTCHNL2_CAP_RSS_IPV6_AH |
> +		VIRTCHNL2_CAP_RSS_IPV6_ESP |
> +		VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
> +
> +	caps_msg.other_caps = VIRTCHNL2_CAP_WB_ON_ITR |
> +			      VIRTCHNL2_CAP_PTP |
> +			      VIRTCHNL2_CAP_RX_FLEX_DESC;
> +
> +	return idpf_vc_get_caps_by_caps_info(adapter, &caps_msg);
> +}
> +
> +static int
> +cpfl_adapter_init(struct idpf_adapter *adapter)
> +{
> +	int ret = 0;
> +
> +	ret = idpf_adapter_common_init(adapter);
> +	if (ret != 0) {
> +		PMD_DRV_LOG(ERR, "Failed to init idpf common adapter");
> +		return ret;
> +	}
> +
> +	ret = cpfl_get_caps(adapter);
> +	if (ret != 0) {
> +		PMD_DRV_LOG(ERR, "Failed to get capabilities");
> +		return ret;
> +	}
> +
> +	return ret;
> +}
> +
> static int
> cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
> {
> @@ -1178,7 +1437,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
>
> 	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
>
> -	ret = idpf_adapter_init(base);
> +	ret = cpfl_adapter_init(base);
> 	if (ret != 0) {
> 		PMD_INIT_LOG(ERR, "Failed to init adapter");
> 		goto err_adapter_init;
> @@ -1237,6 +1496,11 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
> 	.xstats_get			= cpfl_dev_xstats_get,
> 	.xstats_get_names		= cpfl_dev_xstats_get_names,
> 	.xstats_reset			= cpfl_dev_xstats_reset,
> +	.hairpin_cap_get		= cpfl_hairpin_cap_get,
> +	.rx_hairpin_queue_setup		= cpfl_rx_hairpin_queue_setup,
> +	.tx_hairpin_queue_setup		= cpfl_tx_hairpin_queue_setup,
> +	.hairpin_get_peer_ports		= cpfl_hairpin_get_peer_ports,
> +	.hairpin_bind			= cpfl_hairpin_bind,
> };
>
> static uint16_t
> @@ -1261,7 +1525,9 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
> static int
> cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
> {
> -	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport *cpfl_vport =
> +		(struct cpfl_vport *)dev->data->dev_private;
> +	struct idpf_vport *vport = &(cpfl_vport->base);
> 	struct cpfl_vport_param *param = init_params;
> 	struct cpfl_adapter_ext *adapter = param->adapter;
> 	/* for sending create vport virtchnl msg prepare */
> @@ -1287,7 +1553,7 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
> 		goto err;
> 	}
>
> -	adapter->vports[param->idx] = vport;
> +	adapter->vports[param->idx] = cpfl_vport;
> 	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
> 	adapter->cur_vport_nb++;
>
> @@ -1370,7 +1636,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> 	if (adapter == NULL) {
> 		first_probe = true;
> 		adapter = rte_zmalloc("cpfl_adapter_ext",
> -				      sizeof(struct cpfl_adapter_ext),
> -				      0);
> +				      sizeof(struct cpfl_adapter_ext), 0);
> 		if (adapter == NULL) {
> 			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> 			return -ENOMEM;
> @@ -1405,7 +1671,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> 	snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> 		 pci_dev->device.name);
> 	retval = rte_eth_dev_create(&pci_dev->device, name,
> -				    sizeof(struct idpf_vport),
> +				    sizeof(struct cpfl_vport),
> 				    NULL, NULL, cpfl_dev_vport_init,
> 				    &vport_param);
> 	if (retval != 0)
> @@ -1423,7 +1689,7 @@ cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> 			 pci_dev->device.name,
> 			 devargs.req_vports[i]);
> 		retval = rte_eth_dev_create(&pci_dev->device, name,
> -					    sizeof(struct idpf_vport),
> +					    sizeof(struct cpfl_vport),
> 					    NULL, NULL,
> 					    cpfl_dev_vport_init,
> 					    &vport_param);
> 		if (retval != 0)
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index 0d60ee3aed..65c10c0c64 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -70,13 +70,19 @@ struct cpfl_devargs {
> 	uint16_t req_vport_nb;
> };
>
> +struct cpfl_vport {
> +	/* p2p mbuf pool */
> +	struct rte_mempool *p2p_mp;
> +	struct idpf_vport base;
> +};

It can be in a separate patch which introduces the new structure and code refactor.

> +
> struct cpfl_adapter_ext {
> 	TAILQ_ENTRY(cpfl_adapter_ext) next;
> 	struct idpf_adapter base;
>
> 	char name[CPFL_ADAPTER_NAME_LEN];
>
> -	struct idpf_vport **vports;
> +	struct cpfl_vport **vports;
> 	uint16_t max_vport_nb;
>
> 	uint16_t cur_vports; /* bit mask of created vport */