From: "Xing, Beilei"
To: "Wu, Jingjing"
Cc: "dev@dpdk.org", "Liu, Mingxia"
Subject: RE: [PATCH v8 03/14] net/cpfl: add hairpin queue group during vport init
Date: Mon, 5 Jun 2023 08:53:47 +0000
References: <20230531130450.26380-1-beilei.xing@intel.com> <20230605061724.88130-1-beilei.xing@intel.com> <20230605061724.88130-4-beilei.xing@intel.com>
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Wu, Jingjing
> Sent: Monday, June 5, 2023 4:36 PM
> To: Xing, Beilei
> Cc: dev@dpdk.org; Liu, Mingxia
> Subject: RE: [PATCH v8 03/14] net/cpfl: add hairpin queue group during
> vport init
>
> > -----Original Message-----
> > From: Xing, Beilei
> > Sent: Monday, June 5, 2023 2:17 PM
> > To: Wu, Jingjing
> > Cc: dev@dpdk.org; Liu, Mingxia ; Xing, Beilei
> >
> > Subject: [PATCH v8 03/14] net/cpfl: add hairpin queue group during
> > vport init
> >
> > From: Beilei Xing
> >
> > This patch adds a hairpin queue group during vport init.
> >
> > Signed-off-by: Mingxia Liu
> > Signed-off-by: Beilei Xing
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c | 133 +++++++++++++++++++++++++++++++++
> >  drivers/net/cpfl/cpfl_ethdev.h |  18 +++++
> >  drivers/net/cpfl/cpfl_rxtx.h   |   7 ++
> >  3 files changed, 158 insertions(+)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> > index e587155db6..c1273a7478 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -840,6 +840,20 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
> >  	return 0;
> >  }
> >
> > +static int
> > +cpfl_p2p_queue_grps_del(struct idpf_vport *vport)
> > +{
> > +	struct virtchnl2_queue_group_id qg_ids[CPFL_P2P_NB_QUEUE_GRPS] = {0};
> > +	int ret = 0;
> > +
> > +	qg_ids[0].queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
> > +	qg_ids[0].queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
> > +	ret = idpf_vc_queue_grps_del(vport, CPFL_P2P_NB_QUEUE_GRPS, qg_ids);
> > +	if (ret)
> > +		PMD_DRV_LOG(ERR, "Failed to delete p2p queue groups");
> > +	return ret;
> > +}
> > +
> >  static int
> >  cpfl_dev_close(struct rte_eth_dev *dev)
> >  {
> > @@ -848,7 +862,12 @@ cpfl_dev_close(struct rte_eth_dev *dev)
> >  	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
> >
> >  	cpfl_dev_stop(dev);
> > +
> > +	if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq)
> > +		cpfl_p2p_queue_grps_del(vport);
> > +
> >  	idpf_vport_deinit(vport);
> > +	rte_free(cpfl_vport->p2p_q_chunks_info);
> >
> >  	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
> >  	adapter->cur_vport_nb--;
> > @@ -1284,6 +1303,96 @@ cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter)
> >  	return vport_idx;
> >  }
> >
> > +static int
> > +cpfl_p2p_q_grps_add(struct idpf_vport *vport,
> > +		    struct virtchnl2_add_queue_groups *p2p_queue_grps_info,
> > +		    uint8_t *p2p_q_vc_out_info)
> > +{
> > +	int ret;
> > +
> > +	p2p_queue_grps_info->vport_id = vport->vport_id;
> > +	p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
> > +	p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
> > +	p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
> > +	p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
> > +	p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
> > +	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
> > +	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
> > +	p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0;
> > +	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
> > +	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0;
> > +	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
> > +	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0;
> > +
> > +	ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info);
> > +	if (ret != 0) {
> > +		PMD_DRV_LOG(ERR, "Failed to add p2p queue groups.");
> > +		return ret;
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +static int
> > +cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
> > +			 struct virtchnl2_add_queue_groups *p2p_q_vc_out_info)
> > +{
> > +	struct p2p_queue_chunks_info *p2p_q_chunks_info = cpfl_vport->p2p_q_chunks_info;
> > +	struct virtchnl2_queue_reg_chunks *vc_chunks_out;
> > +	int i, type;
> > +
> > +	if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type !=
> > +	    VIRTCHNL2_QUEUE_GROUP_P2P) {
> > +		PMD_DRV_LOG(ERR, "Add queue group response mismatch.");
> > +		return -EINVAL;
> > +	}
> > +
> > +	vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks;
> > +
> > +	for (i = 0; i < vc_chunks_out->num_chunks; i++) {
> > +		type = vc_chunks_out->chunks[i].type;
> > +		switch (type) {
> > +		case VIRTCHNL2_QUEUE_TYPE_TX:
> > +			p2p_q_chunks_info->tx_start_qid =
> > +				vc_chunks_out->chunks[i].start_queue_id;
> > +			p2p_q_chunks_info->tx_qtail_start =
> > +				vc_chunks_out->chunks[i].qtail_reg_start;
> > +			p2p_q_chunks_info->tx_qtail_spacing =
> > +				vc_chunks_out->chunks[i].qtail_reg_spacing;
> > +			break;
> > +		case VIRTCHNL2_QUEUE_TYPE_RX:
> > +			p2p_q_chunks_info->rx_start_qid =
> > +				vc_chunks_out->chunks[i].start_queue_id;
> > +			p2p_q_chunks_info->rx_qtail_start =
> > +				vc_chunks_out->chunks[i].qtail_reg_start;
> > +			p2p_q_chunks_info->rx_qtail_spacing =
> > +				vc_chunks_out->chunks[i].qtail_reg_spacing;
> > +			break;
> > +		case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
> > +			p2p_q_chunks_info->tx_compl_start_qid =
> > +				vc_chunks_out->chunks[i].start_queue_id;
> > +			p2p_q_chunks_info->tx_compl_qtail_start =
> > +				vc_chunks_out->chunks[i].qtail_reg_start;
> > +			p2p_q_chunks_info->tx_compl_qtail_spacing =
> > +				vc_chunks_out->chunks[i].qtail_reg_spacing;
> > +			break;
> > +		case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
> > +			p2p_q_chunks_info->rx_buf_start_qid =
> > +				vc_chunks_out->chunks[i].start_queue_id;
> > +			p2p_q_chunks_info->rx_buf_qtail_start =
> > +				vc_chunks_out->chunks[i].qtail_reg_start;
> > +			p2p_q_chunks_info->rx_buf_qtail_spacing =
> > +				vc_chunks_out->chunks[i].qtail_reg_spacing;
> > +			break;
> > +		default:
> > +			PMD_DRV_LOG(ERR, "Unsupported queue type");
> > +			break;
> > +		}
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >  static int
> >  cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
> >  {
> > @@ -1293,6 +1402,8 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
> >  	struct cpfl_adapter_ext *adapter = param->adapter;
> >  	/* for sending create vport virtchnl msg prepare */
> >  	struct virtchnl2_create_vport create_vport_info;
> > +	struct virtchnl2_add_queue_groups p2p_queue_grps_info;
> > +	uint8_t p2p_q_vc_out_info[IDPF_DFLT_MBX_BUF_SIZE] = {0};
> >  	int ret = 0;
> >
> >  	dev->dev_ops = &cpfl_eth_dev_ops;
> > @@ -1327,6 +1438,28 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
> >  	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
> >  			    &dev->data->mac_addrs[0]);
> >
> > +	if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) {
> > +		memset(&p2p_queue_grps_info, 0, sizeof(p2p_queue_grps_info));
> > +		ret = cpfl_p2p_q_grps_add(vport, &p2p_queue_grps_info, p2p_q_vc_out_info);
> > +		if (ret != 0) {
> > +			PMD_INIT_LOG(ERR, "Failed to add p2p queue group.");
> > +			return 0;
> > +		}
> > +		cpfl_vport->p2p_q_chunks_info = rte_zmalloc(NULL,
> > +						sizeof(struct p2p_queue_chunks_info), 0);
> > +		if (cpfl_vport->p2p_q_chunks_info == NULL) {
> > +			PMD_INIT_LOG(ERR, "Failed to allocate p2p queue info.");
> > +			cpfl_p2p_queue_grps_del(vport);
> > +			return 0;
> > +		}
> > +		ret = cpfl_p2p_queue_info_init(cpfl_vport,
> > +				(struct virtchnl2_add_queue_groups *)p2p_q_vc_out_info);
> > +		if (ret != 0) {
> > +			PMD_INIT_LOG(ERR, "Failed to init p2p queue info.");
> > +			cpfl_p2p_queue_grps_del(vport);
>
> Forgot to free p2p_q_chunks_info?
> And better to use WARNING, as it is not returned with a negative value.

Yes, p2p_q_chunks_info needs to be freed there. Will fix in the next version.