From: "Liu, Mingxia"
To: "Xing, Beilei", "Wu, Jingjing"
CC: "dev@dpdk.org", "Wang, Xiao W"
Subject: RE: [PATCH v4 05/13] net/cpfl: support hairpin queue setup and release
Date: Tue, 30 May 2023 02:49:45 +0000
References: <20230519073116.56749-1-beilei.xing@intel.com>
 <20230526073850.101079-1-beilei.xing@intel.com>
 <20230526073850.101079-6-beilei.xing@intel.com>
> -----Original Message-----
> From: Liu, Mingxia
> Sent: Tuesday, May 30, 2023 10:27 AM
> To: Xing, Beilei; Wu, Jingjing
> Cc: dev@dpdk.org; Wang, Xiao W
> Subject: RE: [PATCH v4 05/13] net/cpfl: support hairpin queue setup and release
>
>
> > -----Original Message-----
> > From: Xing, Beilei
> > Sent: Friday, May 26, 2023 3:39 PM
> > To: Wu, Jingjing
> > Cc: dev@dpdk.org; Liu, Mingxia; Xing, Beilei; Wang, Xiao W
> > Subject: [PATCH v4 05/13] net/cpfl: support hairpin queue setup and
> > release
> >
> > From: Beilei Xing
> >
> > Support hairpin Rx/Tx queue setup and release.
> >
> > Signed-off-by: Xiao Wang
> > Signed-off-by: Mingxia Liu
> > Signed-off-by: Beilei Xing
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c          |   6 +
> >  drivers/net/cpfl/cpfl_ethdev.h          |  11 +
> >  drivers/net/cpfl/cpfl_rxtx.c            | 353 +++++++++++++++++++++++-
> >  drivers/net/cpfl/cpfl_rxtx.h            |  36 +++
> >  drivers/net/cpfl/cpfl_rxtx_vec_common.h |   4 +
> >  5 files changed, 409 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> > index 40b4515539..b17c538ec2 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -879,6 +879,10 @@ cpfl_dev_close(struct rte_eth_dev *dev)
> >  	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
> >
> >  	cpfl_dev_stop(dev);
> > +	if (cpfl_vport->p2p_mp) {
> > +		rte_mempool_free(cpfl_vport->p2p_mp);
> > +		cpfl_vport->p2p_mp = NULL;
> > +	}
> >
> >  	if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq)
> >  		cpfl_p2p_queue_grps_del(vport);
> > @@ -922,6 +926,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
> >  	.xstats_get_names		= cpfl_dev_xstats_get_names,
> >  	.xstats_reset			= cpfl_dev_xstats_reset,
> >  	.hairpin_cap_get		= cpfl_hairpin_cap_get,
> > +	.rx_hairpin_queue_setup		= cpfl_rx_hairpin_queue_setup,
> > +	.tx_hairpin_queue_setup		= cpfl_tx_hairpin_queue_setup,
> >  };
> >
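For context, these two new dev_ops callbacks are what the generic ethdev hairpin API dispatches to. A minimal application-side sketch is below; the port id, queue index, descriptor count and helper name are illustrative only and not part of this patch:

#include <rte_ethdev.h>

/* Hypothetical helper: configure the Rx and Tx hairpin queues of one port
 * as peers of each other (single-port hairpin), with automatic binding at
 * device start.  Values are illustrative; real applications take them from
 * their own configuration.
 */
static int
setup_hairpin_pair(uint16_t port_id, uint16_t q_idx, uint16_t nb_desc)
{
	struct rte_eth_hairpin_conf conf = {
		.peer_count = 1,
		.manual_bind = 0,	/* bind automatically in dev_start */
		.tx_explicit = 0,
		.peers[0] = { .port = port_id, .queue = q_idx },
	};
	int ret;

	/* Ends up in the PMD's rx_hairpin_queue_setup op, e.g. the
	 * cpfl_rx_hairpin_queue_setup() added by this patch. */
	ret = rte_eth_rx_hairpin_queue_setup(port_id, q_idx, nb_desc, &conf);
	if (ret != 0)
		return ret;

	/* Ends up in the PMD's tx_hairpin_queue_setup op. */
	return rte_eth_tx_hairpin_queue_setup(port_id, q_idx, nb_desc, &conf);
}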
> > +int
> > +cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> > +			    uint16_t nb_desc,
> > +			    const struct rte_eth_hairpin_conf *conf)
> > +{
> > +	struct cpfl_vport *cpfl_vport = (struct cpfl_vport *)dev->data->dev_private;
> > +	struct idpf_vport *vport = &cpfl_vport->base;
> > +	struct idpf_adapter *adapter_base = vport->adapter;
> > +	uint16_t logic_qid = cpfl_vport->nb_p2p_rxq;
> > +	struct cpfl_rxq_hairpin_info *hairpin_info;
> > +	struct cpfl_rx_queue *cpfl_rxq;
> > +	struct idpf_rx_queue *bufq1 = NULL;
> > +	struct idpf_rx_queue *rxq;
> > +	uint16_t peer_port, peer_q;
> > +	uint16_t qid;
> > +	int ret;
> > +
> > +	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
> > +		PMD_INIT_LOG(ERR, "Only spilt queue model supports hairpin queue.");
> > +		return -EINVAL;
> > +	}
> > +
> > +	if (conf->peer_count != 1) {
> > +		PMD_INIT_LOG(ERR, "Can't support Rx hairpin queue peer count %d", conf->peer_count);
> > +		return -EINVAL;
> > +	}
> > +
> > +	peer_port = conf->peers[0].port;
> > +	peer_q = conf->peers[0].queue;
> > +
> > +	if (nb_desc % CPFL_ALIGN_RING_DESC != 0 ||
> > +	    nb_desc > CPFL_MAX_RING_DESC ||
> > +	    nb_desc < CPFL_MIN_RING_DESC) {
> > +		PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid", nb_desc);
> > +		return -EINVAL;
> > +	}
> > +
> > +	/* Free memory if needed */
> > +	if (dev->data->rx_queues[queue_idx]) {
> > +		cpfl_rx_queue_release(dev->data->rx_queues[queue_idx]);
> > +		dev->data->rx_queues[queue_idx] = NULL;
> > +	}
> > +
> > +	/* Setup Rx description queue */
> > +	cpfl_rxq = rte_zmalloc_socket("cpfl hairpin rxq",
> > +				      sizeof(struct cpfl_rx_queue),
> > +				      RTE_CACHE_LINE_SIZE,
> > +				      SOCKET_ID_ANY);
> > +	if (!cpfl_rxq) {
> > +		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
> > +		return -ENOMEM;
> > +	}
> > +
> > +	rxq = &cpfl_rxq->base;
> > +	hairpin_info = &cpfl_rxq->hairpin_info;
> > +	rxq->nb_rx_desc = nb_desc * 2;
> > +	rxq->queue_id = cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info->rx_start_qid, logic_qid);
> > +	rxq->port_id = dev->data->port_id;
> > +	rxq->adapter = adapter_base;
> > +	rxq->rx_buf_len = CPFL_P2P_MBUF_SIZE - RTE_PKTMBUF_HEADROOM;
> > +	hairpin_info->hairpin_q = true;
> > +	hairpin_info->peer_txp = peer_port;
> > +	hairpin_info->peer_txq_id = peer_q;
> > +
> > +	if (conf->manual_bind != 0)
> > +		cpfl_vport->p2p_manual_bind = true;
> > +	else
> > +		cpfl_vport->p2p_manual_bind = false;
> > +
> > +	if (cpfl_vport->p2p_rx_bufq == NULL) {
> > +		bufq1 = rte_zmalloc_socket("hairpin rx bufq1",
> > +					   sizeof(struct idpf_rx_queue),
> > +					   RTE_CACHE_LINE_SIZE,
> > +					   SOCKET_ID_ANY);
> > +		if (!bufq1) {
> > +			PMD_INIT_LOG(ERR, "Failed to allocate memory for hairpin Rx buffer queue 1.");
> > +			ret = -ENOMEM;
> > +			goto err_alloc_bufq1;
> > +		}
> > +		qid = 2 * logic_qid;
> > +		ret = cpfl_rx_hairpin_bufq_setup(dev, bufq1, qid, nb_desc);
> > +		if (ret) {
> > +			PMD_INIT_LOG(ERR, "Failed to setup hairpin Rx buffer queue 1");
> > +			ret = -EINVAL;
> > +			goto err_setup_bufq1;
> > +		}
> > +		cpfl_vport->p2p_rx_bufq = bufq1;
> > +	}
> > +
> > +	rxq->bufq1 = cpfl_vport->p2p_rx_bufq;
> > +	rxq->bufq2 = NULL;
> > +
> > +	cpfl_vport->nb_p2p_rxq++;
> > +	rxq->q_set = true;
> > +	dev->data->rx_queues[queue_idx] = cpfl_rxq;
> > +
> > +	return 0;
> > +
> > +err_setup_bufq1:
> > +	rte_free(bufq1);
> > +err_alloc_bufq1:
> > +	rte_free(rxq);
> [Liu, Mingxia] Here it should be cpfl_rxq that is freed, right?
> > +
> > +	return ret;
> > +}
> > +
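On the error path questioned above: cpfl_rxq is the object returned by rte_zmalloc_socket(), while rxq only points at its embedded base member. One possible shape for the tail of the function is sketched below (illustrative only, reusing the labels already present in the patch, not a tested change):

err_setup_bufq1:
	rte_free(bufq1);
err_alloc_bufq1:
	rte_free(cpfl_rxq);	/* free the allocated wrapper, not &cpfl_rxq->base */

	return ret;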
"Number (%u) of transmit descriptors is > > invalid", > > + nb_desc); > > + return -EINVAL; > > + } > > + > > + /* Free memory if needed. */ > > + if (dev->data->tx_queues[queue_idx]) { > > + cpfl_tx_queue_release(dev->data->tx_queues[queue_idx]); > > + dev->data->tx_queues[queue_idx] =3D NULL; > > + } > > + > > + /* Allocate the TX queue data structure. */ > > + cpfl_txq =3D rte_zmalloc_socket("cpfl hairpin txq", > > + sizeof(struct cpfl_tx_queue), > > + RTE_CACHE_LINE_SIZE, > > + SOCKET_ID_ANY); > > + if (!cpfl_txq) { > > + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue > > structure"); > > + return -ENOMEM; > > + } > > + > > + txq =3D &cpfl_txq->base; > > + hairpin_info =3D &cpfl_txq->hairpin_info; > > + /* Txq ring length should be 2 times of Tx completion queue size. */ > > + txq->nb_tx_desc =3D nb_desc * 2; > > + txq->queue_id =3D cpfl_hw_qid_get(cpfl_vport->p2p_q_chunks_info- > > >tx_start_qid, logic_qid); > > + txq->port_id =3D dev->data->port_id; > > + hairpin_info->hairpin_q =3D true; > > + hairpin_info->peer_rxp =3D peer_port; > > + hairpin_info->peer_rxq_id =3D peer_q; > > + > > + if (conf->manual_bind !=3D 0) > > + cpfl_vport->p2p_manual_bind =3D true; > > + else > > + cpfl_vport->p2p_manual_bind =3D false; > > + > > + /* Always Tx hairpin queue allocates Tx HW ring */ > > + ring_size =3D RTE_ALIGN(txq->nb_tx_desc * CPFL_P2P_DESC_LEN, > > + CPFL_DMA_MEM_ALIGN); > > + mz =3D rte_eth_dma_zone_reserve(dev, "hairpin_tx_ring", logic_qid, > > + ring_size + CPFL_P2P_RING_BUF, > > + CPFL_RING_BASE_ALIGN, > > + dev->device->numa_node); > > + if (!mz) { > > + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX"); > > + rte_free(txq); > [Liu, Mingxia] Here should free cpfl_txq, right=1B$B!)=1B(B > > + return -ENOMEM; > > + } > > + > > + txq->tx_ring_phys_addr =3D mz->iova; > > + txq->desc_ring =3D mz->addr; > > + txq->mz =3D mz; > > + > > + cpfl_tx_hairpin_descq_reset(txq); > > + txq->qtx_tail =3D hw->hw_addr + > > + cpfl_hw_qtail_get(cpfl_vport->p2p_q_chunks_info- > > >tx_qtail_start, > > + logic_qid, cpfl_vport->p2p_q_chunks_info- > > >tx_qtail_spacing); > > + txq->ops =3D &def_txq_ops; > > + > > + if (cpfl_vport->p2p_tx_complq =3D=3D NULL) { > > + cq =3D rte_zmalloc_socket("cpfl hairpin cq", > > + sizeof(struct idpf_tx_queue), > > + RTE_CACHE_LINE_SIZE, > > + dev->device->numa_node); > > + if (!cq) { > > + PMD_INIT_LOG(ERR, "Failed to allocate memory for tx > > queue structure"); > [Liu, Mingxia] Before returning, should free some resource, such as free = cpfl_txq, > right=1B$B!)=1B(B [Liu, Mingxia] In addition, should txq->mz be freed before release cpfl_txq= ? > > + return -ENOMEM; > > + } > > + > > + cq->nb_tx_desc =3D nb_desc; > > + cq->queue_id =3D cpfl_hw_qid_get(cpfl_vport- > > >p2p_q_chunks_info->tx_compl_start_qid, > > + 0); > > + cq->port_id =3D dev->data->port_id; > > + > > + /* Tx completion queue always allocates the HW ring */ > > + ring_size =3D RTE_ALIGN(cq->nb_tx_desc * CPFL_P2P_DESC_LEN, > > + CPFL_DMA_MEM_ALIGN); > > + mz =3D rte_eth_dma_zone_reserve(dev, "hairpin_tx_compl_ring", > > logic_qid, > > + ring_size + CPFL_P2P_RING_BUF, > > + CPFL_RING_BASE_ALIGN, > > + dev->device->numa_node); > > + if (!mz) { > > + PMD_INIT_LOG(ERR, "Failed to reserve DMA memory > > for TX completion queue"); > > + rte_free(txq); >=20 > [Liu, Mingxia]Here should free cpfl_txq, right=1B$B!)=1B(BIn addition, sh= ould cq resource > be released? [Liu, Mingxia] In addition, should txq->mz be freed before release cpfl_txq= ? >=20 >=20