From: "Liu, Mingxia"
To: "Xing, Beilei", "Wu, Jingjing"
Cc: "dev@dpdk.org", "Wang, Xiao W"
Subject: RE: [PATCH 06/10] net/cpfl: support hairpin queue configuration
Date: Mon, 24 Apr 2023 09:48:25 +0000
References: <20230421065048.106899-1-beilei.xing@intel.com>
 <20230421065048.106899-7-beilei.xing@intel.com>
In-Reply-To: <20230421065048.106899-7-beilei.xing@intel.com>
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Xing, Beilei
> Sent: Friday, April 21, 2023 2:51 PM
> To: Wu, Jingjing
> Cc: dev@dpdk.org; Liu, Mingxia; Xing, Beilei; Wang, Xiao W
> Subject: [PATCH 06/10] net/cpfl: support hairpin queue configuration
>
> From: Beilei Xing
>
> This patch supports Rx/Tx hairpin queue configuration.
>
> Signed-off-by: Xiao Wang
> Signed-off-by: Mingxia Liu
> Signed-off-by: Beilei Xing
> ---
>  drivers/common/idpf/idpf_common_virtchnl.c |  70 +++++++++++
>  drivers/common/idpf/idpf_common_virtchnl.h |   6 +
>  drivers/common/idpf/version.map            |   2 +
>  drivers/net/cpfl/cpfl_ethdev.c             | 136 ++++++++++++++++++++-
>  drivers/net/cpfl/cpfl_rxtx.c               |  80 ++++++++++++
>  drivers/net/cpfl/cpfl_rxtx.h               |   7 ++
>  6 files changed, 297 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> b/drivers/common/idpf/idpf_common_virtchnl.c
> index 76a658bb26..50cd43a8dd 100644
> --- a/drivers/common/idpf/idpf_common_virtchnl.c
> +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> @@ -1050,6 +1050,41 @@ idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
>  	return err;
>  }
>
> +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info,
> +			       uint16_t num_qs)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
> +	struct idpf_cmd_info args;
> +	int size, err, i;
> +
> +	size = sizeof(*vc_rxqs) + (num_qs - 1) *
> +		sizeof(struct virtchnl2_rxq_info);
> +	vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
> +	if (vc_rxqs == NULL) {
> +		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
> +		err = -ENOMEM;
> +		return err;
> +	}
> +	vc_rxqs->vport_id = vport->vport_id;
> +	vc_rxqs->num_qinfo = num_qs;
> +	memcpy(vc_rxqs->qinfo, rxq_info, num_qs * sizeof(struct virtchnl2_rxq_info));
> +
> +	memset(&args, 0, sizeof(args));
> +	args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
> +	args.in_args = (uint8_t *)vc_rxqs;
> +	args.in_args_size = size;
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_vc_cmd_execute(adapter, &args);
> +	rte_free(vc_rxqs);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
> +
> +	return err;
> +}
> +
>  int
>  idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
>  {
> @@ -1121,6 +1156,41 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
>  	return err;
>  }
>
> +int
> +idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info,
> +			   uint16_t num_qs)
> +{
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_config_tx_queues *vc_txqs = NULL;
> +	struct idpf_cmd_info args;
> +	int size, err;
> +
> +	size = sizeof(*vc_txqs) + (num_qs - 1) * sizeof(struct virtchnl2_txq_info);
> +	vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
> +	if (vc_txqs == NULL) {
> +		DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
> +		err = -ENOMEM;
> +		return err;
> +	}
> +	vc_txqs->vport_id = vport->vport_id;
> +	vc_txqs->num_qinfo = num_qs;
> +	memcpy(vc_txqs->qinfo, txq_info, num_qs * sizeof(struct virtchnl2_txq_info));
> +
> +	memset(&args, 0, sizeof(args));
> +	args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
> +	args.in_args = (uint8_t *)vc_txqs;
> +	args.in_args_size = size;
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_vc_cmd_execute(adapter, &args);
> +	rte_free(vc_txqs);
> +	if (err != 0)
> +		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
> +
> +	return err;
> +}
> +
>  int
>  idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
>  		  struct idpf_ctlq_msg *q_msg)
> diff --git a/drivers/common/idpf/idpf_common_virtchnl.h
> b/drivers/common/idpf/idpf_common_virtchnl.h
> index bf1d014c8d..277235ba7d 100644
> --- a/drivers/common/idpf/idpf_common_virtchnl.h
> +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> @@ -65,6 +65,12 @@ __rte_internal
>  int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
>  			       u16 *buff_count, struct idpf_dma_mem **buffs);
>  __rte_internal
> +int idpf_vc_rxq_config_by_info(struct idpf_vport *vport, struct virtchnl2_rxq_info *rxq_info,
> +			       uint16_t num_qs);
> +__rte_internal
> +int idpf_vc_txq_config_by_info(struct idpf_vport *vport, struct virtchnl2_txq_info *txq_info,
> +			       uint16_t num_qs);
> +__rte_internal
>  int idpf_vc_queue_grps_del(struct idpf_vport *vport,
>  			   uint16_t num_q_grps,
>  			   struct virtchnl2_queue_group_id *qg_ids);
> diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
> index aa67f7ee27..a339a4bf8e 100644
> --- a/drivers/common/idpf/version.map
> +++ b/drivers/common/idpf/version.map
> @@ -59,8 +59,10 @@ INTERNAL {
>  	idpf_vc_rss_lut_get;
>  	idpf_vc_rss_lut_set;
>  	idpf_vc_rxq_config;
> +	idpf_vc_rxq_config_by_info;
>  	idpf_vc_stats_query;
>  	idpf_vc_txq_config;
> +	idpf_vc_txq_config_by_info;
>  	idpf_vc_vectors_alloc;
>  	idpf_vc_vectors_dealloc;
>  	idpf_vc_vport_create;
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index d3300f17cc..13edf2e706 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -737,32 +737,160 @@ cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
>  	return idpf_vport_irq_map_config(vport, nb_rx_queues);
>  }
>
> +/* Update hairpin_info for dev's tx hairpin queue */
> +static int
> +cpfl_txq_hairpin_info_update(struct rte_eth_dev *dev, uint16_t rx_port)
> +{
> +	struct cpfl_vport *cpfl_tx_vport = dev->data->dev_private;
> +	struct rte_eth_dev *peer_dev = &rte_eth_devices[rx_port];
> +	struct cpfl_vport *cpfl_rx_vport = peer_dev->data->dev_private;
> +	struct cpfl_txq_hairpin_info *hairpin_info;
> +	struct cpfl_tx_queue *cpfl_txq;
> +	int i;
> +
> +	for (i = cpfl_tx_vport->nb_data_txq; i < dev->data->nb_tx_queues; i++) {
> +		cpfl_txq = dev->data->tx_queues[i];
> +		hairpin_info = &cpfl_txq->hairpin_info;
> +		if (hairpin_info->peer_rxp != rx_port) {
> +			PMD_DRV_LOG(ERR, "port %d is not the peer port", rx_port);
> +			return -EINVAL;
> +		}
> +		hairpin_info->peer_rxq_id =
> +			cpfl_hw_qid_get(cpfl_rx_vport->p2p_q_chunks_info.rx_start_qid,
> +					hairpin_info->peer_rxq_id - cpfl_rx_vport->nb_data_rxq);
> +	}
> +
> +	return 0;
> +}
> +
> +/* Bind Rx hairpin queue's memory zone to peer Tx hairpin queue's memory zone */
> +static void
> +cpfl_rxq_hairpin_mz_bind(struct rte_eth_dev *dev)
> +{
> +	struct cpfl_vport *cpfl_rx_vport = dev->data->dev_private;
> +	struct idpf_vport *vport = &cpfl_rx_vport->base;
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct idpf_hw *hw = &adapter->hw;
> +	struct cpfl_rx_queue *cpfl_rxq;
> +	struct cpfl_tx_queue *cpfl_txq;
> +	struct rte_eth_dev *peer_dev;
> +	const struct rte_memzone *mz;
> +	uint16_t peer_tx_port;
> +	uint16_t peer_tx_qid;
> +	int i;
> +
> +	for (i = cpfl_rx_vport->nb_data_rxq; i < dev->data->nb_rx_queues; i++) {
> +		cpfl_rxq = dev->data->rx_queues[i];
> +		peer_tx_port = cpfl_rxq->hairpin_info.peer_txp;
> +		peer_tx_qid = cpfl_rxq->hairpin_info.peer_txq_id;
> +		peer_dev = &rte_eth_devices[peer_tx_port];
> +		cpfl_txq = peer_dev->data->tx_queues[peer_tx_qid];
> +
> +		/* bind rx queue */
> +		mz = cpfl_txq->base.mz;
> +		cpfl_rxq->base.rx_ring_phys_addr = mz->iova;
> +		cpfl_rxq->base.rx_ring = mz->addr;
> +		cpfl_rxq->base.mz = mz;
> +
> +		/* bind rx buffer queue */
> +		mz = cpfl_txq->base.complq->mz;
> +		cpfl_rxq->base.bufq1->rx_ring_phys_addr = mz->iova;
> +		cpfl_rxq->base.bufq1->rx_ring = mz->addr;
> +		cpfl_rxq->base.bufq1->mz = mz;
> +		cpfl_rxq->base.bufq1->qrx_tail = hw->hw_addr +
> +			cpfl_hw_qtail_get(cpfl_rx_vport->p2p_q_chunks_info.rx_buf_qtail_start,
> +					  0, cpfl_rx_vport->p2p_q_chunks_info.rx_buf_qtail_spacing);
> +	}
> +}
> +
>  static int
>  cpfl_start_queues(struct rte_eth_dev *dev)
>  {
> +	struct cpfl_vport *cpfl_vport = dev->data->dev_private;
> +	struct idpf_vport *vport = &cpfl_vport->base;
>  	struct cpfl_rx_queue *cpfl_rxq;
>  	struct cpfl_tx_queue *cpfl_txq;
> +	int tx_cmplq_flag = 0;
> +	int rx_bufq_flag = 0;
> +	int flag = 0;
>  	int err = 0;
>  	int i;
>
> +	/* For normal data queues, configure, init and enable Txq.
> +	 * For non-cross vport hairpin queues, configure Txq.
> +	 */
>  	for (i = 0; i < dev->data->nb_tx_queues; i++) {
>  		cpfl_txq = dev->data->tx_queues[i];
>  		if (cpfl_txq == NULL || cpfl_txq->base.tx_deferred_start)
>  			continue;
> -		err = cpfl_tx_queue_start(dev, i);
> +		if (!cpfl_txq->hairpin_info.hairpin_q) {
> +			err = cpfl_tx_queue_start(dev, i);
> +			if (err != 0) {
> +				PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
> +				return err;
> +			}
> +		} else if (!cpfl_txq->hairpin_info.manual_bind) {
> +			if (flag == 0) {
> +				err = cpfl_txq_hairpin_info_update(dev,
> +								   cpfl_txq->hairpin_info.peer_rxp);
> +				if (err != 0) {
> +					PMD_DRV_LOG(ERR, "Fail to update Tx hairpin queue info");
> +					return err;
> +				}
> +				flag = 1;

[Liu, Mingxia] The variable flag is not used; can it be removed?

> +			}
> +			err = cpfl_hairpin_txq_config(vport, cpfl_txq);
> +			if (err != 0) {
> +				PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i);
> +				return err;
> +			}
> +			tx_cmplq_flag = 1;
> +		}
>  	}
>
> +	/* For non-cross vport hairpin queues, configure Tx completion queue first. */
> +	if (tx_cmplq_flag == 1 && cpfl_vport->p2p_tx_complq != NULL) {
> +		err = cpfl_hairpin_tx_complq_config(cpfl_vport);
>  		if (err != 0) {
> -			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
> +			PMD_DRV_LOG(ERR, "Fail to config Tx completion queue");
>  			return err;
>  		}
>  	}
>

[Liu, Mingxia] Better to move this code next to:

+			err = cpfl_hairpin_txq_config(vport, cpfl_txq);
+			if (err != 0) {
+				PMD_DRV_LOG(ERR, "Fail to configure hairpin Tx queue %u", i);
+				return err;
+			}

When cpfl_txq->hairpin_info.hairpin_q is true, cpfl_vport->p2p_tx_complq is not NULL, right? And then remove tx_cmplq_flag?

> +	/* For normal data queues, configure, init and enable Rxq.
> +	 * For non-cross vport hairpin queues, configure Rxq, and then init Rxq.
> +	 */
> +	cpfl_rxq_hairpin_mz_bind(dev);
>  	for (i = 0; i < dev->data->nb_rx_queues; i++) {
>  		cpfl_rxq = dev->data->rx_queues[i];
>  		if (cpfl_rxq == NULL || cpfl_rxq->base.rx_deferred_start)
>  			continue;
> -		err = cpfl_rx_queue_start(dev, i);
> +		if (!cpfl_rxq->hairpin_info.hairpin_q) {
> +			err = cpfl_rx_queue_start(dev, i);
> +			if (err != 0) {
> +				PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
> +				return err;
> +			}
> +		} else if (!cpfl_rxq->hairpin_info.manual_bind) {
> +			err = cpfl_hairpin_rxq_config(vport, cpfl_rxq);
> +			if (err != 0) {
> +				PMD_DRV_LOG(ERR, "Fail to configure hairpin Rx queue %u", i);
> +				return err;
> +			}
> +			err = cpfl_rx_queue_init(dev, i);
> +			if (err != 0) {
> +				PMD_DRV_LOG(ERR, "Fail to init hairpin Rx queue %u", i);
> +				return err;
> +			}
> +			rx_bufq_flag = 1;
> +		}
>  	}
>
> +	/* For non-cross vport hairpin queues, configure Rx buffer queue. */
> +	if (rx_bufq_flag == 1 && cpfl_vport->p2p_rx_bufq != NULL) {
> +		err = cpfl_hairpin_rx_bufq_config(cpfl_vport);
>  		if (err != 0) {
> -			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
> +			PMD_DRV_LOG(ERR, "Fail to config Rx buffer queue");
>  			return err;
>  		}
>  	}

[Liu, Mingxia] Similar to above.
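For what it's worth, the simplification being asked about could look roughly like the sketch below. This is a standalone illustration with hypothetical stub types standing in for the real cpfl/idpf structures, not the driver code itself: the tx_cmplq_flag is dropped and the completion queue is configured after the loop whenever p2p_tx_complq exists, on the assumption (stated in the review question) that it is non-NULL exactly when hairpin Tx queues were set up.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the cpfl/idpf structures. */
struct tx_queue {
	int hairpin_q;    /* is this a hairpin queue? */
	int manual_bind;  /* cross-vport queues are bound manually */
	int configured;
};

struct vport {
	struct tx_queue *txqs[8];
	int nb_txq;
	int has_tx_complq;      /* models p2p_tx_complq != NULL */
	int complq_configured;  /* counts completion-queue config calls */
};

/* Stubs for the virtchnl configuration calls. */
static int hairpin_txq_config(struct tx_queue *q)
{
	q->configured = 1;
	return 0;
}

static int hairpin_tx_complq_config(struct vport *v)
{
	v->complq_configured++;
	return 0;
}

/*
 * Suggested control flow: no tx_cmplq_flag. Each hairpin Tx queue is
 * configured in the loop; the shared completion queue is configured once
 * afterwards, guarded only by the p2p_tx_complq existence check.
 */
static int start_tx_queues(struct vport *v)
{
	int i, err;

	for (i = 0; i < v->nb_txq; i++) {
		struct tx_queue *q = v->txqs[i];

		if (q == NULL || !q->hairpin_q || q->manual_bind)
			continue;
		err = hairpin_txq_config(q);
		if (err != 0)
			return err;
	}

	if (v->has_tx_complq) {
		err = hairpin_tx_complq_config(v);
		if (err != 0)
			return err;
	}
	return 0;
}
```

The trade-off is that the completion queue would then be configured even when every hairpin queue is deferred or manually bound, which is why the existence check alone only works if p2p_tx_complq is created together with the hairpin queues.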
> diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
> index 64ed331a6d..040beb5bac 100644
> --- a/drivers/net/cpfl/cpfl_rxtx.c
> +++ b/drivers/net/cpfl/cpfl_rxtx.c
> @@ -930,6 +930,86 @@ cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  	return 0;
>  }
>
> +int
> +cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport)
> +{
> +	struct idpf_rx_queue *rx_bufq = cpfl_vport->p2p_rx_bufq;
> +	struct virtchnl2_rxq_info rxq_info[1] = {0};
> +
> +	rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
> +	rxq_info[0].queue_id = rx_bufq->queue_id;
> +	rxq_info[0].ring_len = rx_bufq->nb_rx_desc;
> +	rxq_info[0].dma_ring_addr = rx_bufq->rx_ring_phys_addr;
> +	rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
> +	rxq_info[0].rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK;
> +	rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
> +	rxq_info[0].data_buffer_size = rx_bufq->rx_buf_len;
> +	rxq_info[0].buffer_notif_stride = CPFL_RX_BUF_STRIDE;
> +
> +	return idpf_vc_rxq_config_by_info(&cpfl_vport->base, rxq_info, 1);
> +}
> +
> +int
> +cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq)
> +{
> +	struct virtchnl2_rxq_info rxq_info[1] = {0};
> +	struct idpf_rx_queue *rxq = &cpfl_rxq->base;
> +
> +	rxq_info[0].type = VIRTCHNL2_QUEUE_TYPE_RX;
> +	rxq_info[0].queue_id = rxq->queue_id;
> +	rxq_info[0].ring_len = rxq->nb_rx_desc;
> +	rxq_info[0].dma_ring_addr = rxq->rx_ring_phys_addr;
> +	rxq_info[0].rx_bufq1_id = rxq->bufq1->queue_id;
> +	rxq_info[0].max_pkt_size = vport->max_pkt_len;
> +	rxq_info[0].desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
> +	rxq_info[0].qflags |= VIRTCHNL2_RX_DESC_SIZE_16BYTE;
> +
> +	rxq_info[0].data_buffer_size = rxq->rx_buf_len;
> +	rxq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
> +	rxq_info[0].rx_buffer_low_watermark = CPFL_RXBUF_LOW_WATERMARK;
> +
> +	PMD_DRV_LOG(NOTICE, "hairpin: vport %u, Rxq id 0x%x",
> +		    vport->vport_id, rxq_info[0].queue_id);
> +
> +	return idpf_vc_rxq_config_by_info(vport, rxq_info, 1);
> +}
> +
> +int
> +cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport)
> +{
> +	struct idpf_tx_queue *tx_complq = cpfl_vport->p2p_tx_complq;
> +	struct virtchnl2_txq_info txq_info[1] = {0};
> +
> +	txq_info[0].dma_ring_addr = tx_complq->tx_ring_phys_addr;
> +	txq_info[0].type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
> +	txq_info[0].queue_id = tx_complq->queue_id;
> +	txq_info[0].ring_len = tx_complq->nb_tx_desc;
> +	txq_info[0].peer_rx_queue_id = cpfl_vport->p2p_rx_bufq->queue_id;
> +	txq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
> +	txq_info[0].sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
> +
> +	return idpf_vc_txq_config_by_info(&cpfl_vport->base, txq_info, 1);
> +}
> +
> +int
> +cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq)
> +{
> +	struct idpf_tx_queue *txq = &cpfl_txq->base;
> +	struct virtchnl2_txq_info txq_info[1] = {0};
> +
> +	txq_info[0].dma_ring_addr = txq->tx_ring_phys_addr;
> +	txq_info[0].type = VIRTCHNL2_QUEUE_TYPE_TX;
> +	txq_info[0].queue_id = txq->queue_id;
> +	txq_info[0].ring_len = txq->nb_tx_desc;
> +	txq_info[0].tx_compl_queue_id = txq->complq->queue_id;
> +	txq_info[0].relative_queue_id = txq->queue_id;
> +	txq_info[0].peer_rx_queue_id = cpfl_txq->hairpin_info.peer_rxq_id;
> +	txq_info[0].model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
> +	txq_info[0].sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
> +
> +	return idpf_vc_txq_config_by_info(vport, txq_info, 1);
> +}
> +
>  int
>  cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
>  {
> diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
> index d844c9f057..b01ce5edf9 100644
> --- a/drivers/net/cpfl/cpfl_rxtx.h
> +++ b/drivers/net/cpfl/cpfl_rxtx.h
> @@ -30,12 +30,15 @@
>  #define CPFL_RING_BASE_ALIGN	128
>
>  #define CPFL_DEFAULT_RX_FREE_THRESH	32
> +#define CPFL_RXBUF_LOW_WATERMARK	64
>
>  #define CPFL_DEFAULT_TX_RS_THRESH	32
>  #define CPFL_DEFAULT_TX_FREE_THRESH	32
>
>  #define CPFL_SUPPORT_CHAIN_NUM	5
>
> +#define CPFL_RX_BUF_STRIDE	64
> +
>  struct cpfl_rxq_hairpin_info {
>  	bool hairpin_q;		/* if rx queue is a hairpin queue */
>  	bool manual_bind;	/* for cross vport */
> @@ -85,4 +88,8 @@ int cpfl_rx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  int cpfl_tx_hairpin_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
>  				uint16_t nb_desc,
>  				const struct rte_eth_hairpin_conf *conf);
> +int cpfl_hairpin_tx_complq_config(struct cpfl_vport *cpfl_vport);
> +int cpfl_hairpin_txq_config(struct idpf_vport *vport, struct cpfl_tx_queue *cpfl_txq);
> +int cpfl_hairpin_rx_bufq_config(struct cpfl_vport *cpfl_vport);
> +int cpfl_hairpin_rxq_config(struct idpf_vport *vport, struct cpfl_rx_queue *cpfl_rxq);
>  #endif /* _CPFL_RXTX_H_ */
> --
> 2.26.2
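One side note on the message-size computation both new _by_info functions rely on: virtchnl2_config_rx_queues/virtchnl2_config_tx_queues end with a one-element qinfo array, so the buffer for num_qs entries is sized as sizeof(*msg) + (num_qs - 1) * sizeof(entry). A minimal standalone illustration of that pattern, using generic stand-in types rather than the driver's structures:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Generic stand-in for a virtchnl-style message that ends with a
 * one-element array, as virtchnl2_config_rx_queues does with qinfo[]. */
struct qinfo {
	unsigned int queue_id;
	unsigned int ring_len;
};

struct config_msg {
	unsigned int vport_id;
	unsigned int num_qinfo;
	struct qinfo qinfo[1];  /* num_qinfo entries actually follow */
};

/* Allocate a message large enough for num_qs entries and copy them in,
 * mirroring the sizeof(*msg) + (num_qs - 1) * sizeof(entry) computation
 * used by idpf_vc_rxq_config_by_info()/idpf_vc_txq_config_by_info(). */
static struct config_msg *config_msg_alloc(unsigned int vport_id,
					   const struct qinfo *src,
					   unsigned int num_qs)
{
	size_t size = sizeof(struct config_msg) +
		      (num_qs - 1) * sizeof(struct qinfo);
	struct config_msg *msg = calloc(1, size);

	if (msg == NULL)
		return NULL;
	msg->vport_id = vport_id;
	msg->num_qinfo = num_qs;
	memcpy(msg->qinfo, src, num_qs * sizeof(struct qinfo));
	return msg;
}
```

The "- 1" accounts for the qinfo[1] element already counted inside sizeof(*msg); with num_qs == 1 (the only case the hairpin helpers use) the allocation degenerates to exactly sizeof(*msg).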