From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Liu, Mingxia"
To: "Qiao, Wenjing", "Zhang, Yuying", "dev@dpdk.org", "Zhang, Qi Z",
 "Wu, Jingjing", "Xing, Beilei"
Subject: FW: [PATCH v3 6/9] net/cpfl: add fxp rule module
Date: Tue, 12 Sep 2023 07:40:31 +0000
References: <20230901113158.1654044-1-yuying.zhang@intel.com>
 <20230906093407.3635038-1-wenjing.qiao@intel.com>
 <20230906093407.3635038-7-wenjing.qiao@intel.com>
In-Reply-To: <20230906093407.3635038-7-wenjing.qiao@intel.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

> -----Original Message-----
> From: Qiao, Wenjing
> Sent: Wednesday, September 6, 2023 5:34 PM
> To: Zhang, Yuying; dev@dpdk.org; Zhang, Qi Z; Wu, Jingjing; Xing, Beilei
> Cc: Liu, Mingxia
> Subject: [PATCH v3 6/9] net/cpfl: add fxp rule module
>
> From: Yuying Zhang
>
> Added low level fxp module for rule packing / creation / destroying.
>
> Signed-off-by: Yuying Zhang
> ---
>  drivers/net/cpfl/cpfl_controlq.c | 424 +++++++++++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_controlq.h |  24 ++
>  drivers/net/cpfl/cpfl_ethdev.c   |  31 +++
>  drivers/net/cpfl/cpfl_ethdev.h   |   6 +
>  drivers/net/cpfl/cpfl_fxp_rule.c | 297 ++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_fxp_rule.h |  68 +++++
>  drivers/net/cpfl/meson.build     |   1 +
>  7 files changed, 851 insertions(+)
>  create mode 100644 drivers/net/cpfl/cpfl_fxp_rule.c
>  create mode 100644 drivers/net/cpfl/cpfl_fxp_rule.h
>
> diff --git a/drivers/net/cpfl/cpfl_controlq.c b/drivers/net/cpfl/cpfl_controlq.c
> index 476c78f235..ed76282b0c 100644
> --- a/drivers/net/cpfl/cpfl_controlq.c
> +++ b/drivers/net/cpfl/cpfl_controlq.c
> @@ -331,6 +331,402 @@ cpfl_ctlq_add(struct idpf_hw *hw, struct cpfl_ctlq_create_info *qinfo,
>  	return status;
>  }
>
> +/**
> + * cpfl_ctlq_send - send command to Control Queue (CTQ)
> + * @hw: pointer to hw struct
> + * @cq: handle to control queue struct to send on
> + * @num_q_msg: number of messages to send on control queue
> + * @q_msg: pointer to array of queue messages to be sent
> + *
> + * The caller is expected to allocate DMAable buffers and pass them to the
> + * send routine via the q_msg struct / control queue specific data struct.
> + * The control queue will hold a reference to each send message until
> + * the completion for that message has been cleaned.
> + */
> +int
> +cpfl_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +	       uint16_t num_q_msg, struct idpf_ctlq_msg q_msg[])
> +{
> +	struct idpf_ctlq_desc *desc;
> +	int num_desc_avail = 0;
> +	int status = 0;
> +	int i = 0;
> +
> +	if (!cq || !cq->ring_size)
> +		return -ENOBUFS;
> +
> +	idpf_acquire_lock(&cq->cq_lock);
> +
> +	/* Ensure there are enough descriptors to send all messages */
> +	num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
> +	if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
> +		status = -ENOSPC;
> +		goto sq_send_command_out;
> +	}
> +
> +	for (i = 0; i < num_q_msg; i++) {
> +		struct idpf_ctlq_msg *msg = &q_msg[i];
> +		uint64_t msg_cookie;
> +
> +		desc = IDPF_CTLQ_DESC(cq, cq->next_to_use);
> +		desc->opcode = CPU_TO_LE16(msg->opcode);
> +		desc->pfid_vfid = CPU_TO_LE16(msg->func_id);
> +		msg_cookie = *(uint64_t *)&msg->cookie;
> +		desc->cookie_high = CPU_TO_LE32(IDPF_HI_DWORD(msg_cookie));
> +		desc->cookie_low = CPU_TO_LE32(IDPF_LO_DWORD(msg_cookie));
> +		desc->flags = CPU_TO_LE16((msg->host_id & IDPF_HOST_ID_MASK) <<
> +					  IDPF_CTLQ_FLAG_HOST_ID_S);
> +		if (msg->data_len) {
> +			struct idpf_dma_mem *buff = msg->ctx.indirect.payload;
> +
> +			desc->datalen |= CPU_TO_LE16(msg->data_len);
> +			desc->flags |= CPU_TO_LE16(IDPF_CTLQ_FLAG_BUF);
> +			desc->flags |= CPU_TO_LE16(IDPF_CTLQ_FLAG_RD);
> +			/* Update the address values in the desc with the pa
> +			 * value for respective buffer
> +			 */
> +			desc->params.indirect.addr_high =
> +				CPU_TO_LE32(IDPF_HI_DWORD(buff->pa));
> +			desc->params.indirect.addr_low =
> +				CPU_TO_LE32(IDPF_LO_DWORD(buff->pa));
> +			idpf_memcpy(&desc->params, msg->ctx.indirect.context,
> +				    IDPF_INDIRECT_CTX_SIZE, IDPF_NONDMA_TO_DMA);
> +		} else {
> +			idpf_memcpy(&desc->params, msg->ctx.direct,
> +				    IDPF_DIRECT_CTX_SIZE, IDPF_NONDMA_TO_DMA);
> +		}
> +
> +		/* Store buffer info */
> +		cq->bi.tx_msg[cq->next_to_use] = msg;
> +		(cq->next_to_use)++;
> +		if (cq->next_to_use == cq->ring_size)
> +			cq->next_to_use = 0;
> +	}
> +
> +	/* Force memory write to complete before letting hardware
> +	 * know that there are new descriptors to fetch.
> +	 */
> +	idpf_wmb();
> +	wr32(hw, cq->reg.tail, cq->next_to_use);
> +
> +sq_send_command_out:
> +	idpf_release_lock(&cq->cq_lock);
> +
> +	return status;
> +}
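
(Aside, not part of the patch: the contract in the comment above -- the caller
allocates the DMA-able payload and the queue holds each message until its
completion is cleaned -- looks like this from a hypothetical caller. Only
fields and functions introduced by this patch are used; the wrapper itself is
illustrative.)

    /* sketch: send one indirect message; caller owns msg and its DMA payload */
    static int example_send_one(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
                                struct idpf_ctlq_msg *msg,
                                struct idpf_dma_mem *payload)
    {
            msg->data_len = payload->size;         /* non-zero selects the indirect path */
            msg->ctx.indirect.payload = payload;   /* DMAable buffer allocated by caller */
            return cpfl_ctlq_send(hw, cq, 1, msg); /* cq keeps msg until cleaned */
    }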
> +
> +/**
> + * __cpfl_ctlq_clean_sq - helper function to reclaim descriptors on HW write
> + * back for the requested queue
> + * @cq: pointer to the specific Control queue
> + * @clean_count: (input|output) number of descriptors to clean as input, and
> + * number of descriptors actually cleaned as output
> + * @msg_status: (output) pointer to msg pointer array to be populated; needs
> + * to be allocated by caller
> + * @force: (input) clean descriptors which were not done yet. Use with caution
> + * in kernel mode only
> + *
> + * Returns an array of message pointers associated with the cleaned
> + * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
> + * descriptors. The status will be returned for each; any messages that failed
> + * to send will have a non-zero status. The caller is expected to free original
> + * ctlq_msgs and free or reuse the DMA buffers.
> + */
> +static int
> +__cpfl_ctlq_clean_sq(struct idpf_ctlq_info *cq, uint16_t *clean_count,
> +		     struct idpf_ctlq_msg *msg_status[], bool force)
> +{
> +	struct idpf_ctlq_desc *desc;
> +	uint16_t i = 0, num_to_clean;
> +	uint16_t ntc, desc_err;
> +	int ret = 0;
> +
> +	if (!cq || !cq->ring_size)
> +		return -ENOBUFS;
> +
> +	if (*clean_count == 0)
> +		return 0;
> +	if (*clean_count > cq->ring_size)
> +		return -EINVAL;
> +
> +	idpf_acquire_lock(&cq->cq_lock);
> +	ntc = cq->next_to_clean;
> +	num_to_clean = *clean_count;
> +
> +	for (i = 0; i < num_to_clean; i++) {
> +		/* Fetch next descriptor and check if marked as done */
> +		desc = IDPF_CTLQ_DESC(cq, ntc);
> +		if (!force && !(LE16_TO_CPU(desc->flags) & IDPF_CTLQ_FLAG_DD))
> +			break;
> +
> +		desc_err = LE16_TO_CPU(desc->ret_val);
> +		if (desc_err) {
> +			/* strip off FW internal code */
> +			desc_err &= 0xff;
> +		}
> +
> +		msg_status[i] = cq->bi.tx_msg[ntc];
> +		if (!msg_status[i])
> +			break;
> +		msg_status[i]->status = desc_err;
> +		cq->bi.tx_msg[ntc] = NULL;
> +		/* Zero out any stale data */
> +		idpf_memset(desc, 0, sizeof(*desc), IDPF_DMA_MEM);
> +		ntc++;
> +		if (ntc == cq->ring_size)
> +			ntc = 0;
> +	}
> +
> +	cq->next_to_clean = ntc;
> +	idpf_release_lock(&cq->cq_lock);
> +
> +	/* Return number of descriptors actually cleaned */
> +	*clean_count = i;
> +
> +	return ret;
> +}
> +
> +/**
> + * cpfl_ctlq_clean_sq - reclaim send descriptors on HW write back for the
> + * requested queue
> + * @cq: pointer to the specific Control queue
> + * @clean_count: (input|output) number of descriptors to clean as input, and
> + * number of descriptors actually cleaned as output
> + * @msg_status: (output) pointer to msg pointer array to be populated; needs
> + * to be allocated by caller
> + *
> + * Returns an array of message pointers associated with the cleaned
> + * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
> + * descriptors. The status will be returned for each; any messages that failed
> + * to send will have a non-zero status. The caller is expected to free original
> + * ctlq_msgs and free or reuse the DMA buffers.
> + */
> +int
> +cpfl_ctlq_clean_sq(struct idpf_ctlq_info *cq, uint16_t *clean_count,
> +		   struct idpf_ctlq_msg *msg_status[])
> +{
> +	return __cpfl_ctlq_clean_sq(cq, clean_count, msg_status, false);
> +}
> +
> +/**
> + * cpfl_ctlq_post_rx_buffs - post buffers to descriptor ring
> + * @hw: pointer to hw struct
> + * @cq: pointer to control queue handle
> + * @buff_count: (input|output) input is number of buffers caller is trying to
> + * return; output is number of buffers that were not posted
> + * @buffs: array of pointers to dma mem structs to be given to hardware
> + *
> + * Caller uses this function to return DMA buffers to the descriptor ring after
> + * consuming them; buff_count will be the number of buffers.
> + *
> + * Note: this function needs to be called after a receive call even
> + * if there are no DMA buffers to be returned, i.e. buff_count = 0,
> + * buffs = NULL to support direct commands
> + */
> +int
> +cpfl_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +			uint16_t *buff_count, struct idpf_dma_mem **buffs)
> +{
> +	struct idpf_ctlq_desc *desc;
> +	uint16_t ntp = cq->next_to_post;
> +	bool buffs_avail = false;
> +	uint16_t tbp = ntp + 1;
> +	int status = 0;
> +	int i = 0;
> +
> +	if (*buff_count > cq->ring_size)
> +		return -EINVAL;
> +
> +	if (*buff_count > 0)
> +		buffs_avail = true;
> +	idpf_acquire_lock(&cq->cq_lock);
> +	if (tbp >= cq->ring_size)
> +		tbp = 0;
> +
> +	if (tbp == cq->next_to_clean)
> +		/* Nothing to do */
> +		goto post_buffs_out;
> +
> +	/* Post buffers for as many as provided or up until the last one used */
> +	while (ntp != cq->next_to_clean) {
> +		desc = IDPF_CTLQ_DESC(cq, ntp);
> +		if (cq->bi.rx_buff[ntp])
> +			goto fill_desc;
> +		if (!buffs_avail) {
> +			/* If the caller hasn't given us any buffers or
> +			 * there are none left, search the ring itself
> +			 * for an available buffer to move to this
> +			 * entry starting at the next entry in the ring
> +			 */
> +			tbp = ntp + 1;
> +			/* Wrap ring if necessary */
> +			if (tbp >= cq->ring_size)
> +				tbp = 0;
> +
> +			while (tbp != cq->next_to_clean) {
> +				if (cq->bi.rx_buff[tbp]) {
> +					cq->bi.rx_buff[ntp] =
> +						cq->bi.rx_buff[tbp];
> +					cq->bi.rx_buff[tbp] = NULL;
> +
> +					/* Found a buffer, no need to
> +					 * search anymore
> +					 */
> +					break;
> +				}
> +
> +				/* Wrap ring if necessary */
> +				tbp++;
> +				if (tbp >= cq->ring_size)
> +					tbp = 0;
> +			}
> +
> +			if (tbp == cq->next_to_clean)
> +				goto post_buffs_out;
> +		} else {
> +			/* Give back pointer to DMA buffer */
> +			cq->bi.rx_buff[ntp] = buffs[i];
> +			i++;
> +
> +			if (i >= *buff_count)
> +				buffs_avail = false;
> +		}
> +
> +fill_desc:
> +		desc->flags =
> +			CPU_TO_LE16(IDPF_CTLQ_FLAG_BUF | IDPF_CTLQ_FLAG_RD);
> +
> +		/* Post buffers to descriptor */
> +		desc->datalen = CPU_TO_LE16(cq->bi.rx_buff[ntp]->size);
> +		desc->params.indirect.addr_high =
> +			CPU_TO_LE32(IDPF_HI_DWORD(cq->bi.rx_buff[ntp]->pa));
> +		desc->params.indirect.addr_low =
> +			CPU_TO_LE32(IDPF_LO_DWORD(cq->bi.rx_buff[ntp]->pa));
> +
> +		ntp++;
> +		if (ntp == cq->ring_size)
> +			ntp = 0;
> +	}
> +
> +post_buffs_out:
> +	/* Only update tail if buffers were actually posted */
> +	if (cq->next_to_post != ntp) {
> +		if (ntp)
> +			/* Update next_to_post to ntp - 1 since current ntp
> +			 * will not have a buffer
> +			 */
> +			cq->next_to_post = ntp - 1;
> +		else
> +			/* Wrap to end of ring since current ntp is 0 */
> +			cq->next_to_post = cq->ring_size - 1;
> +
> +		wr32(hw, cq->reg.tail, cq->next_to_post);
> +	}
> +
> +	idpf_release_lock(&cq->cq_lock);
> +	/* return the number of buffers that were not posted */
> +	*buff_count = *buff_count - i;
> +
> +	return status;
> +}
> +
> +/**
> + * cpfl_ctlq_recv - receive control queue message call back
> + * @cq: pointer to control queue handle to receive on
> + * @num_q_msg: (input|output) input number of messages that should be received;
> + * output number of messages actually received
> + * @q_msg: (output) array of received control queue messages on this q;
> + * needs to be pre-allocated by caller for as many messages as requested
> + *
> + * Called by interrupt handler or polling mechanism. Caller is expected
> + * to free buffers
> + */
> +int
> +cpfl_ctlq_recv(struct idpf_ctlq_info *cq, uint16_t *num_q_msg,
> +	       struct idpf_ctlq_msg *q_msg)
> +{
> +	uint16_t num_to_clean, ntc, ret_val, flags;
> +	struct idpf_ctlq_desc *desc;
> +	int ret_code = 0;
> +	uint16_t i = 0;
> +
> +	if (!cq || !cq->ring_size)
> +		return -ENOBUFS;
> +
> +	if (*num_q_msg == 0)
> +		return 0;
> +	else if (*num_q_msg > cq->ring_size)
> +		return -EINVAL;
> +
> +	/* take the lock before we start messing with the ring */
> +	idpf_acquire_lock(&cq->cq_lock);
> +	ntc = cq->next_to_clean;
> +	num_to_clean = *num_q_msg;
> +
> +	for (i = 0; i < num_to_clean; i++) {
> +		/* Fetch next descriptor and check if marked as done */
> +		desc = IDPF_CTLQ_DESC(cq, ntc);
> +		flags = LE16_TO_CPU(desc->flags);
> +		if (!(flags & IDPF_CTLQ_FLAG_DD))
> +			break;
> +
> +		ret_val = LE16_TO_CPU(desc->ret_val);
> +		q_msg[i].vmvf_type = (flags &
> +				      (IDPF_CTLQ_FLAG_FTYPE_VM |
> +				       IDPF_CTLQ_FLAG_FTYPE_PF)) >>
> +				      IDPF_CTLQ_FLAG_FTYPE_S;
> +
> +		if (flags & IDPF_CTLQ_FLAG_ERR)
> +			ret_code = -EBADMSG;
> +
> +		q_msg[i].cookie.mbx.chnl_opcode = LE32_TO_CPU(desc->cookie_high);
> +		q_msg[i].cookie.mbx.chnl_retval = LE32_TO_CPU(desc->cookie_low);
> +		q_msg[i].opcode = LE16_TO_CPU(desc->opcode);
> +		q_msg[i].data_len = LE16_TO_CPU(desc->datalen);
> +		q_msg[i].status = ret_val;
> +
> +		if (desc->datalen) {
> +			idpf_memcpy(q_msg[i].ctx.indirect.context,
> +				    &desc->params.indirect,
> +				    IDPF_INDIRECT_CTX_SIZE,
> +				    IDPF_DMA_TO_NONDMA);
> +
> +			/* Assign pointer to dma buffer to ctlq_msg array
> +			 * to be given to upper layer
> +			 */
> +			q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc];
> +
> +			/* Zero out pointer to DMA buffer info;
> +			 * will be repopulated by post buffers API
> +			 */
> +			cq->bi.rx_buff[ntc] = NULL;
> +		} else {
> +			idpf_memcpy(q_msg[i].ctx.direct,
> +				    desc->params.raw,
> +				    IDPF_DIRECT_CTX_SIZE,
> +				    IDPF_DMA_TO_NONDMA);
> +		}
> +
> +		/* Zero out stale data in descriptor */
> +		idpf_memset(desc, 0, sizeof(struct idpf_ctlq_desc),
> +			    IDPF_DMA_MEM);
> +
> +		ntc++;
> +		if (ntc == cq->ring_size)
> +			ntc = 0;
> +	};
> +
> +	cq->next_to_clean = ntc;
> +	idpf_release_lock(&cq->cq_lock);
> +	*num_q_msg = i;
> +	if (*num_q_msg == 0)
> +		ret_code = -ENOMSG;
> +
> +	return ret_code;
> +}
> +
>  int
>  cpfl_vport_ctlq_add(struct idpf_hw *hw, struct cpfl_ctlq_create_info *qinfo,
>  		    struct idpf_ctlq_info **cq)
> @@ -377,3 +773,31 @@ cpfl_vport_ctlq_remove(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
>  {
>  	cpfl_ctlq_remove(hw, cq);
>  }
> +
> +int
> +cpfl_vport_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +		     uint16_t num_q_msg, struct idpf_ctlq_msg q_msg[])
> +{
> +	return cpfl_ctlq_send(hw, cq, num_q_msg, q_msg);
> +}
> +
> +int
> +cpfl_vport_ctlq_recv(struct idpf_ctlq_info *cq, uint16_t *num_q_msg,
> +		     struct idpf_ctlq_msg q_msg[])
> +{
> +	return cpfl_ctlq_recv(cq, num_q_msg, q_msg);
> +}
> +
> +int
> +cpfl_vport_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +			      uint16_t *buff_count, struct idpf_dma_mem **buffs)
> +{
> +	return cpfl_ctlq_post_rx_buffs(hw, cq, buff_count, buffs);
> +}
> +
> +int
> +cpfl_vport_ctlq_clean_sq(struct idpf_ctlq_info *cq, uint16_t *clean_count,
> +			 struct idpf_ctlq_msg *msg_status[])
> +{
> +	return cpfl_ctlq_clean_sq(cq, clean_count, msg_status);
> +}
> diff --git a/drivers/net/cpfl/cpfl_controlq.h b/drivers/net/cpfl/cpfl_controlq.h
> index 930d717f63..740ae6522c 100644
> --- a/drivers/net/cpfl/cpfl_controlq.h
> +++ b/drivers/net/cpfl/cpfl_controlq.h
> @@ -14,6 +14,13 @@
>  #define CPFL_DFLT_MBX_RING_LEN 512
>  #define CPFL_CFGQ_RING_LEN 512
>
> +/* CRQ/CSQ specific error codes */
> +#define CPFL_ERR_CTLQ_ERROR   -74  /* -EBADMSG */
> +#define CPFL_ERR_CTLQ_TIMEOUT -110 /* -ETIMEDOUT */
> +#define CPFL_ERR_CTLQ_FULL    -28  /* -ENOSPC */
> +#define CPFL_ERR_CTLQ_NO_WORK -42  /* -ENOMSG */
> +#define CPFL_ERR_CTLQ_EMPTY   -105 /* -ENOBUFS */
> +

[Liu, Mingxia] How about replacing the constant numbers with macros built from the errno names, such as:
+#define CPFL_ERR_CTLQ_ERROR (-EBADMSG)
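
(A fuller sketch of that suggestion, for illustration -- assuming the standard
Linux errno values, which match the numeric comments in the patch: EBADMSG=74,
ETIMEDOUT=110, ENOSPC=28, ENOMSG=42, ENOBUFS=105.)

    #include <errno.h>

    /* CRQ/CSQ specific error codes, expressed via errno names */
    #define CPFL_ERR_CTLQ_ERROR   (-EBADMSG)
    #define CPFL_ERR_CTLQ_TIMEOUT (-ETIMEDOUT)
    #define CPFL_ERR_CTLQ_FULL    (-ENOSPC)
    #define CPFL_ERR_CTLQ_NO_WORK (-ENOMSG)
    #define CPFL_ERR_CTLQ_EMPTY   (-ENOBUFS)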
>  /* Generic queue info structures */
>  /* MB, CONFIG and EVENT q do not have extended info */
>  struct cpfl_ctlq_create_info {
> @@ -44,8 +51,25 @@ int cpfl_ctlq_alloc_ring_res(struct idpf_hw *hw,
>  int cpfl_ctlq_add(struct idpf_hw *hw,
>  		  struct cpfl_ctlq_create_info *qinfo,
>  		  struct idpf_ctlq_info **cq);
> +int cpfl_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +		   u16 num_q_msg, struct idpf_ctlq_msg q_msg[]);
> +int cpfl_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
> +		       struct idpf_ctlq_msg *msg_status[]);
> +int cpfl_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +			    u16 *buff_count, struct idpf_dma_mem **buffs);
> +int cpfl_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
> +		   struct idpf_ctlq_msg *q_msg);
>  int cpfl_vport_ctlq_add(struct idpf_hw *hw,
>  			struct cpfl_ctlq_create_info *qinfo,
>  			struct idpf_ctlq_info **cq);
>  void cpfl_vport_ctlq_remove(struct idpf_hw *hw, struct idpf_ctlq_info *cq);
> +int cpfl_vport_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +			 u16 num_q_msg, struct idpf_ctlq_msg q_msg[]);
> +int cpfl_vport_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
> +			 struct idpf_ctlq_msg q_msg[]);
> +
> +int cpfl_vport_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
> +				  u16 *buff_count, struct idpf_dma_mem **buffs);
> +int cpfl_vport_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
> +			     struct idpf_ctlq_msg *msg_status[]);
>  #endif
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index 618a6a0fe2..08a55f0352 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -16,6 +16,7 @@
>  #include
>  #include "cpfl_rxtx.h"
>  #include "cpfl_flow.h"
> +#include "cpfl_rules.h"
>
>  #define CPFL_REPRESENTOR "representor"
>  #define CPFL_TX_SINGLE_Q "tx_single"
> @@ -1127,6 +1128,7 @@ cpfl_dev_close(struct rte_eth_dev *dev)
>  	adapter->cur_vport_nb--;
>  	dev->data->dev_private = NULL;
>  	adapter->vports[vport->sw_idx] = NULL;
> +	idpf_free_dma_mem(NULL, &cpfl_vport->itf.flow_dma);
>  	rte_free(cpfl_vport);
>
>  	return 0;
> @@ -2462,6 +2464,26 @@ cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
>  	return 0;
>  }
>
> +int
> +cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma, uint32_t size,
> +			 int batch_size)
> +{
> +	int i;
> +
> +	if (!idpf_alloc_dma_mem(NULL, orig_dma, size * (1 + batch_size))) {
> +		PMD_INIT_LOG(ERR, "Could not alloc dma memory");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < batch_size; i++) {
> +		dma[i].va = (void *)((uint64_t)orig_dma->va + size * (i + 1));
> +		dma[i].pa = orig_dma->pa + size * (i + 1);
> +		dma[i].size = size;
> +		dma[i].zone = NULL;
> +	}
> +	return 0;
> +}
> +
>  static int
>  cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
>  {
> @@ -2511,6 +2533,15 @@ cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
>  	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
>  			    &dev->data->mac_addrs[0]);
>
> +	memset(cpfl_vport->itf.dma, 0, sizeof(cpfl_vport->itf.dma));
> +	memset(cpfl_vport->itf.msg, 0, sizeof(cpfl_vport->itf.msg));
> +	ret = cpfl_alloc_dma_mem_batch(&cpfl_vport->itf.flow_dma,
> +				       cpfl_vport->itf.dma,
> +				       sizeof(union cpfl_rule_cfg_pkt_record),
> +				       CPFL_FLOW_BATCH_SIZE);
> +	if (ret < 0)
> +		goto err_mac_addrs;
> +
>  	if (!adapter->base.is_rx_singleq && !adapter->base.is_tx_singleq) {
>  		memset(&p2p_queue_grps_info, 0, sizeof(p2p_queue_grps_info));
>  		ret = cpfl_p2p_q_grps_add(vport, &p2p_queue_grps_info, p2p_q_vc_out_info);
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> index be625284a4..6b02573b4a 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.h
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -149,10 +149,14 @@ enum cpfl_itf_type {
>
>  TAILQ_HEAD(cpfl_flow_list, rte_flow);
>
> +#define CPFL_FLOW_BATCH_SIZE 490
>  struct cpfl_itf {
>  	enum cpfl_itf_type type;
>  	struct cpfl_adapter_ext *adapter;
>  	struct cpfl_flow_list flow_list;
> +	struct idpf_dma_mem flow_dma;
> +	struct idpf_dma_mem dma[CPFL_FLOW_BATCH_SIZE];
> +	struct idpf_ctlq_msg msg[CPFL_FLOW_BATCH_SIZE];
>  	void *data;
>  };
>
> @@ -238,6 +242,8 @@ int cpfl_cc_vport_info_get(struct cpfl_adapter_ext *adapter,
>  			   struct cpchnl2_vport_id *vport_id,
>  			   struct cpfl_vport_id *vi,
>  			   struct cpchnl2_get_vport_info_response *response);
> +int cpfl_alloc_dma_mem_batch(struct idpf_dma_mem *orig_dma, struct idpf_dma_mem *dma,
> +			     uint32_t size, int batch_size);
>
>  #define CPFL_DEV_TO_PCI(eth_dev) \
>  	RTE_DEV_TO_PCI((eth_dev)->device)
> diff --git a/drivers/net/cpfl/cpfl_fxp_rule.c b/drivers/net/cpfl/cpfl_fxp_rule.c
> new file mode 100644
> index 0000000000..f87ccc9f77
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_fxp_rule.c
> @@ -0,0 +1,297 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Intel Corporation
> + */
> +#include "cpfl_ethdev.h"
> +
> +#include "cpfl_fxp_rule.h"
> +#include "cpfl_logs.h"
> +
> +#define CTLQ_SEND_RETRIES 100
> +#define CTLQ_RECEIVE_RETRIES 100
> +
> +int
> +cpfl_send_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_msg,
> +		   struct idpf_ctlq_msg q_msg[])
> +{
> +	struct idpf_ctlq_msg **msg_ptr_list;
> +	u16 clean_count = 0;
> +	int num_cleaned = 0;
> +	int retries = 0;
> +	int ret = 0;
> +
> +	msg_ptr_list = calloc(num_q_msg, sizeof(struct idpf_ctlq_msg *));
> +	if (!msg_ptr_list) {
> +		PMD_INIT_LOG(ERR, "no memory for cleaning ctlq");
> +		ret = -ENOMEM;
> +		goto err;
> +	}
> +
> +	ret = cpfl_vport_ctlq_send(hw, cq, num_q_msg, q_msg);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "cpfl_vport_ctlq_send() failed with error: 0x%4x", ret);
> +		goto send_err;
> +	}
> +
> +	while (retries <= CTLQ_SEND_RETRIES) {
> +		clean_count = num_q_msg - num_cleaned;
> +		ret = cpfl_vport_ctlq_clean_sq(cq, &clean_count,
> +					       &msg_ptr_list[num_cleaned]);
> +		if (ret) {
> +			PMD_INIT_LOG(ERR, "clean ctlq failed: 0x%4x", ret);
> +			goto send_err;
> +		}
> +
> +		num_cleaned += clean_count;
> +		retries++;
> +		if (num_cleaned >= num_q_msg)
> +			break;
> +		rte_delay_us_sleep(10);
> +	}
> +
> +	if (retries > CTLQ_SEND_RETRIES) {
> +		PMD_INIT_LOG(ERR, "timed out while polling for completions");
> +		ret = -1;
> +		goto send_err;
> +	}
> +
> +send_err:
> +	if (msg_ptr_list)
> +		free(msg_ptr_list);
> +err:
> +	return ret;
> +}
> +
> +static int
> +cpfl_process_rx_ctlq_msg(u16 num_q_msg, struct idpf_ctlq_msg *q_msg)
> +{
> +	u16 i;
> +	int ret = 0;
> +
> +	if (!num_q_msg || !q_msg)
> +		return -EINVAL;
> +
> +	for (i = 0; i < num_q_msg; i++) {
> +		if (q_msg[i].status == CPFL_CFG_PKT_ERR_OK) {
> +			continue;
> +		} else if (q_msg[i].status == CPFL_CFG_PKT_ERR_EEXIST &&
> +			   q_msg[i].opcode == cpfl_ctlq_sem_add_rule) {
> +			PMD_INIT_LOG(ERR, "The rule conflicts with an already existing one");
> +			return -EINVAL;
> +		} else if (q_msg[i].status == CPFL_CFG_PKT_ERR_ENOTFND &&
> +			   q_msg[i].opcode == cpfl_ctlq_sem_del_rule) {
> +			PMD_INIT_LOG(ERR, "The rule has already been deleted");
> +			return -EINVAL;
> +		} else {
> +			PMD_INIT_LOG(ERR, "Invalid rule");
> +			return -EINVAL;
> +		}
> +	}
> +
> +	return ret;

[Liu, Mingxia] The ret value is never changed; can it be deleted and 0 returned directly?

> +}
> +
> +int
> +cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_msg,
> +		      struct idpf_ctlq_msg q_msg[])
> +{
> +	int retries = 0;
> +	struct idpf_dma_mem *dma;
> +	u16 i;
> +	uint16_t buff_cnt;
> +	int ret = 0, handle_rule = 0;
> +
> +	retries = 0;
> +	while (retries <= CTLQ_RECEIVE_RETRIES) {
> +		rte_delay_us_sleep(10);
> +		ret = cpfl_vport_ctlq_recv(cq, &num_q_msg, &q_msg[0]);
> +
> +		if (ret && ret != CPFL_ERR_CTLQ_NO_WORK &&
> +		    ret != CPFL_ERR_CTLQ_ERROR) {
> +			PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x\n", ret);
> +			retries++;
> +			continue;
> +		}
> +
> +		if (ret == CPFL_ERR_CTLQ_NO_WORK) {
> +			retries++;
> +			continue;
> +		}
> +
> +		if (ret == CPFL_ERR_CTLQ_EMPTY)
> +			break;
> +
> +		ret = cpfl_process_rx_ctlq_msg(num_q_msg, q_msg);
> +		if (ret) {
> +			PMD_INIT_LOG(WARNING, "failed to process rx_ctrlq msg");

[Liu, Mingxia] The log level here is WARNING, but the return value is passed up to the calling function; how about using the ERROR log level?

> +			handle_rule = ret;
> +		}
> +
> +		for (i = 0; i < num_q_msg; i++) {
> +			if (q_msg[i].data_len > 0)
> +				dma = q_msg[i].ctx.indirect.payload;
> +			else
> +				dma = NULL;
> +
> +			buff_cnt = dma ? 1 : 0;
> +			ret = cpfl_vport_ctlq_post_rx_buffs(hw, cq, &buff_cnt, &dma);
> +			if (ret)
> +				PMD_INIT_LOG(WARNING, "could not post recv bufs\n");

[Liu, Mingxia] Same here: the log level is WARNING, but the return value is passed up to the calling function; how about using the ERROR log level?

> +		}
> +		break;
> +	}
> +
> +	if (retries > CTLQ_RECEIVE_RETRIES) {
> +		PMD_INIT_LOG(ERR, "timed out while polling for receive response");
> +		ret = -1;
> +	}
> +
> +	return ret + handle_rule;

[Liu, Mingxia] This looks a bit confusing: the calling function cpfl_rule_process() only checks whether the return value is < 0, so how about returning -1 if (ret < 0 || handle_rule < 0)?

> +}
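
(A sketch of that suggestion -- a hypothetical rework of the tail of
cpfl_receive_ctlq_msg(), not part of the patch:)

    	if (retries > CTLQ_RECEIVE_RETRIES) {
    		PMD_INIT_LOG(ERR, "timed out while polling for receive response");
    		ret = -1;
    	}

    	/* collapse the two status values into the single sign the caller checks */
    	if (ret < 0 || handle_rule < 0)
    		return -1;

    	return 0;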
> +
> +static int
> +cpfl_mod_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
> +		   struct idpf_ctlq_msg *msg)
> +{
> +	struct cpfl_mod_rule_info *minfo = &rinfo->mod;
> +	union cpfl_rule_cfg_pkt_record *blob = NULL;
> +	struct cpfl_rule_cfg_data cfg = {0};
> +
> +	/* prepare rule blob */
> +	if (!dma->va) {
> +		PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
> +		return -1;
> +	}
> +	blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
> +	memset(blob, 0, sizeof(*blob));
> +	memset(&cfg, 0, sizeof(cfg));
> +
> +	/* fill info for both query and add/update */
> +	cpfl_fill_rule_mod_content(minfo->mod_obj_size,
> +				   minfo->pin_mod_content,
> +				   minfo->mod_index,
> +				   &cfg.ext.mod_content);
> +
> +	/* only fill content for add/update */
> +	memcpy(blob->mod_blob, minfo->mod_content,
> +	       minfo->mod_content_byte_len);
> +
> +#define NO_HOST_NEEDED 0
> +	/* pack message */
> +	cpfl_fill_rule_cfg_data_common(cpfl_ctlq_mod_add_update_rule,
> +				       rinfo->cookie,
> +				       0, /* vsi_id not used for mod */
> +				       rinfo->port_num,
> +				       NO_HOST_NEEDED,
> +				       0, /* time_sel */
> +				       0, /* time_sel_val */
> +				       0, /* cache_wr_thru */
> +				       rinfo->resp_req,
> +				       (u16)sizeof(*blob),
> +				       (void *)dma,
> +				       &cfg.common);
> +	cpfl_prep_rule_desc(&cfg, msg);
> +	return 0;
> +}
> +
> +static int
> +cpfl_default_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
> +		       struct idpf_ctlq_msg *msg, bool add)
> +{
> +	union cpfl_rule_cfg_pkt_record *blob = NULL;
> +	enum cpfl_ctlq_rule_cfg_opc opc;
> +	struct cpfl_rule_cfg_data cfg;
> +	uint16_t cfg_ctrl;
> +
> +	if (!dma->va) {
> +		PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
> +		return -1;
> +	}
> +	blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
> +	memset(blob, 0, sizeof(*blob));
> +	memset(msg, 0, sizeof(*msg));
> +
> +	if (rinfo->type == CPFL_RULE_TYPE_SEM) {
> +		cfg_ctrl = CPFL_GET_MEV_SEM_RULE_CFG_CTRL(rinfo->sem.prof_id,
> +							  rinfo->sem.sub_prof_id,
> +							  rinfo->sem.pin_to_cache,
> +							  rinfo->sem.fixed_fetch);
> +		cpfl_prep_sem_rule_blob(rinfo->sem.key, rinfo->sem.key_byte_len,
> +					rinfo->act_bytes, rinfo->act_byte_len,
> +					cfg_ctrl, blob);
> +		opc = add ? cpfl_ctlq_sem_add_rule : cpfl_ctlq_sem_del_rule;
> +	} else {
> +		PMD_INIT_LOG(ERR, "unsupported rule type %d.", rinfo->type);
> +		return -1;
> +	}
> +
> +	cpfl_fill_rule_cfg_data_common(opc,
> +				       rinfo->cookie,
> +				       rinfo->vsi,
> +				       rinfo->port_num,
> +				       rinfo->host_id,
> +				       0, /* time_sel */
> +				       0, /* time_sel_val */
> +				       0, /* cache_wr_thru */
> +				       rinfo->resp_req,
> +				       sizeof(union cpfl_rule_cfg_pkt_record),
> +				       dma,
> +				       &cfg.common);
> +	cpfl_prep_rule_desc(&cfg, msg);
> +	return 0;
> +}
> +
> +static int
> +cpfl_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
> +	       struct idpf_ctlq_msg *msg, bool add)
> +{
> +	int ret = 0;
> +
> +	if (rinfo->type == CPFL_RULE_TYPE_SEM) {
> +		if (cpfl_default_rule_pack(rinfo, dma, msg, add) < 0)
> +			ret = -1;
> +	} else if (rinfo->type == CPFL_RULE_TYPE_MOD) {
> +		if (cpfl_mod_rule_pack(rinfo, dma, msg) < 0)
> +			ret = -1;
> +	} else {
> +		PMD_INIT_LOG(ERR, "Invalid type of rule");
> +		ret = -1;
> +	}
> +
> +	return ret;
> +}
> +
> +int
> +cpfl_rule_process(struct cpfl_itf *itf,
> +		  struct idpf_ctlq_info *tx_cq,
> +		  struct idpf_ctlq_info *rx_cq,
> +		  struct cpfl_rule_info *rinfo,
> +		  int rule_num,
> +		  bool add)
> +{
> +	struct idpf_hw *hw = &itf->adapter->base.hw;
> +	int i;
> +	int ret = 0;
> +
> +	if (rule_num == 0)
> +		return 0;
> +
> +	for (i = 0; i < rule_num; i++) {
> +		ret = cpfl_rule_pack(&rinfo[i], &itf->dma[i], &itf->msg[i], add);
> +		if (ret) {
> +			PMD_INIT_LOG(ERR, "Could not pack rule");
> +			return ret;
> +		}
> +	}
> +	ret = cpfl_send_ctlq_msg(hw, tx_cq, rule_num, itf->msg);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to send control message");
> +		return ret;
> +	}
> +	ret = cpfl_receive_ctlq_msg(hw, rx_cq, rule_num, itf->msg);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to update rule");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
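
(For reviewers tracing the module end to end, a minimal hypothetical caller of
cpfl_rule_process() -- the wrapper name is illustrative; the rinfo contents,
the itf->dma[]/itf->msg[] arrays set up by cpfl_alloc_dma_mem_batch(), and the
tx/rx control queue handles are assumed to be prepared elsewhere in the series:)

    /* sketch only: pack -> send -> receive one pre-filled SEM rule */
    static int example_add_one_rule(struct cpfl_itf *itf,
    				struct idpf_ctlq_info *tx_cq,
    				struct idpf_ctlq_info *rx_cq,
    				struct cpfl_rule_info *rinfo)
    {
    	/* rule_num = 1, add = true; uses the per-itf DMA/msg batch arrays */
    	return cpfl_rule_process(itf, tx_cq, rx_cq, rinfo, 1, true);
    }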