From: "Chautru, Nicolas"
To: John Miller
Cc: dev@dpdk.org, ed.czeck@atomicrules.com, Shepard Siegel, Maxime Coquelin
Subject: RE: [PATCH 10/14] baseband/ark: introduce ark baseband driver
Date: Mon, 31 Oct 2022 21:15:19 +0000
References: <20221026194613.1008232-1-john.miller@atomicrules.com> <20221026194613.1008232-10-john.miller@atomicrules.com> <2E3DA107-A8FE-4216-8AEC-B9B6F6D91825@atomicrules.com>
In-Reply-To: <2E3DA107-A8FE-4216-8AEC-B9B6F6D91825@atomicrules.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Hi John,

> From: John Miller
> Sent: Monday, October 31, 2022 10:34 AM
> To: Chautru, Nicolas
> Cc: dev@dpdk.org; ed.czeck@atomicrules.com; Shepard Siegel; Maxime Coquelin
> Subject: Re: [PATCH 10/14] baseband/ark: introduce ark baseband driver
>
> Hi Nicolas,
>
> On Oct 26, 2022, at 7:11 PM, Chautru, Nicolas wrote:
>
> Hi John,
>
> General comment: I was a bit lost in the split of commits 10 to 14.
> First, I would have expected the driver to build from the first commit. I don't believe it makes sense to add 13 and 14 after the fact.
>
> I first introduced this patch set in 22.07, but we had to defer due to other company priorities. I had 10 through 14 in the same commit, but you asked me to split it into smaller commits. Perhaps I misunderstood what you were asking. I will put 10 through 14 back into the same commit.
>

This is about splitting these logically, not artificially; see examples from other PMD contributions with incremental commits that still do not split out the doc/build.

>
> There was also a bit of confusion for me between 10 and 11.
> For example, ark_bbdev_info_get is first referred to in 10, but the implementation is in 11.
> The ldpc decoding functions are also split between 10 and 11.
> As a consequence, commit 10 is arguably hard to review in one chunk.
>
> This will be addressed when I put them all in the same commit.
>

That is not the intent; see above.

>
> I would suggest considering a split of commits that is more logical and incremental, for instance what was done recently for the acc driver, just as an imperfect example.
> This way it would also provide more digestible chunks of code to review incrementally.
>
> It would be nice to have some doc with the first commit matching the code, notably as I had the impression the implementation doesn't fully match your cover letter (I will add some more comments on this for 11), but it is unclear to me whether this is intentional or not.
>
> We will address the doc to make sure it is accurate.
>
> We will address your other comments in a separate response.
>
> Thank you
> -John
>
> Thanks
> Nic
>
> -----Original Message-----
> From: John Miller
> Sent: Wednesday, October 26, 2022 12:46 PM
> To: Chautru, Nicolas
> Cc: dev@dpdk.org; ed.czeck@atomicrules.com; Shepard Siegel; John Miller
> Subject: [PATCH 10/14] baseband/ark: introduce ark baseband driver
>
> This patch introduces the Arkville baseband device driver.
>
> Signed-off-by: John Miller
> ---
>  drivers/baseband/ark/ark_bbdev.c | 1127 ++++++++++++++++++++++++++++++
>  drivers/baseband/ark/ark_bbext.h |  163 +++++
>  2 files changed, 1290 insertions(+)
>  create mode 100644 drivers/baseband/ark/ark_bbdev.c
>  create mode 100644 drivers/baseband/ark/ark_bbext.h
>
> diff --git a/drivers/baseband/ark/ark_bbdev.c b/drivers/baseband/ark/ark_bbdev.c
> new file mode 100644
> index 0000000000..8736d170d1
> --- /dev/null
> +++ b/drivers/baseband/ark/ark_bbdev.c
> @@ -0,0 +1,1127 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2016-2021 Atomic Rules LLC
> + */
> +
> +#include
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include "ark_common.h"
> +#include "ark_bbdev_common.h"
> +#include "ark_bbdev_custom.h"
> +#include "ark_ddm.h"
> +#include "ark_mpu.h"
> +#include "ark_rqp.h"
> +#include "ark_udm.h"
> +#include "ark_bbext.h"
> +
> +#define DRIVER_NAME baseband_ark
> +
> +#define ARK_SYSCTRL_BASE  0x0
> +#define ARK_PKTGEN_BASE   0x10000
> +#define ARK_MPU_RX_BASE   0x20000
> +#define ARK_UDM_BASE      0x30000
> +#define ARK_MPU_TX_BASE   0x40000
> +#define ARK_DDM_BASE      0x60000
> +#define ARK_PKTDIR_BASE   0xa0000
> +#define ARK_PKTCHKR_BASE  0x90000
> +#define ARK_RCPACING_BASE 0xb0000
> +#define ARK_MPU_QOFFSET   0x00100
> +
> +#define BB_ARK_TX_Q_FACTOR 4
> +
> +#define ARK_RX_META_SIZE 32
> +#define ARK_RX_META_OFFSET (RTE_PKTMBUF_HEADROOM - ARK_RX_META_SIZE)
> +#define ARK_RX_MAX_NOCHAIN (RTE_MBUF_DEFAULT_DATAROOM)
> +
> +static_assert(sizeof(struct ark_rx_meta) == ARK_RX_META_SIZE,
> +	"Unexpected struct size ark_rx_meta");
> +static_assert(sizeof(union ark_tx_meta) == 8,
> +	"Unexpected struct size ark_tx_meta");
> +
> +static struct rte_pci_id pci_id_ark[] = {
> +	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1015)},
> +	{RTE_PCI_DEVICE(AR_VENDOR_ID, 0x1016)},
> +	{.device_id = 0},
> +};
> +
> +static const struct ark_dev_caps ark_device_caps[] = {
> +	SET_DEV_CAPS(0x1015, true, false),
> +	SET_DEV_CAPS(0x1016, true, false),
> +	{.device_id = 0,}
> +};
> +
> +/* Forward declarations */
> +static const struct rte_bbdev_ops ark_bbdev_pmd_ops;
> +
> +static int
> +check_for_ext(struct ark_bbdevice *ark)
> +{
> +	/* Get the env */
> +	const char *dllpath = getenv("ARK_BBEXT_PATH");
> +
> +	if (dllpath == NULL) {
> +		ARK_BBDEV_LOG(DEBUG, "EXT NO dll path specified\n");
> +		return 0;
> +	}
> +	ARK_BBDEV_LOG(NOTICE, "EXT found dll path at %s\n", dllpath);
> +
> +	/* Open and load the .so */
> +	ark->d_handle = dlopen(dllpath, RTLD_LOCAL | RTLD_LAZY);
> +	if (ark->d_handle == NULL) {
> +		ARK_BBDEV_LOG(ERR, "Could not load user extension %s\n",
> +			dllpath);
> +		return -1;
> +	}
> +	ARK_BBDEV_LOG(DEBUG, "SUCCESS: loaded user extension %s\n",
> +		dllpath);
> +
> +	/* Get the entry points */
> +	ark->user_ext.dev_init =
> +		(void *(*)(struct rte_bbdev *, void *))
> +		dlsym(ark->d_handle, "rte_pmd_ark_bbdev_init");
> +	ark->user_ext.dev_uninit =
> +		(int (*)(struct rte_bbdev *, void *))
> +		dlsym(ark->d_handle, "rte_pmd_ark_dev_uninit");
> +	ark->user_ext.dev_start =
> +		(int (*)(struct rte_bbdev *, void *))
> +		dlsym(ark->d_handle, "rte_pmd_ark_bbdev_start");
> +	ark->user_ext.dev_stop =
> +		(int (*)(struct rte_bbdev *, void *))
> +		dlsym(ark->d_handle, "rte_pmd_ark_bbdev_stop");
> +	ark->user_ext.dequeue_ldpc_dec =
> +		(int (*)(struct rte_bbdev *,
> +			struct rte_bbdev_dec_op *,
> +			uint32_t *,
> +			void *))
> +		dlsym(ark->d_handle, "rte_pmd_ark_bbdev_dequeue_ldpc_dec");
> +	ark->user_ext.enqueue_ldpc_dec =
> +		(int (*)(struct rte_bbdev *,
> +			struct rte_bbdev_dec_op *,
> +			uint32_t *,
> +			uint8_t *,
> +			void *))
> +		dlsym(ark->d_handle, "rte_pmd_ark_bbdev_enqueue_ldpc_dec");
> +	ark->user_ext.dequeue_ldpc_enc =
> +		(int (*)(struct rte_bbdev *,
> +			struct rte_bbdev_enc_op *,
> +			uint32_t *,
> +			void *))
> +		dlsym(ark->d_handle, "rte_pmd_ark_bbdev_dequeue_ldpc_enc");
> +	ark->user_ext.enqueue_ldpc_enc =
> +		(int (*)(struct rte_bbdev *,
> +			struct rte_bbdev_enc_op *,
> +			uint32_t *,
> +			uint8_t *,
> +			void *))
> +		dlsym(ark->d_handle, "rte_pmd_ark_bbdev_enqueue_ldpc_enc");
> +
> +	return 0;
> +}
> +
> +/* queue */
> +struct ark_bbdev_queue {
> +	struct ark_bbdevice *ark_bbdev;
> +
> +	struct rte_ring *active_ops; /* Ring for processed packets */
> +
> +	/* RX components */
> +	/* array of physical addresses of the mbuf data pointer */
> +	rte_iova_t *rx_paddress_q;
> +	struct ark_udm_t *udm;
> +	struct ark_mpu_t *rx_mpu;
> +
> +	/* TX components */
> +	union ark_tx_meta *tx_meta_q;
> +	struct ark_mpu_t *tx_mpu;
> +	struct ark_ddm_t *ddm;
> +
> +	/* */
> +	uint32_t tx_queue_mask;
> +	uint32_t rx_queue_mask;
> +
> +	int32_t rx_seed_index;		/* step 1 set with empty mbuf */
> +	int32_t rx_cons_index;		/* step 3 consumed by driver */
> +
> +	/* 3 indexes to the paired data rings. */
> +	int32_t tx_prod_index;		/* where to put the next one */
> +	int32_t tx_free_index;		/* local copy of tx_cons_index */
> +
> +	/* separate cache line -- written by FPGA -- RX announce */
> +	RTE_MARKER cacheline1 __rte_cache_min_aligned;
> +	volatile int32_t rx_prod_index;	/* step 2 filled by FPGA */
> +
> +	/* Separate cache line -- written by FPGA -- RX completion */
> +	RTE_MARKER cacheline2 __rte_cache_min_aligned;
> +	volatile int32_t tx_cons_index;	/* hw is done, can be freed */
> +} __rte_cache_aligned;
> +
> +static int
> +ark_bb_hw_q_setup(struct rte_bbdev *bbdev, uint16_t q_id, uint16_t queue_size)
> +{
> +	struct ark_bbdev_queue *q = bbdev->data->queues[q_id].queue_private;
> +
> +	rte_iova_t queue_base;
> +	rte_iova_t phys_addr_q_base;
> +	rte_iova_t phys_addr_prod_index;
> +	rte_iova_t phys_addr_cons_index;
> +
> +	if (ark_mpu_verify(q->rx_mpu, sizeof(rte_iova_t))) {
> +		ARK_BBDEV_LOG(ERR, "Illegal hw/sw configuration RX queue");
> +		return -1;
> +	}
> +	ARK_BBDEV_LOG(DEBUG, "ark_bb_q setup %u:%u",
> +		bbdev->data->dev_id, q_id);
> +
> +	/* RX MPU */
> +	phys_addr_q_base = rte_malloc_virt2iova(q->rx_paddress_q);
> +	/* Force TX mode on MPU to match bbdev behavior */
> +	ark_mpu_configure(q->rx_mpu, phys_addr_q_base, queue_size, 1);
> +	ark_mpu_start(q->rx_mpu);
> +
> +	/* UDM */
> +	queue_base = rte_malloc_virt2iova(q);
> +	phys_addr_prod_index = queue_base +
> +		offsetof(struct ark_bbdev_queue, rx_prod_index);
> +	ark_udm_write_addr(q->udm, phys_addr_prod_index);
> +	ark_udm_queue_enable(q->udm, 1);
> +
> +	/* TX MPU */
> +	phys_addr_q_base = rte_malloc_virt2iova(q->tx_meta_q);
> +	ark_mpu_configure(q->tx_mpu, phys_addr_q_base,
> +		BB_ARK_TX_Q_FACTOR * queue_size, 1);
> +	ark_mpu_start(q->tx_mpu);
> +
> +	/* DDM */
> +	phys_addr_cons_index = queue_base +
> +		offsetof(struct ark_bbdev_queue, tx_cons_index);
> +	ark_ddm_queue_setup(q->ddm, phys_addr_cons_index);
> +	ark_ddm_queue_reset_stats(q->ddm);
> +
> +	return 0;
> +}
> +
> +/* Setup a queue */
> +static int
> +ark_bb_q_setup(struct rte_bbdev *bbdev, uint16_t q_id,
> +	const struct rte_bbdev_queue_conf *queue_conf)
> +{
> +	struct ark_bbdev_queue *q;
> +	struct ark_bbdevice *ark_bb = bbdev->data->dev_private;
> +
> +	const uint32_t queue_size = queue_conf->queue_size;
> +	const int socket_id = queue_conf->socket;
> +	const uint64_t pg_sz = sysconf(_SC_PAGESIZE);
> +	char ring_name[RTE_RING_NAMESIZE];
> +
> +	/* Configuration checks */
> +	if (!rte_is_power_of_2(queue_size)) {
> +		ARK_BBDEV_LOG(ERR,
> +			"Configuration queue size must be power of two %u",
> +			queue_size);
> +		return -EINVAL;
> +	}
> +
> +	if (RTE_PKTMBUF_HEADROOM < ARK_RX_META_SIZE) {
> +		ARK_BBDEV_LOG(ERR,
> +			"Error: Ark bbdev requires head room > %d bytes (%s)",
> +			ARK_RX_META_SIZE, __func__);
> +		return -EINVAL;
> +	}
> +
> +	/* Allocate the queue data structure. */
> +	q = rte_zmalloc_socket(RTE_STR(DRIVER_NAME), sizeof(*q),
> +		RTE_CACHE_LINE_SIZE, queue_conf->socket);
> +	if (q == NULL) {
> +		ARK_BBDEV_LOG(ERR, "Failed to allocate queue memory");
> +		return -ENOMEM;
> +	}
> +	bbdev->data->queues[q_id].queue_private = q;
> +	q->ark_bbdev = ark_bb;
> +
> +	/* RING */
> +	snprintf(ring_name, RTE_RING_NAMESIZE, RTE_STR(DRIVER_NAME) "%u:%u",
> +		bbdev->data->dev_id, q_id);
> +	q->active_ops = rte_ring_create(ring_name,
> +		queue_size,
> +		queue_conf->socket,
> +		RING_F_SP_ENQ | RING_F_SC_DEQ);
> +	if (q->active_ops == NULL) {
> +		ARK_BBDEV_LOG(ERR, "Failed to create ring");
> +		goto free_all;
> +	}
> +
> +	q->rx_queue_mask = queue_size - 1;
> +	q->tx_queue_mask = (BB_ARK_TX_Q_FACTOR * queue_size) - 1;
> +
> +	/* Each mbuf requires 2 to 4 objects, factor by BB_ARK_TX_Q_FACTOR */
> +	q->tx_meta_q =
> +		rte_zmalloc_socket("Ark_bb_txqueue meta",
> +			queue_size * BB_ARK_TX_Q_FACTOR *
> +			sizeof(union ark_tx_meta),
> +			pg_sz,
> +			socket_id);
> +	if (q->tx_meta_q == 0) {
> +		ARK_BBDEV_LOG(ERR, "Failed to allocate queue memory in %s",
> +			__func__);
> +		goto free_all;
> +	}
> +
> +	q->ddm = RTE_PTR_ADD(ark_bb->ddm.v, q_id * ARK_DDM_QOFFSET);
> +	q->tx_mpu = RTE_PTR_ADD(ark_bb->mputx.v, q_id * ARK_MPU_QOFFSET);
> +
> +	q->rx_paddress_q =
> +		rte_zmalloc_socket("ark_bb_rx_paddress_q",
> +			queue_size * sizeof(rte_iova_t),
> +			pg_sz,
> +			socket_id);
> +	if (q->rx_paddress_q == 0) {
> +		ARK_BBDEV_LOG(ERR, "Failed to allocate queue memory in %s",
> +			__func__);
> +		goto free_all;
> +	}
> +	q->udm = RTE_PTR_ADD(ark_bb->udm.v, q_id * ARK_UDM_QOFFSET);
> +	q->rx_mpu = RTE_PTR_ADD(ark_bb->mpurx.v, q_id * ARK_MPU_QOFFSET);
> +
> +	/* Structures have been configured, set the hardware */
> +	return ark_bb_hw_q_setup(bbdev, q_id, queue_size);
> +
> +free_all:
> +	rte_free(q->tx_meta_q);
> +	rte_free(q->rx_paddress_q);
> +	rte_free(q);
> +	return -EFAULT;
> +}
> +
> +/* Release queue */
> +static int
> +ark_bb_q_release(struct rte_bbdev *bbdev, uint16_t q_id)
> +{
> +	struct ark_bbdev_queue *q = bbdev->data->queues[q_id].queue_private;
> +
> +	ark_mpu_dump(q->rx_mpu, "rx_MPU release", q_id);
> +	ark_mpu_dump(q->tx_mpu, "tx_MPU release", q_id);
> +
> +	rte_ring_free(q->active_ops);
> +	rte_free(q->tx_meta_q);
> +	rte_free(q->rx_paddress_q);
> +	rte_free(q);
> +	bbdev->data->queues[q_id].queue_private = NULL;
> +
> +	ARK_BBDEV_LOG(DEBUG, "released device queue %u:%u",
> +		bbdev->data->dev_id, q_id);
> +	return 0;
> +}
> +
> +static int
> +ark_bbdev_start(struct rte_bbdev *bbdev)
> +{
> +	struct ark_bbdevice *ark_bb = bbdev->data->dev_private;
> +
> +	ARK_BBDEV_LOG(DEBUG, "Starting device %u", bbdev->data->dev_id);
> +	if (ark_bb->started)
> +		return 0;
> +
> +	/* User start hook */
> +	if (ark_bb->user_ext.dev_start)
> +		ark_bb->user_ext.dev_start(bbdev,
> +			ark_bb->user_data);
> +
> +	ark_bb->started = 1;
> +
> +	if (ark_bb->start_pg)
> +		ark_pktchkr_run(ark_bb->pc);
> +
> +	if (ark_bb->start_pg) {
> +		pthread_t thread;
> +
> +		/* Delay packet generator start to allow the hardware to be ready.
> +		 * This is only used for sanity checking with internal generator.
> +		 */
> +		if (pthread_create(&thread, NULL,
> +			ark_pktgen_delay_start, ark_bb->pg)) {
> +			ARK_BBDEV_LOG(ERR, "Could not create pktgen starter thread");
> +			return -1;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static void
> +ark_bbdev_stop(struct rte_bbdev *bbdev)
> +{
> +	struct ark_bbdevice *ark_bb = bbdev->data->dev_private;
> +
> +	ARK_BBDEV_LOG(DEBUG, "Stopping device %u", bbdev->data->dev_id);
> +
> +	if (!ark_bb->started)
> +		return;
> +
> +	/* Stop the packet generator */
> +	if (ark_bb->start_pg)
> +		ark_pktgen_pause(ark_bb->pg);
> +
> +	/* STOP RX Side */
> +	ark_udm_dump_stats(ark_bb->udm.v, "Post stop");
> +
> +	/* Stop the packet checker if it is running */
> +	if (ark_bb->start_pg) {
> +		ark_pktchkr_dump_stats(ark_bb->pc);
> +		ark_pktchkr_stop(ark_bb->pc);
> +	}
> +
> +	/* User stop hook */
> +	if (ark_bb->user_ext.dev_stop)
> +		ark_bb->user_ext.dev_stop(bbdev,
> +			ark_bb->user_data);
> +}
> +
> +static int
> +ark_bb_q_start(struct rte_bbdev *bbdev, uint16_t q_id)
> +{
> +	struct ark_bbdev_queue *q = bbdev->data->queues[q_id].queue_private;
> +
> +	ARK_BBDEV_LOG(DEBUG, "ark_bb_q start %u:%u", bbdev->data->dev_id, q_id);
> +	ark_ddm_queue_enable(q->ddm, 1);
> +	ark_udm_queue_enable(q->udm, 1);
> +	ark_mpu_start(q->tx_mpu);
> +	ark_mpu_start(q->rx_mpu);
> +	return 0;
> +}
> +
> +static int
> +ark_bb_q_stop(struct rte_bbdev *bbdev, uint16_t q_id)
> +{
> +	struct ark_bbdev_queue *q = bbdev->data->queues[q_id].queue_private;
> +	int cnt = 0;
> +
> +	ARK_BBDEV_LOG(DEBUG, "ark_bb_q stop %u:%u", bbdev->data->dev_id, q_id);
> +
> +	while (q->tx_cons_index != q->tx_prod_index) {
> +		usleep(100);
> +		if (cnt++ > 10000) {
> +			fprintf(stderr, "XXXX %s(%u, %u %u) %d Failure\n",
> +				__func__, q_id,
> +				q->tx_cons_index, q->tx_prod_index,
> +				(int32_t)(q->tx_prod_index - q->tx_cons_index));
> +			return -1;
> +		}
> +	}
> +
> +	ark_mpu_stop(q->tx_mpu);
> +	ark_mpu_stop(q->rx_mpu);
> +	ark_udm_queue_enable(q->udm, 0);
> +	ark_ddm_queue_enable(q->ddm, 0);
> +	return 0;
> +}
> +
> +/* ************************************************************************* */
> +/* Common function for all enqueue and dequeue ops */
> +static inline void
> +ark_bb_enqueue_desc_fill(struct ark_bbdev_queue *q,
> +	struct rte_mbuf *mbuf,
> +	uint16_t offset,	/* Extra offset */
> +	uint8_t flags,
> +	uint32_t *meta,
> +	uint8_t meta_cnt	/* 0, 1 or 2 */
> +	)
> +{
> +	union ark_tx_meta *tx_meta;
> +	int32_t tx_idx;
> +	uint8_t m;
> +
> +	/* Header */
> +	tx_idx = q->tx_prod_index & q->tx_queue_mask;
> +	tx_meta = &q->tx_meta_q[tx_idx];
> +	tx_meta->data_len = rte_pktmbuf_data_len(mbuf) - offset;
> +	tx_meta->flags = flags;
> +	tx_meta->meta_cnt = meta_cnt;
> +	tx_meta->user1 = *meta++;
> +	q->tx_prod_index++;
> +
> +	for (m = 0; m < meta_cnt; m++) {
> +		tx_idx = q->tx_prod_index & q->tx_queue_mask;
> +		tx_meta = &q->tx_meta_q[tx_idx];
> +		tx_meta->usermeta0 = *meta++;
> +		tx_meta->usermeta1 = *meta++;
> +		q->tx_prod_index++;
> +	}
> +
> +	tx_idx = q->tx_prod_index & q->tx_queue_mask;
> +	tx_meta = &q->tx_meta_q[tx_idx];
> +	tx_meta->physaddr = rte_mbuf_data_iova(mbuf) + offset;
> +	q->tx_prod_index++;
> +}
> +
> +static inline void
> +ark_bb_enqueue_segmented_pkt(struct ark_bbdev_queue *q,
> +	struct rte_mbuf *mbuf,
> +	uint16_t offset,
> +	uint32_t *meta, uint8_t meta_cnt)
> +{
> +	struct rte_mbuf *next;
> +	uint8_t flags = ARK_DDM_SOP;
> +
> +	while (mbuf != NULL) {
> +		next = mbuf->next;
> +		flags |= (next == NULL) ? ARK_DDM_EOP : 0;
> +
> +		ark_bb_enqueue_desc_fill(q, mbuf, offset, flags,
> +			meta, meta_cnt);
> +
> +		flags &= ~ARK_DDM_SOP;	/* drop SOP flags */
> +		meta_cnt = 0;
> +		offset = 0;
> +
> +		mbuf = next;
> +	}
> +}
> +
> +static inline int
> +ark_bb_enqueue_common(struct ark_bbdev_queue *q,
> +	struct rte_mbuf *m_in, struct rte_mbuf *m_out,
> +	uint16_t offset,
> +	uint32_t *meta, uint8_t meta_cnt)
> +{
> +	int32_t free_queue_space;
> +	int32_t rx_idx;
> +
> +	/* TX side limit */
> +	free_queue_space = q->tx_queue_mask -
> +		(q->tx_prod_index - q->tx_free_index);
> +	if (unlikely(free_queue_space < (2 + (2 * m_in->nb_segs))))
> +		return 1;
> +
> +	/* RX side limit */
> +	free_queue_space = q->rx_queue_mask -
> +		(q->rx_seed_index - q->rx_cons_index);
> +	if (unlikely(free_queue_space < m_out->nb_segs))
> +		return 1;
> +
> +	if (unlikely(m_in->nb_segs > 1))
> +		ark_bb_enqueue_segmented_pkt(q, m_in, offset, meta, meta_cnt);
> +	else
> +		ark_bb_enqueue_desc_fill(q, m_in, offset,
> +			ARK_DDM_SOP | ARK_DDM_EOP,
> +			meta, meta_cnt);
> +
> +	/* We assume that the return mbuf has exactly enough segments for
> +	 * return data, which is 2048 bytes per segment.
> +	 */
> +	do {
> +		rx_idx = q->rx_seed_index & q->rx_queue_mask;
> +		q->rx_paddress_q[rx_idx] = m_out->buf_iova;
> +		q->rx_seed_index++;
> +		m_out = m_out->next;
> +	} while (m_out);
> +
> +	return 0;
> +}
> +
> +static inline void
> +ark_bb_enqueue_finalize(struct rte_bbdev_queue_data *q_data,
> +	struct ark_bbdev_queue *q,
> +	void **ops,
> +	uint16_t nb_ops, uint16_t nb)
> +{
> +	/* BBDEV global stats */
> +	/* These are not really errors, not sure why bbdev counts these. */
> +	q_data->queue_stats.enqueue_err_count += nb_ops - nb;
> +	q_data->queue_stats.enqueued_count += nb;
> +
> +	/* Notify HW */
> +	if (unlikely(nb == 0))
> +		return;
> +
> +	ark_mpu_set_producer(q->tx_mpu, q->tx_prod_index);
> +	ark_mpu_set_producer(q->rx_mpu, q->rx_seed_index);
> +
> +	/* Queue info for dequeue-side processing */
> +	rte_ring_enqueue_burst(q->active_ops,
> +		(void **)ops, nb, NULL);
> +}
> +
> +static int
> +ark_bb_dequeue_segmented(struct rte_mbuf *mbuf0,
> +	int32_t *prx_cons_index,
> +	uint16_t pkt_len
> +	)
> +{
> +	struct rte_mbuf *mbuf;
> +	uint16_t data_len;
> +	uint16_t remaining;
> +	uint16_t segments = 1;
> +
> +	data_len = RTE_MIN(pkt_len, RTE_MBUF_DEFAULT_DATAROOM);
> +	remaining = pkt_len - data_len;
> +
> +	mbuf = mbuf0;
> +	mbuf0->data_len = data_len;
> +	while (remaining) {
> +		segments += 1;
> +		mbuf = mbuf->next;
> +		if (unlikely(mbuf == 0)) {
> +			ARK_BBDEV_LOG(CRIT, "Expected chained mbuf with "
> +				"at least %d segments for dequeue "
> +				"of packet length %d",
> +				segments, pkt_len);
> +			return 1;
> +		}
> +
> +		data_len = RTE_MIN(remaining, RTE_MBUF_DEFAULT_DATAROOM);
> +		remaining -= data_len;
> +
> +		mbuf->data_len = data_len;
> +		*prx_cons_index += 1;
> +	}
> +
> +	if (mbuf->next != 0) {
> +		ARK_BBDEV_LOG(CRIT, "Expected chained mbuf with "
> +			"exactly %d segments for dequeue "
> +			"of packet length %d. Found %d segments",
> +			segments, pkt_len, mbuf0->nb_segs);
> +		return 1;
> +	}
> +	return 0;
> +}
> +
> +/* ************************************************************************* */
> +/* LDPC Decode ops */
> +static int16_t
> +ark_bb_enqueue_ldpc_dec_one_op(struct ark_bbdev_queue *q,
> +	struct rte_bbdev_dec_op *this_op)
> +{
> +	struct rte_bbdev_op_ldpc_dec *ldpc_dec_op = &this_op->ldpc_dec;
> +	struct rte_mbuf *m_in = ldpc_dec_op->input.data;
> +	struct rte_mbuf *m_out = ldpc_dec_op->hard_output.data;
> +	uint16_t offset = ldpc_dec_op->input.offset;
> +	uint32_t meta[5] = {0};
> +	uint8_t meta_cnt = 0;
> +
> +	if (q->ark_bbdev->user_ext.enqueue_ldpc_dec) {
> +		if (q->ark_bbdev->user_ext.enqueue_ldpc_dec(q->ark_bbdev->bbdev,
> +			this_op,
> +			meta,
> +			&meta_cnt,
> +			q->ark_bbdev->user_data)) {
> +			ARK_BBDEV_LOG(ERR, "%s failed", __func__);
> +			return 1;
> +		}
> +	}
> +
> +	return ark_bb_enqueue_common(q, m_in, m_out, offset, meta, meta_cnt);
> +}
> +
> +/* Enqueue LDPC Decode -- burst */
> +static uint16_t
> +ark_bb_enqueue_ldpc_dec_ops(struct rte_bbdev_queue_data *q_data,
> +	struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
> +{
> +	struct ark_bbdev_queue *q = q_data->queue_private;
> +	unsigned int max_enq;
> +	uint16_t nb;
> +
> +	max_enq = rte_ring_free_count(q->active_ops);
> +	max_enq = RTE_MIN(max_enq, nb_ops);
> +	for (nb = 0; nb < max_enq; nb++) {
> +		if (ark_bb_enqueue_ldpc_dec_one_op(q, ops[nb]))
> +			break;
> +	}
> +
> +	ark_bb_enqueue_finalize(q_data, q, (void **)ops, nb_ops, nb);
> +	return nb;
> +}
> +
> +/* ************************************************************************* */
> +/* Dequeue LDPC Decode -- burst */
> +static uint16_t
> +ark_bb_dequeue_ldpc_dec_ops(struct rte_bbdev_queue_data *q_data,
> +	struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
> +{
> +	struct ark_bbdev_queue *q = q_data->queue_private;
> +	struct rte_mbuf *mbuf;
> +	struct rte_bbdev_dec_op *this_op;
> +	struct ark_rx_meta *meta;
> +	uint32_t *usermeta;
> +
> +	uint16_t nb = 0;
> +	int32_t prod_index = q->rx_prod_index;
> +	int32_t cons_index = q->rx_cons_index;
> +
> +	q->tx_free_index = q->tx_cons_index;
> +
> +	while ((prod_index - cons_index) > 0) {
> +		if (rte_ring_dequeue(q->active_ops, (void **)&this_op)) {
> +			ARK_BBDEV_LOG(ERR, "%s data ready but no op!",
> +				__func__);
> +			q_data->queue_stats.dequeue_err_count += 1;
> +			break;
> +		}
> +		ops[nb] = this_op;
> +
> +		mbuf = this_op->ldpc_dec.hard_output.data;
> +
> +		/* META DATA embedded in headroom */
> +		meta = RTE_PTR_ADD(mbuf->buf_addr, ARK_RX_META_OFFSET);
> +
> +		mbuf->pkt_len = meta->pkt_len;
> +		mbuf->data_len = meta->pkt_len;
> +
> +		if (unlikely(meta->pkt_len > ARK_RX_MAX_NOCHAIN)) {
> +			if (ark_bb_dequeue_segmented(mbuf, &cons_index,
> +				meta->pkt_len))
> +				q_data->queue_stats.dequeue_err_count += 1;
> +		} else if (mbuf->next != 0) {
> +			ARK_BBDEV_LOG(CRIT, "Expected mbuf with "
> +				"exactly 1 segment for dequeue "
> +				"of packet length %d. Found %d segments",
> +				meta->pkt_len, mbuf->nb_segs);
> +			q_data->queue_stats.dequeue_err_count += 1;
> +		}
> +
> +		usermeta = meta->user_meta;
> +
> +		/* User's meta move from Arkville HW to bbdev OP */
> +		if (q->ark_bbdev->user_ext.dequeue_ldpc_dec) {
> +			if (q->ark_bbdev->user_ext.dequeue_ldpc_dec(q->ark_bbdev->bbdev,
> +				this_op,
> +				usermeta,
> +				q->ark_bbdev->user_data)) {
> +				ARK_BBDEV_LOG(ERR, "%s failed", __func__);
> +				return 1;
> +			}
> +		}
> +
> +		nb++;
> +		cons_index++;
> +		if (nb >= nb_ops)
> +			break;
> +	}
> +
> +	q->rx_cons_index = cons_index;
> +
> +	/* BBdev stats */
> +	q_data->queue_stats.dequeued_count += nb;
> +
> +	return nb;
> +}
> +
> +/* ************************************************************************* */
> +/* Enqueue LDPC Encode */
> +static int16_t
> +ark_bb_enqueue_ldpc_enc_one_op(struct ark_bbdev_queue *q,
> +	struct rte_bbdev_enc_op *this_op)
> +{
> +	struct rte_bbdev_op_ldpc_enc *ldpc_enc_op = &this_op->ldpc_enc;
> +	struct rte_mbuf *m_in = ldpc_enc_op->input.data;
> +	struct rte_mbuf *m_out = ldpc_enc_op->output.data;
> +	uint16_t offset = ldpc_enc_op->input.offset;
> +	uint32_t meta[5] = {0};
> +	uint8_t meta_cnt = 0;
> +
> +	/* User's meta move from bbdev op to Arkville HW */
> +	if (q->ark_bbdev->user_ext.enqueue_ldpc_enc) {
> +		if (q->ark_bbdev->user_ext.enqueue_ldpc_enc(q->ark_bbdev->bbdev,
> +			this_op,
> +			meta,
> +			&meta_cnt,
> +			q->ark_bbdev->user_data)) {
> +			ARK_BBDEV_LOG(ERR, "%s failed", __func__);
> +			return 1;
> +		}
> +	}
> +
> +	return ark_bb_enqueue_common(q, m_in, m_out, offset, meta, meta_cnt);
> +}
> +
> +/* Enqueue LDPC Encode -- burst */
> +static uint16_t
> +ark_bb_enqueue_ldpc_enc_ops(struct rte_bbdev_queue_data *q_data,
> +	struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
> +{
> +	struct ark_bbdev_queue *q = q_data->queue_private;
> +	unsigned int max_enq;
> +	uint16_t nb;
> +
> +	max_enq = rte_ring_free_count(q->active_ops);
> +	max_enq
Found %d " > + "segments", > + meta->pkt_len, mbuf->nb_segs); > + q_data->queue_stats.dequeue_err_count +=3D 1; > + } > + > + usermeta =3D meta->user_meta; > + > + /* User's meta move from Arkville HW to bbdev OP */ > + if (q->ark_bbdev->user_ext.dequeue_ldpc_dec) { > + if (q->ark_bbdev->user_ext.dequeue_ldpc_dec(q- >=20 > ark_bbdev->bbdev, > + this_op, > + usermeta, > + q- >=20 > ark_bbdev->user_data)) { > + ARK_BBDEV_LOG(ERR, "%s failed", __func__); > + return 1; > + } > + } > + > + nb++; > + cons_index++; > + if (nb >=3D nb_ops) > + break; > + } > + > + q->rx_cons_index =3D cons_index; > + > + /* BBdev stats */ > + q_data->queue_stats.dequeued_count +=3D nb; > + > + return nb; > +} > + > +/************************************************************** > ******** > +****/ > +/* Enqueue LDPC Encode */ > +static int16_t > +ark_bb_enqueue_ldpc_enc_one_op(struct ark_bbdev_queue *q, > + struct rte_bbdev_enc_op *this_op) { > + struct rte_bbdev_op_ldpc_enc *ldpc_enc_op =3D &this_op->ldpc_enc; > + struct rte_mbuf *m_in =3D ldpc_enc_op->input.data; > + struct rte_mbuf *m_out =3D ldpc_enc_op->output.data; > + uint16_t offset =3D ldpc_enc_op->input.offset; > + uint32_t meta[5] =3D {0}; > + uint8_t meta_cnt =3D 0; > + > + /* User's meta move from bbdev op to Arkville HW */ > + if (q->ark_bbdev->user_ext.enqueue_ldpc_enc) { > + if (q->ark_bbdev->user_ext.enqueue_ldpc_enc(q->ark_bbdev- >=20 > bbdev, > + this_op, > + meta, > + &meta_cnt, > + q->ark_bbdev- >=20 > user_data)) { > + ARK_BBDEV_LOG(ERR, "%s failed", __func__); > + return 1; > + } > + } > + > + return ark_bb_enqueue_common(q, m_in, m_out, offset, meta, > meta_cnt); > +} > + > +/* Enqueue LDPC Encode -- burst */ > +static uint16_t > +ark_bb_enqueue_ldpc_enc_ops(struct rte_bbdev_queue_data *q_data, > + struct rte_bbdev_enc_op **ops, uint16_t nb_ops) { > + struct ark_bbdev_queue *q =3D q_data->queue_private; > + unsigned int max_enq; > + uint16_t nb; > + > + max_enq =3D rte_ring_free_count(q->active_ops); > + max_enq 
=3D RTE_MIN(max_enq, nb_ops); > + for (nb =3D 0; nb < max_enq; nb++) { > + if (ark_bb_enqueue_ldpc_enc_one_op(q, ops[nb])) > + break; > + } > + > + ark_bb_enqueue_finalize(q_data, q, (void **)ops, nb_ops, nb); > + return nb; > +} > + > +/* Dequeue LDPC Encode -- burst */ > +static uint16_t > +ark_bb_dequeue_ldpc_enc_ops(struct rte_bbdev_queue_data *q_data, > + struct rte_bbdev_enc_op **ops, uint16_t nb_ops) { > + struct ark_bbdev_queue *q =3D q_data->queue_private; > + struct rte_mbuf *mbuf; > + struct rte_bbdev_enc_op *this_op; > + struct ark_rx_meta *meta; > + uint32_t *usermeta; > + > + uint16_t nb =3D 0; > + int32_t prod_index =3D q->rx_prod_index; > + int32_t cons_index =3D q->rx_cons_index; > + > + q->tx_free_index =3D q->tx_cons_index; > + > + while ((prod_index - cons_index) > 0) { > + if (rte_ring_dequeue(q->active_ops, (void **)&this_op)) { > + ARK_BBDEV_LOG(ERR, "%s data ready but no op!", > + __func__); > + q_data->queue_stats.dequeue_err_count +=3D 1; > + break; > + } > + ops[nb] =3D this_op; > + > + mbuf =3D this_op->ldpc_enc.output.data; > + > + /* META DATA embedded in headroom */ > + meta =3D RTE_PTR_ADD(mbuf->buf_addr, > ARK_RX_META_OFFSET); > + > + mbuf->pkt_len =3D meta->pkt_len; > + mbuf->data_len =3D meta->pkt_len; > + usermeta =3D meta->user_meta; > + > + if (unlikely(meta->pkt_len > ARK_RX_MAX_NOCHAIN)) { > + if (ark_bb_dequeue_segmented(mbuf, &cons_index, > + meta->pkt_len)) > + q_data->queue_stats.dequeue_err_count +=3D > 1; > + } else if (mbuf->next !=3D 0) { > + ARK_BBDEV_LOG(CRIT, "Expected mbuf with " > + "at exactly 1 segments for dequeue " > + "of packet length %d. 
Found %d " > + "segments", > + meta->pkt_len, mbuf->nb_segs); > + q_data->queue_stats.dequeue_err_count +=3D 1; > + } > + > + /* User's meta move from Arkville HW to bbdev OP */ > + if (q->ark_bbdev->user_ext.dequeue_ldpc_enc) { > + if (q->ark_bbdev->user_ext.dequeue_ldpc_enc(q- >=20 > ark_bbdev->bbdev, > + this_op, > + usermeta, > + q- >=20 > ark_bbdev->user_data)) { > + ARK_BBDEV_LOG(ERR, "%s failed", __func__); > + return 1; > + } > + } > + > + nb++; > + cons_index++; > + if (nb >=3D nb_ops) > + break; > + } > + > + q->rx_cons_index =3D cons_index; > + > + /* BBdev stats */ > + q_data->queue_stats.dequeued_count +=3D nb; > + > + return nb; > +} > + > + > +/************************************************************** > ******** > +****/ > +/* > + *Initial device hardware configuration when device is opened > + * setup the DDM, and UDM; called once per PCIE device */ static int > +ark_bb_config_device(struct ark_bbdevice *ark_bb) { > + uint16_t num_q, i; > + struct ark_mpu_t *mpu; > + > + /* > + * Make sure that the packet director, generator and checker are in a > + * known state > + */ > + ark_bb->start_pg =3D 0; > + ark_bb->pg =3D ark_pktgen_init(ark_bb->pktgen.v, 0, 1); > + if (ark_bb->pg =3D=3D NULL) > + return -1; > + ark_pktgen_reset(ark_bb->pg); > + ark_bb->pc =3D ark_pktchkr_init(ark_bb->pktchkr.v, 0, 1); > + if (ark_bb->pc =3D=3D NULL) > + return -1; > + ark_pktchkr_stop(ark_bb->pc); > + ark_bb->pd =3D ark_pktdir_init(ark_bb->pktdir.v); > + if (ark_bb->pd =3D=3D NULL) > + return -1; > + > + /* Verify HW */ > + if (ark_udm_verify(ark_bb->udm.v)) > + return -1; > + if (ark_ddm_verify(ark_bb->ddm.v)) > + return -1; > + > + /* MPU reset */ > + mpu =3D ark_bb->mpurx.v; > + num_q =3D ark_api_num_queues(mpu); > + ark_bb->max_nb_queues =3D num_q; > + > + for (i =3D 0; i < num_q; i++) { > + ark_mpu_reset(mpu); > + mpu =3D RTE_PTR_ADD(mpu, ARK_MPU_QOFFSET); > + } > + > + ark_udm_configure(ark_bb->udm.v, > + RTE_PKTMBUF_HEADROOM, > + RTE_MBUF_DEFAULT_DATAROOM); > 
+ > + mpu =3D ark_bb->mputx.v; > + num_q =3D ark_api_num_queues(mpu); > + for (i =3D 0; i < num_q; i++) { > + ark_mpu_reset(mpu); > + mpu =3D RTE_PTR_ADD(mpu, ARK_MPU_QOFFSET); > + } > + > + ark_rqp_stats_reset(ark_bb->rqpacing); > + > + ARK_BBDEV_LOG(INFO, "packet director set to 0x%x", ark_bb- >=20 > pkt_dir_v); > + ark_pktdir_setup(ark_bb->pd, ark_bb->pkt_dir_v); > + > + if (ark_bb->pkt_gen_args[0]) { > + ARK_BBDEV_LOG(INFO, "Setting up the packet generator"); > + ark_pktgen_parse(ark_bb->pkt_gen_args); > + ark_pktgen_reset(ark_bb->pg); > + ark_pktgen_setup(ark_bb->pg); > + ark_bb->start_pg =3D 1; > + } > + > + return 0; > +} > + > +static int > +ark_bbdev_init(struct rte_bbdev *bbdev, struct rte_pci_driver *pci_drv) > +{ > + struct ark_bbdevice *ark_bb =3D bbdev->data->dev_private; > + struct rte_pci_device *pci_dev =3D RTE_DEV_TO_PCI(bbdev->device); > + bool rqpacing =3D false; > + int p; > + ark_bb->bbdev =3D bbdev; > + > + RTE_SET_USED(pci_drv); > + > + ark_bb->bar0 =3D (uint8_t *)pci_dev->mem_resource[0].addr; > + ark_bb->a_bar =3D (uint8_t *)pci_dev->mem_resource[2].addr; > + > + ark_bb->sysctrl.v =3D (void *)&ark_bb->bar0[ARK_SYSCTRL_BASE]; > + ark_bb->mpurx.v =3D (void *)&ark_bb->bar0[ARK_MPU_RX_BASE]; > + ark_bb->udm.v =3D (void *)&ark_bb->bar0[ARK_UDM_BASE]; > + ark_bb->mputx.v =3D (void *)&ark_bb->bar0[ARK_MPU_TX_BASE]; > + ark_bb->ddm.v =3D (void *)&ark_bb->bar0[ARK_DDM_BASE]; > + ark_bb->pktdir.v =3D (void *)&ark_bb->bar0[ARK_PKTDIR_BASE]; > + ark_bb->pktgen.v =3D (void *)&ark_bb->bar0[ARK_PKTGEN_BASE]; > + ark_bb->pktchkr.v =3D (void *)&ark_bb->bar0[ARK_PKTCHKR_BASE]; > + > + p =3D 0; > + while (ark_device_caps[p].device_id !=3D 0) { > + if (pci_dev->id.device_id =3D=3D ark_device_caps[p].device_id) { > + rqpacing =3D ark_device_caps[p].caps.rqpacing; > + break; > + } > + p++; > + } > + > + if (rqpacing) > + ark_bb->rqpacing =3D > + (struct ark_rqpace_t *)(ark_bb->bar0 + > ARK_RCPACING_BASE); > + else > + ark_bb->rqpacing =3D NULL; > + > + /* Check 
to see if there is an extension that we need to load */ > + if (check_for_ext(ark_bb)) > + return -1; > + > + ark_bb->started =3D 0; > + > + ARK_BBDEV_LOG(INFO, "Sys Ctrl Const =3D 0x%x HW Commit_ID: %08x", > + ark_bb->sysctrl.t32[4], > + rte_be_to_cpu_32(ark_bb->sysctrl.t32[0x20 / 4])); > + ARK_BBDEV_LOG(INFO, "Arkville HW Commit_ID: %08x", > + rte_be_to_cpu_32(ark_bb->sysctrl.t32[0x20 / 4])); > + > + /* If HW sanity test fails, return an error */ > + if (ark_bb->sysctrl.t32[4] !=3D 0xcafef00d) { > + ARK_BBDEV_LOG(ERR, > + "HW Sanity test has failed, expected constant" > + " 0x%x, read 0x%x (%s)", > + 0xcafef00d, > + ark_bb->sysctrl.t32[4], __func__); > + return -1; > + } > + > + return ark_bb_config_device(ark_bb); > +} > + > +static int > +ark_bbdev_uninit(struct rte_bbdev *bbdev) { > + struct ark_bbdevice *ark_bb =3D bbdev->data->dev_private; > + > + if (rte_eal_process_type() !=3D RTE_PROC_PRIMARY) > + return 0; > + > + ark_pktgen_uninit(ark_bb->pg); > + ark_pktchkr_uninit(ark_bb->pc); > + > + return 0; > +} > + > +static int > +ark_bbdev_probe(struct rte_pci_driver *pci_drv, > + struct rte_pci_device *pci_dev) > +{ > + struct rte_bbdev *bbdev =3D NULL; > + char dev_name[RTE_BBDEV_NAME_MAX_LEN]; > + struct ark_bbdevice *ark_bb; > + > + if (pci_dev =3D=3D NULL) > + return -EINVAL; > + > + rte_pci_device_name(&pci_dev->addr, dev_name, sizeof(dev_name)); > + > + /* Allocate memory to be used privately by drivers */ > + bbdev =3D rte_bbdev_allocate(pci_dev->device.name); > + if (bbdev =3D=3D NULL) > + return -ENODEV; > + > + /* allocate device private memory */ > + bbdev->data->dev_private =3D rte_zmalloc_socket(dev_name, > + sizeof(struct ark_bbdevice), > + RTE_CACHE_LINE_SIZE, > + pci_dev->device.numa_node); > + > + if (bbdev->data->dev_private =3D=3D NULL) { > + ARK_BBDEV_LOG(CRIT, > + "Allocate of %zu bytes for device \"%s\" > failed", > + sizeof(struct ark_bbdevice), dev_name); > + rte_bbdev_release(bbdev); > + return -ENOMEM; > + } > + ark_bb =3D 
bbdev->data->dev_private; > + /* Initialize ark_bb */ > + ark_bb->pkt_dir_v =3D 0x00110110; >=20 > There a few magic number that you may to define separately >=20 >=20 > + > + /* Fill HW specific part of device structure */ > + bbdev->device =3D &pci_dev->device; > + bbdev->intr_handle =3D NULL; > + bbdev->data->socket_id =3D pci_dev->device.numa_node; > + bbdev->dev_ops =3D &ark_bbdev_pmd_ops; > + if (pci_dev->device.devargs) > + parse_ark_bbdev_params(pci_dev->device.devargs->args, > ark_bb); > + > + > + /* Device specific initialization */ > + if (ark_bbdev_init(bbdev, pci_drv)) > + return -EIO; > + if (ark_bbdev_start(bbdev)) > + return -EIO; > + > + /* Core operations LDPC encode amd decode */ > + bbdev->enqueue_ldpc_enc_ops =3D ark_bb_enqueue_ldpc_enc_ops; > + bbdev->dequeue_ldpc_enc_ops =3D ark_bb_dequeue_ldpc_enc_ops; > + bbdev->enqueue_ldpc_dec_ops =3D ark_bb_enqueue_ldpc_dec_ops; > + bbdev->dequeue_ldpc_dec_ops =3D ark_bb_dequeue_ldpc_dec_ops; > + > + ARK_BBDEV_LOG(DEBUG, "bbdev id =3D %u [%s]", > + bbdev->data->dev_id, dev_name); > + > + return 0; > +} > + > +/* Uninitialize device */ > +static int > +ark_bbdev_remove(struct rte_pci_device *pci_dev) { > + struct rte_bbdev *bbdev; > + int ret; > + > + if (pci_dev =3D=3D NULL) > + return -EINVAL; > + > + /* Find device */ > + bbdev =3D rte_bbdev_get_named_dev(pci_dev->device.name); > + if (bbdev =3D=3D NULL) { > + ARK_BBDEV_LOG(CRIT, > + "Couldn't find HW dev \"%s\" to Uninitialize > it", > + pci_dev->device.name); > + return -ENODEV; > + } > + > + /* Arkville device close */ > + ark_bbdev_uninit(bbdev); > + rte_free(bbdev->data->dev_private); > + > + /* Close device */ > + ret =3D rte_bbdev_close(bbdev->data->dev_id); > + if (ret < 0) > + ARK_BBDEV_LOG(ERR, > + "Device %i failed to close during remove: %i", > + bbdev->data->dev_id, ret); > + > + return rte_bbdev_release(bbdev); > +} > + > +/* Operation for the PMD */ > +static const struct rte_bbdev_ops ark_bbdev_pmd_ops =3D { > + .info_get =3D 
ark_bbdev_info_get, >=20 > This is the one mentioned above. This is actually defined in your future = commit. > You could start with info_get, then queue, then processing for instance.= =20 >=20 >=20 > + .start =3D ark_bbdev_start, > + .stop =3D ark_bbdev_stop, > + .queue_setup =3D ark_bb_q_setup, > + .queue_release =3D ark_bb_q_release, > + .queue_start =3D ark_bb_q_start, > + .queue_stop =3D ark_bb_q_stop, > +}; > + > +static struct rte_pci_driver ark_bbdev_pmd_drv =3D { > + .probe =3D ark_bbdev_probe, > + .remove =3D ark_bbdev_remove, > + .id_table =3D pci_id_ark, > + .drv_flags =3D RTE_PCI_DRV_NEED_MAPPING > +}; > + > +RTE_PMD_REGISTER_PCI(DRIVER_NAME, ark_bbdev_pmd_drv); > +RTE_PMD_REGISTER_PCI_TABLE(DRIVER_NAME, pci_id_ark); > +RTE_PMD_REGISTER_PARAM_STRING(DRIVER_NAME, > + ARK_BBDEV_PKTGEN_ARG "=3D " > + ARK_BBDEV_PKTCHKR_ARG "=3D " > + ARK_BBDEV_PKTDIR_ARG "=3D" > + ); > diff --git a/drivers/baseband/ark/ark_bbext.h > b/drivers/baseband/ark/ark_bbext.h > new file mode 100644 > index 0000000000..2e9cc4ccf3 > --- /dev/null > +++ b/drivers/baseband/ark/ark_bbext.h > @@ -0,0 +1,163 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright (c) 2015-2018 Atomic Rules LLC */ > + > +#ifndef _ARK_BBEXT_H_ > +#define _ARK_BBEXT_H_ > + > +#include > +#include > + > +/* The following section lists function prototypes for Arkville's > + * baseband dynamic PMD extension. User's who create an extension > + * must include this file and define the necessary and desired > + * functions. Only 1 function is required for an extension, > + * rte_pmd_ark_bbdev_init(); all other functions prototypes in this > + * section are optional. > + * See documentation for compiling and use of extensions. > + */ > + > +/** > + * Extension prototype, required implementation if extensions are used. > + * Called during device probe to initialize the user structure > + * passed to other extension functions. This is called once for each > + * port of the device. 
> + * > + * @param dev > + * current device. > + * @param a_bar > + * access to PCIe device bar (application bar) and hence access to > + * user's portion of FPGA. > + * @return user_data > + * which will be passed to other extension functions. > + */ > +void *rte_pmd_ark_bbdev_init(struct rte_bbdev *dev, void *a_bar); > + > +/** > + * Extension prototype, optional implementation. > + * Called during device uninit. > + * > + * @param dev > + * current device. > + * @param user_data > + * user argument from dev_init() call. > + */ > +int rte_pmd_ark_bbdev_uninit(struct rte_bbdev *dev, void *user_data); > + > +/** > + * Extension prototype, optional implementation. > + * Called during rte_bbdev_start(). > + * > + * @param dev > + * current device. > + * @param user_data > + * user argument from dev_init() call. > + * @return (0) if successful. > + */ > +int rte_pmd_ark_bbdev_start(struct rte_bbdev *dev, void *user_data); > + > +/** > + * Extension prototype, optional implementation. > + * Called during rte_bbdev_stop(). > + * > + * @param dev > + * current device. > + * @param user_data > + * user argument from dev_init() call. > + * @return (0) if successful. > + */ > +int rte_pmd_ark_bbdev_stop(struct rte_bbdev *dev, void *user_data); > + > +/** > + * Extension prototype, optional implementation. > + * Called during rte_bbdev_dequeue_ldpc_dec_ops > + * > + * @param dev > + * current device. > + * @param user_data > + * user argument from dev_init() call. > + * @return (0) if successful. > + */ > +int rte_pmd_ark_bbdev_dequeue_ldpc_dec(struct rte_bbdev *dev, > + struct rte_bbdev_dec_op *this_op, > + uint32_t *usermeta, > + void *user_data); > + > +/** > + * Extension prototype, optional implementation. > + * Called during rte_bbdev_dequeue_ldpc_enc_ops > + * > + * @param dev > + * current device. > + * @param user_data > + * user argument from dev_init() call. > + * @return (0) if successful. 
> + */ > +int rte_pmd_ark_bbdev_dequeue_ldpc_enc(struct rte_bbdev *dev, > + struct rte_bbdev_enc_op *this_op, > + uint32_t *usermeta, > + void *user_data); > + > +/** > + * Extension prototype, optional implementation. > + * Called during rte_bbdev_enqueue_ldpc_dec_ops > + * > + * @param dev > + * current device. > + * @param user_data > + * user argument from dev_init() call. > + * @return (0) if successful. > + */ > +int rte_pmd_ark_bbdev_enqueue_ldpc_dec(struct rte_bbdev *dev, > + struct rte_bbdev_dec_op *this_op, > + uint32_t *usermeta, > + uint8_t *meta_cnt, > + void *user_data); > + > +/** > + * Extension prototype, optional implementation. > + * Called during rte_bbdev_enqueue_ldpc_enc_ops > + * > + * @param dev > + * current device. > + * @param user_data > + * user argument from dev_init() call. > + * @return (0) if successful. > + */ > +int rte_pmd_ark_bbdev_enqueue_ldpc_enc(struct rte_bbdev *dev, > + struct rte_bbdev_enc_op *this_op, > + uint32_t *usermeta, > + uint8_t *meta_cnt, > + void *user_data); > + > + > +struct arkbb_user_ext { > + void *(*dev_init)(struct rte_bbdev *dev, void *abar); > + int (*dev_uninit)(struct rte_bbdev *dev, void *udata); > + int (*dev_start)(struct rte_bbdev *dev, void *udata); > + int (*dev_stop)(struct rte_bbdev *dev, void *udata); > + int (*dequeue_ldpc_dec)(struct rte_bbdev *dev, > + struct rte_bbdev_dec_op *op, > + uint32_t *v, > + void *udata); > + int (*dequeue_ldpc_enc)(struct rte_bbdev *dev, > + struct rte_bbdev_enc_op *op, > + uint32_t *v, > + void *udata); > + int (*enqueue_ldpc_dec)(struct rte_bbdev *dev, > + struct rte_bbdev_dec_op *op, > + uint32_t *v, > + uint8_t *v1, > + void *udata); > + int (*enqueue_ldpc_enc)(struct rte_bbdev *dev, > + struct rte_bbdev_enc_op *op, > + uint32_t *v, > + uint8_t *v1, > + void *udata); > +}; > + > + > + > + > + > +#endif > -- > 2.25.1 >