From: "Chautru, Nicolas"
To: Hemant Agrawal, dev@dpdk.org, gakhil@marvell.com
CC: david.marchand@redhat.com, Nipun Gupta
Date: Wed, 14 Apr 2021 00:53:29 +0000
In-Reply-To: <20210413051715.26430-6-hemant.agrawal@nxp.com>
References: <20210410170252.4587-1-hemant.agrawal@nxp.com> <20210413051715.26430-1-hemant.agrawal@nxp.com> <20210413051715.26430-6-hemant.agrawal@nxp.com>
Subject: Re: [dpdk-dev] [PATCH v3 5/8] baseband/la12xx: add enqueue and dequeue support
List-Id: DPDK patches and discussions

Add support for enqueuing and dequeuing LDPC encode/decode operations to and from the modem device.
Signed-off-by: Nipun Gupta
Signed-off-by: Hemant Agrawal
---
 drivers/baseband/la12xx/bbdev_la12xx.c     | 397 ++++++++++++++++++++-
 drivers/baseband/la12xx/bbdev_la12xx_ipc.h |  37 ++
 2 files changed, 430 insertions(+), 4 deletions(-)

diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c
index 0a68686205..d1040987b2 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx.c
+++ b/drivers/baseband/la12xx/bbdev_la12xx.c
@@ -117,6 +117,10 @@ la12xx_queue_release(struct rte_bbdev *dev, uint16_t q_id)
 		((uint64_t) ((unsigned long) (A) \
 		- ((uint64_t)ipc_priv->hugepg_start.host_vaddr)))
 
+#define MODEM_P2V(A) \
+	((uint64_t) ((unsigned long) (A) \
+	+ (unsigned long)(ipc_priv->peb_start.host_vaddr)))
+
 static int ipc_queue_configure(uint32_t channel_id,
 		ipc_t instance, struct bbdev_la12xx_q_priv *q_priv)
 {
@@ -345,6 +349,387 @@ static const struct rte_bbdev_ops pmd_ops = {
 	.queue_release = la12xx_queue_release,
 	.start = la12xx_start
 };
+
+static int
+fill_feca_desc_enc(struct bbdev_la12xx_q_priv *q_priv,
+		   struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
+		   struct rte_bbdev_enc_op *bbdev_enc_op,
+		   struct rte_bbdev_op_data *in_op_data)
+{
+	RTE_SET_USED(q_priv);
+	RTE_SET_USED(bbdev_ipc_op);
+	RTE_SET_USED(bbdev_enc_op);
+	RTE_SET_USED(in_op_data);
+
+	return 0;
+}

I don't see why these functions are here. Is this contribution supposed to work, or is it a placeholder?
+
+static int
+fill_feca_desc_dec(struct bbdev_la12xx_q_priv *q_priv,
+		   struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
+		   struct rte_bbdev_dec_op *bbdev_dec_op,
+		   struct rte_bbdev_op_data *out_op_data)
+{
+	RTE_SET_USED(q_priv);
+	RTE_SET_USED(bbdev_ipc_op);
+	RTE_SET_USED(bbdev_dec_op);
+	RTE_SET_USED(out_op_data);
+
+	return 0;
+}
+
+static inline int
+is_bd_ring_full(uint32_t ci, uint32_t ci_flag,
+		uint32_t pi, uint32_t pi_flag)
+{
+	if (pi == ci) {
+		if (pi_flag != ci_flag)
+			return 1; /* Ring is Full */
+	}
+	return 0;
+}
+
+static inline int
+prepare_ldpc_enc_op(struct rte_bbdev_enc_op *bbdev_enc_op,
+		    struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
+		    struct bbdev_la12xx_q_priv *q_priv,
+		    struct rte_bbdev_op_data *in_op_data,
+		    struct rte_bbdev_op_data *out_op_data)
+{
+	struct rte_bbdev_op_ldpc_enc *ldpc_enc = &bbdev_enc_op->ldpc_enc;
+	uint32_t total_out_bits;
+	int ret;
+
+	total_out_bits = (ldpc_enc->tb_params.cab *
+			ldpc_enc->tb_params.ea) + (ldpc_enc->tb_params.c -
+			ldpc_enc->tb_params.cab) * ldpc_enc->tb_params.eb;
+

This includes rate matching; see the previous comment on the capability. Also, I see it would not support a partial TB as defined in the documentation and API (r != 0).

+	ldpc_enc->output.length = (total_out_bits + 7) / 8;
+
+	ret = fill_feca_desc_enc(q_priv, bbdev_ipc_op,
+			bbdev_enc_op, in_op_data);
+	if (ret) {
+		BBDEV_LA12XX_PMD_ERR(
+			"fill_feca_desc_enc failed, ret: %d", ret);
+		return ret;
+	}
+
+	rte_pktmbuf_append(out_op_data->data, ldpc_enc->output.length);
+
+	return 0;
+}
+
+static inline int
+prepare_ldpc_dec_op(struct rte_bbdev_dec_op *bbdev_dec_op,
+		    struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
+		    struct bbdev_la12xx_q_priv *q_priv,
+		    struct rte_bbdev_op_data *out_op_data)
+{
+	struct rte_bbdev_op_ldpc_dec *ldpc_dec = &bbdev_dec_op->ldpc_dec;
+	uint32_t total_out_bits;
+	uint32_t num_code_blocks = 0;
+	uint16_t sys_cols;
+	int ret;
+
+	sys_cols = (ldpc_dec->basegraph == 1) ?
+			22 : 10;
+	if (ldpc_dec->tb_params.c == 1) {
+		total_out_bits = ((sys_cols * ldpc_dec->z_c) -
+				ldpc_dec->n_filler);
+		/* 5G-NR protocol uses 16 bit CRC when output packet
+		 * size <= 3824 (bits). Otherwise 24 bit CRC is used.
+		 * Adjust the output bits accordingly
+		 */
+		if (total_out_bits - 16 <= 3824)
+			total_out_bits -= 16;
+		else
+			total_out_bits -= 24;
+		ldpc_dec->hard_output.length = (total_out_bits / 8);
+	} else {
+		total_out_bits = (((sys_cols * ldpc_dec->z_c) -
+				ldpc_dec->n_filler - 24) *
+				ldpc_dec->tb_params.c);
+		ldpc_dec->hard_output.length = (total_out_bits / 8) - 3;

It would probably be good to replace the magic numbers 24 and 3 here with named constants.

+	}
+
+	num_code_blocks = ldpc_dec->tb_params.c;
+
+	bbdev_ipc_op->num_code_blocks = rte_cpu_to_be_32(num_code_blocks);
+
+	ret = fill_feca_desc_dec(q_priv, bbdev_ipc_op,
+			bbdev_dec_op, out_op_data);
+	if (ret) {
+		BBDEV_LA12XX_PMD_ERR("fill_feca_desc_dec failed, ret: %d", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+enqueue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *bbdev_op)
+{
+	struct bbdev_la12xx_private *priv = q_priv->bbdev_priv;
+	ipc_userspace_t *ipc_priv = priv->ipc_priv;
+	ipc_instance_t *ipc_instance = ipc_priv->instance;
+	struct bbdev_ipc_dequeue_op *bbdev_ipc_op;
+	struct rte_bbdev_op_ldpc_enc *ldpc_enc;
+	struct rte_bbdev_op_ldpc_dec *ldpc_dec;
+	uint32_t q_id = q_priv->q_id;
+	uint32_t ci, ci_flag, pi, pi_flag;
+	ipc_ch_t *ch = &(ipc_instance->ch_list[q_id]);
+	ipc_br_md_t *md = &(ch->md);
+	size_t virt;
+	char *huge_start_addr =
+		(char *)q_priv->bbdev_priv->ipc_priv->hugepg_start.host_vaddr;
+	struct rte_bbdev_op_data *in_op_data, *out_op_data;
+	char *data_ptr;
+	uint32_t l1_pcie_addr;
+	int ret;
+	uint32_t temp_ci;
+
+	temp_ci = q_priv->host_params->ci;
+	ci = IPC_GET_CI_INDEX(temp_ci);
+	ci_flag = IPC_GET_CI_FLAG(temp_ci);
+
+	pi = IPC_GET_PI_INDEX(q_priv->host_pi);
+	pi_flag = IPC_GET_PI_FLAG(q_priv->host_pi);
+
+	BBDEV_LA12XX_PMD_DP_DEBUG(
+		"before bd_ring_full: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
+		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
+
+	if (is_bd_ring_full(ci, ci_flag, pi, pi_flag)) {
+		BBDEV_LA12XX_PMD_DP_DEBUG(
+				"bd ring full for queue id: %d", q_id);
+		return IPC_CH_FULL;
+	}
+
+	virt = MODEM_P2V(q_priv->host_params->modem_ptr[pi]);
+	bbdev_ipc_op = (struct bbdev_ipc_dequeue_op *)virt;
+	q_priv->bbdev_op[pi] = bbdev_op;
+
+	switch (q_priv->op_type) {
+	case RTE_BBDEV_OP_LDPC_ENC:
+		ldpc_enc = &(((struct rte_bbdev_enc_op *)bbdev_op)->ldpc_enc);
+		in_op_data = &ldpc_enc->input;
+		out_op_data = &ldpc_enc->output;
+
+		ret = prepare_ldpc_enc_op(bbdev_op, bbdev_ipc_op, q_priv,
+				in_op_data, out_op_data);
+		if (ret) {
+			BBDEV_LA12XX_PMD_ERR(
+				"process_ldpc_enc_op failed, ret: %d", ret);
+			return ret;
+		}
+		break;
+
+	case RTE_BBDEV_OP_LDPC_DEC:
+		ldpc_dec = &(((struct rte_bbdev_dec_op *)bbdev_op)->ldpc_dec);
+		in_op_data = &ldpc_dec->input;
+
+		out_op_data = &ldpc_dec->hard_output;
+
+		ret = prepare_ldpc_dec_op(bbdev_op, bbdev_ipc_op,
+				q_priv, out_op_data);
+		if (ret) {
+			BBDEV_LA12XX_PMD_ERR(
+				"process_ldpc_dec_op failed, ret: %d", ret);
+			return ret;
+		}
+		break;
+
+	default:
+		BBDEV_LA12XX_PMD_ERR("unsupported bbdev_ipc op type");
+		return -1;
+	}
+
+	if (in_op_data->data) {
+		data_ptr = rte_pktmbuf_mtod(in_op_data->data, char *);
+		l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR +
+			       data_ptr - huge_start_addr;
+		bbdev_ipc_op->in_addr = l1_pcie_addr;
+		bbdev_ipc_op->in_len = in_op_data->length;
+	}
+
+	if (out_op_data->data) {
+		data_ptr = rte_pktmbuf_mtod(out_op_data->data, char *);
+		l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR +
+			       data_ptr - huge_start_addr;
+		bbdev_ipc_op->out_addr = rte_cpu_to_be_32(l1_pcie_addr);
+		bbdev_ipc_op->out_len = rte_cpu_to_be_32(out_op_data->length);
+	}
+
+	/* Move Producer Index forward */
+	pi++;
+	/* Flip the PI flag, if wrapping */
+	if
 (unlikely(q_priv->queue_size == pi)) {
+		pi = 0;
+		pi_flag = pi_flag ? 0 : 1;
+	}
+
+	if (pi_flag)
+		IPC_SET_PI_FLAG(pi);
+	else
+		IPC_RESET_PI_FLAG(pi);
+	/* Wait for Data Copy & pi_flag update to complete before updating pi */
+	rte_mb();
+	/* now update pi */
+	md->pi = rte_cpu_to_be_32(pi);
+	q_priv->host_pi = pi;
+
+	BBDEV_LA12XX_PMD_DP_DEBUG(
+		"enter: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
+		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
+
+	return 0;
+}
+
+/* Enqueue decode burst */
+static uint16_t
+enqueue_dec_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	int nb_enqueued, ret;
+
+	for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) {
+		ret = enqueue_single_op(q_priv, ops[nb_enqueued]);
+		if (ret)
+			break;
+	}
+
+	q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
+	q_data->queue_stats.enqueued_count += nb_enqueued;
+
+	return nb_enqueued;
+}
+
+/* Enqueue encode burst */
+static uint16_t
+enqueue_enc_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	int nb_enqueued, ret;
+
+	for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) {
+		ret = enqueue_single_op(q_priv, ops[nb_enqueued]);
+		if (ret)
+			break;
+	}
+
+	q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
+	q_data->queue_stats.enqueued_count += nb_enqueued;
+
+	return nb_enqueued;
+}
+
+static inline int
+is_bd_ring_empty(uint32_t ci, uint32_t ci_flag,
+		 uint32_t pi, uint32_t pi_flag)
+{
+	if (ci == pi) {
+		if (ci_flag == pi_flag)
+			return 1; /* No more Buffer */
+	}
+	return 0;
+}
+
+/* Dequeue encode burst */
+static void *
+dequeue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *dst)
+{
+	struct bbdev_la12xx_private *priv = q_priv->bbdev_priv;
+	ipc_userspace_t *ipc_priv =
 priv->ipc_priv;
+	uint32_t q_id = q_priv->q_id + HOST_RX_QUEUEID_OFFSET;
+	ipc_instance_t *ipc_instance = ipc_priv->instance;
+	ipc_ch_t *ch = &(ipc_instance->ch_list[q_id]);
+	uint32_t ci, ci_flag, pi, pi_flag;
+	ipc_br_md_t *md;
+	void *op;
+	uint32_t temp_pi;
+
+	md = &(ch->md);
+	ci = IPC_GET_CI_INDEX(q_priv->host_ci);
+	ci_flag = IPC_GET_CI_FLAG(q_priv->host_ci);
+
+	temp_pi = q_priv->host_params->pi;
+	pi = IPC_GET_PI_INDEX(temp_pi);
+	pi_flag = IPC_GET_PI_FLAG(temp_pi);
+
+	if (is_bd_ring_empty(ci, ci_flag, pi, pi_flag))
+		return NULL;
+
+	BBDEV_LA12XX_PMD_DP_DEBUG(
+		"pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
+		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
+
+	op = q_priv->bbdev_op[ci];
+
+	rte_memcpy(dst, q_priv->msg_ch_vaddr[ci],
+		sizeof(struct bbdev_ipc_enqueue_op));
+
+	/* Move Consumer Index forward */
+	ci++;
+	/* Flip the CI flag, if wrapping */
+	if (q_priv->queue_size == ci) {
+		ci = 0;
+		ci_flag = ci_flag ? 0 : 1;
+	}
+	if (ci_flag)
+		IPC_SET_CI_FLAG(ci);
+	else
+		IPC_RESET_CI_FLAG(ci);
+	md->ci = rte_cpu_to_be_32(ci);
+	q_priv->host_ci = ci;
+
+	BBDEV_LA12XX_PMD_DP_DEBUG(
+		"exit: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
+		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
+

So you don't use any of the BBDEV status flags to report CRC and syndrome parity check in the response?
+	return op;
+}
+
+/* Dequeue decode burst */
+static uint16_t
+dequeue_dec_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	struct bbdev_ipc_enqueue_op bbdev_ipc_op;
+	int nb_dequeued;
+
+	for (nb_dequeued = 0; nb_dequeued < nb_ops; nb_dequeued++) {
+		ops[nb_dequeued] = dequeue_single_op(q_priv, &bbdev_ipc_op);
+		if (!ops[nb_dequeued])
+			break;
+		ops[nb_dequeued]->status = bbdev_ipc_op.status;
+	}
+	q_data->queue_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+/* Dequeue encode burst */
+static uint16_t
+dequeue_enc_ops(struct rte_bbdev_queue_data *q_data,
+		struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
+{
+	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
+	struct bbdev_ipc_enqueue_op bbdev_ipc_op;
+	int nb_enqueued;
+
+	for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) {
+		ops[nb_enqueued] = dequeue_single_op(q_priv, &bbdev_ipc_op);
+		if (!ops[nb_enqueued])
+			break;
+		ops[nb_enqueued]->status = bbdev_ipc_op.status;
+	}
+	q_data->queue_stats.enqueued_count += nb_enqueued;
+
+	return nb_enqueued;
+}
+
 static struct hugepage_info *
 get_hugepage_info(void)
 {
@@ -720,10 +1105,14 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev,
 	bbdev->intr_handle = NULL;
 
 	/* register rx/tx burst functions for data path */
-	bbdev->dequeue_enc_ops = NULL;
-	bbdev->dequeue_dec_ops = NULL;
-	bbdev->enqueue_enc_ops = NULL;
-	bbdev->enqueue_dec_ops = NULL;
+	bbdev->dequeue_enc_ops = dequeue_enc_ops;
+	bbdev->dequeue_dec_ops = dequeue_dec_ops;
+	bbdev->enqueue_enc_ops = enqueue_enc_ops;
+	bbdev->enqueue_dec_ops = enqueue_dec_ops;

The ones above are used for 4G operations; since that capability is not there, they can be NULL.
+	bbdev->dequeue_ldpc_enc_ops = dequeue_enc_ops;
+	bbdev->dequeue_ldpc_dec_ops = dequeue_dec_ops;
+	bbdev->enqueue_ldpc_enc_ops = enqueue_enc_ops;
+	bbdev->enqueue_ldpc_dec_ops = enqueue_dec_ops;
 
 	return 0;
 }
diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
index 9d5789f726..4e181e9254 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
+++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
@@ -76,6 +76,25 @@ typedef struct {
 	_IOWR(GUL_IPC_MAGIC, 5, struct ipc_msg *)
 #define IOCTL_GUL_IPC_CHANNEL_RAISE_INTERRUPT _IOW(GUL_IPC_MAGIC, 6, int *)
 
+#define GUL_USER_HUGE_PAGE_OFFSET	(0)
+#define GUL_PCI1_ADDR_BASE	(0x00000000ULL)
+
+#define GUL_USER_HUGE_PAGE_ADDR	(GUL_PCI1_ADDR_BASE + GUL_USER_HUGE_PAGE_OFFSET)
+
+/* IPC PI/CI index & flag manipulation helpers */
+#define IPC_PI_CI_FLAG_MASK	0x80000000 /* (1<<31) */
+#define IPC_PI_CI_INDEX_MASK	0x7FFFFFFF /* ~(1<<31) */
+
+#define IPC_SET_PI_FLAG(x)	(x |= IPC_PI_CI_FLAG_MASK)
+#define IPC_RESET_PI_FLAG(x)	(x &= IPC_PI_CI_INDEX_MASK)
+#define IPC_GET_PI_FLAG(x)	(x >> 31)
+#define IPC_GET_PI_INDEX(x)	(x & IPC_PI_CI_INDEX_MASK)
+
+#define IPC_SET_CI_FLAG(x)	(x |= IPC_PI_CI_FLAG_MASK)
+#define IPC_RESET_CI_FLAG(x)	(x &= IPC_PI_CI_INDEX_MASK)
+#define IPC_GET_CI_FLAG(x)	(x >> 31)
+#define IPC_GET_CI_INDEX(x)	(x & IPC_PI_CI_INDEX_MASK)
+
 /** buffer ring common metadata */
 typedef struct ipc_bd_ring_md {
 	volatile uint32_t pi;	/**< Producer index and flag (MSB)
@@ -173,6 +192,24 @@ struct bbdev_ipc_enqueue_op {
 	uint32_t rsvd;
 };
 
+/** Structure specifying dequeue operation (dequeue at LA1224) */
+struct bbdev_ipc_dequeue_op {
+	/** Input buffer memory address */
+	uint32_t in_addr;
+	/** Input buffer memory length */
+	uint32_t in_len;
+	/** Output buffer memory address */
+	uint32_t out_addr;
+	/** Output buffer memory length */
+	uint32_t out_len;
+	/* Number of code blocks.
+	 * Only set when HARQ is used */
+	uint32_t num_code_blocks;
+	/** Dequeue Operation flags */
+	uint32_t op_flags;
+	/** Shared metadata between L1 and L2 */
+	uint32_t shared_metadata;
+};
+
 /* This shared memory would be on the host side which have copy of some
  * of the parameters which are also part of Shared BD ring. Read access
  * of these parameters from the host side would not be over PCI.
-- 
2.17.1