From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
To: "Chautru, Nicolas" <nicolas.chautru@intel.com>,
 Hemant Agrawal <hemant.agrawal@nxp.com>, "dev@dpdk.org" <dev@dpdk.org>,
 "gakhil@marvell.com" <gakhil@marvell.com>
Cc: "david.marchand@redhat.com" <david.marchand@redhat.com>,
 Nipun Gupta <nipun.gupta@nxp.com>
References: <20210410170252.4587-1-hemant.agrawal@nxp.com>
 <20210413051715.26430-1-hemant.agrawal@nxp.com>
 <20210413051715.26430-6-hemant.agrawal@nxp.com>
 <BY5PR11MB4451DECA401BD81E62883BFBF84E9@BY5PR11MB4451.namprd11.prod.outlook.com>
From: Hemant Agrawal <hemant.agrawal@oss.nxp.com>
Message-ID: <3f8fd0cd-03f6-8f25-0bff-cf36eb134c6c@oss.nxp.com>
Date: Wed, 14 Apr 2021 17:36:52 +0530
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.9.1
In-Reply-To: <BY5PR11MB4451DECA401BD81E62883BFBF84E9@BY5PR11MB4451.namprd11.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v3 5/8] baseband/la12xx: add enqueue and
 dequeue support
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Reply-To: hemant.agrawal@nxp.com
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>


On 4/14/2021 6:23 AM, Chautru, Nicolas wrote:
> Add support to enqueue and dequeue LDPC enc/dec operations to/from the modem device.
>
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>   drivers/baseband/la12xx/bbdev_la12xx.c     | 397 ++++++++++++++++++++-
>   drivers/baseband/la12xx/bbdev_la12xx_ipc.h |  37 ++
>   2 files changed, 430 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c
> index 0a68686205..d1040987b2 100644
> --- a/drivers/baseband/la12xx/bbdev_la12xx.c
> +++ b/drivers/baseband/la12xx/bbdev_la12xx.c
> @@ -117,6 +117,10 @@ la12xx_queue_release(struct rte_bbdev *dev, uint16_t q_id)
>   		((uint64_t) ((unsigned long) (A) \
>   		- ((uint64_t)ipc_priv->hugepg_start.host_vaddr)))
>   
> +#define MODEM_P2V(A) \
> +	((uint64_t) ((unsigned long) (A) \
> +		+ (unsigned long)(ipc_priv->peb_start.host_vaddr)))
> +
>   static int ipc_queue_configure(uint32_t channel_id,
>   		ipc_t instance, struct bbdev_la12xx_q_priv *q_priv)
>   {
> @@ -345,6 +349,387 @@ static const struct rte_bbdev_ops pmd_ops = {
>   	.queue_release = la12xx_queue_release,
>   	.start = la12xx_start
>   };
> +
> +static int
> +fill_feca_desc_enc(struct bbdev_la12xx_q_priv *q_priv,
> +		   struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
> +		   struct rte_bbdev_enc_op *bbdev_enc_op,
> +		   struct rte_bbdev_op_data *in_op_data)
> +{
> +	RTE_SET_USED(q_priv);
> +	RTE_SET_USED(bbdev_ipc_op);
> +	RTE_SET_USED(bbdev_enc_op);
> +	RTE_SET_USED(in_op_data);
> +
> +	return 0;
> +}
>
> I don't see why these functions are here.
> Is this contribution supposed to work, or is it a placeholder?

It is currently a placeholder for some optimization work we are doing.

i.e. to convert the bbdev params to our hardware format on the host side itself.

We will remove these for now.
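
For reference, the rough direction is something like the below. The 
descriptor layout is purely illustrative (it is not the real FECA 
format); only the rte_bbdev_op_ldpc_enc fields are actual API fields:

#include <rte_bbdev_op.h>

/* Purely illustrative host-side descriptor - not the real FECA layout */
struct feca_enc_desc_sketch {
	uint32_t basegraph;
	uint32_t z_c;
	uint32_t q_m;
	uint32_t n_filler;
	uint32_t rv_index;
};

/* Sketch: pre-cook the hardware descriptor on the host so that the
 * modem firmware does not have to parse rte_bbdev_op_ldpc_enc itself.
 */
static void
fill_feca_desc_enc_sketch(struct feca_enc_desc_sketch *desc,
			  const struct rte_bbdev_op_ldpc_enc *ldpc_enc)
{
	desc->basegraph = ldpc_enc->basegraph;
	desc->z_c = ldpc_enc->z_c;
	desc->q_m = ldpc_enc->q_m;
	desc->n_filler = ldpc_enc->n_filler;
	desc->rv_index = ldpc_enc->rv_index;
}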

>
> +
> +static int
> +fill_feca_desc_dec(struct bbdev_la12xx_q_priv *q_priv,
> +		   struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
> +		   struct rte_bbdev_dec_op *bbdev_dec_op,
> +		   struct rte_bbdev_op_data *out_op_data)
> +{
> +	RTE_SET_USED(q_priv);
> +	RTE_SET_USED(bbdev_ipc_op);
> +	RTE_SET_USED(bbdev_dec_op);
> +	RTE_SET_USED(out_op_data);
> +
> +	return 0;
> +}
> +
> +static inline int
> +is_bd_ring_full(uint32_t ci, uint32_t ci_flag,
> +		uint32_t pi, uint32_t pi_flag)
> +{
> +	if (pi == ci) {
> +		if (pi_flag != ci_flag)
> +			return 1; /* Ring is Full */
> +	}
> +	return 0;
> +}
> +
> +static inline int
> +prepare_ldpc_enc_op(struct rte_bbdev_enc_op *bbdev_enc_op,
> +		    struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
> +		    struct bbdev_la12xx_q_priv *q_priv,
> +		    struct rte_bbdev_op_data *in_op_data,
> +		    struct rte_bbdev_op_data *out_op_data)
> +{
> +	struct rte_bbdev_op_ldpc_enc *ldpc_enc = &bbdev_enc_op->ldpc_enc;
> +	uint32_t total_out_bits;
> +	int ret;
> +
> +	total_out_bits = (ldpc_enc->tb_params.cab *
> +		ldpc_enc->tb_params.ea) + (ldpc_enc->tb_params.c -
> +		ldpc_enc->tb_params.cab) * ldpc_enc->tb_params.eb;
> +
>
> This includes rate matching; see the previous comment on capabilities.
>
> Also, I see it would not support a partial TB as defined in the documentation and API (r != 0).
Not yet.
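
For when we add it, a minimal sketch of the output-size calculation for 
a partial TB, assuming the usual bbdev meaning of r as the index of the 
first CB being processed (r == 0 is the full TB):

#include <stdint.h>

/* Sketch only: encoder output size in bits when the enqueued TB is
 * partial, i.e. starts at code block index r.
 * CBs with index < cab use Ea bits, the remaining CBs use Eb bits.
 */
static inline uint32_t
ldpc_enc_out_bits(uint32_t c, uint32_t cab, uint32_t r,
		  uint32_t ea, uint32_t eb)
{
	uint32_t cbs_ea = (r < cab) ? (cab - r) : 0;
	uint32_t cbs_eb = c - ((r > cab) ? r : cab);

	return cbs_ea * ea + cbs_eb * eb;
}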
>
> +	ldpc_enc->output.length = (total_out_bits + 7)/8;
> +
> +	ret = fill_feca_desc_enc(q_priv, bbdev_ipc_op,
> +				 bbdev_enc_op, in_op_data);
> +	if (ret) {
> +		BBDEV_LA12XX_PMD_ERR(
> +			"fill_feca_desc_enc failed, ret: %d", ret);
> +		return ret;
> +	}
> +
> +	rte_pktmbuf_append(out_op_data->data, ldpc_enc->output.length);
> +
> +	return 0;
> +}
> +
> +static inline int
> +prepare_ldpc_dec_op(struct rte_bbdev_dec_op *bbdev_dec_op,
> +		    struct bbdev_ipc_dequeue_op *bbdev_ipc_op,
> +		    struct bbdev_la12xx_q_priv *q_priv,
> +		    struct rte_bbdev_op_data *out_op_data)
> +{
> +	struct rte_bbdev_op_ldpc_dec *ldpc_dec = &bbdev_dec_op->ldpc_dec;
> +	uint32_t total_out_bits;
> +	uint32_t num_code_blocks = 0;
> +	uint16_t sys_cols;
> +	int ret;
> +
> +	sys_cols = (ldpc_dec->basegraph == 1) ? 22 : 10;
> +	if (ldpc_dec->tb_params.c == 1) {
> +		total_out_bits = ((sys_cols * ldpc_dec->z_c) -
> +				ldpc_dec->n_filler);
> +		/* 5G-NR protocol uses 16 bit CRC when output packet
> +		 * size <= 3824 (bits). Otherwise 24 bit CRC is used.
> +		 * Adjust the output bits accordingly
> +		 */
> +		if (total_out_bits - 16 <= 3824)
> +			total_out_bits -= 16;
> +		else
> +			total_out_bits -= 24;
> +		ldpc_dec->hard_output.length = (total_out_bits / 8);
> +	} else {
> +		total_out_bits = (((sys_cols * ldpc_dec->z_c) -
> +				ldpc_dec->n_filler - 24) *
> +				ldpc_dec->tb_params.c);
> +		ldpc_dec->hard_output.length = (total_out_bits / 8) - 3;
>
> Probably good to remove the magic numbers 24 and 3 here.
ok
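
Something along these lines (the constant names are only suggestions):

#include <stdint.h>

#define LDPC_CRC_24B_LEN_BITS	24	/* per-CB CRC24B stripped by the decoder */
#define LDPC_CRC_24A_LEN_BYTES	3	/* TB-level CRC24A, 24 bits */

/* Sketch: hard-output length in bytes for a multi-CB TB, with the
 * magic numbers replaced by named constants.
 */
static inline uint32_t
ldpc_dec_tb_out_len(uint32_t sys_cols, uint32_t z_c,
		    uint32_t n_filler, uint32_t c)
{
	uint32_t total_out_bits = ((sys_cols * z_c) - n_filler -
				   LDPC_CRC_24B_LEN_BITS) * c;

	return (total_out_bits / 8) - LDPC_CRC_24A_LEN_BYTES;
}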
>
> +	}
> +
> +	num_code_blocks = ldpc_dec->tb_params.c;
> +
> +	bbdev_ipc_op->num_code_blocks = rte_cpu_to_be_32(num_code_blocks);
> +
> +	ret = fill_feca_desc_dec(q_priv, bbdev_ipc_op,
> +				 bbdev_dec_op, out_op_data);
> +	if (ret) {
> +		BBDEV_LA12XX_PMD_ERR("fill_feca_desc_dec failed, ret: %d", ret);
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +enqueue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *bbdev_op)
> +{
> +	struct bbdev_la12xx_private *priv = q_priv->bbdev_priv;
> +	ipc_userspace_t *ipc_priv = priv->ipc_priv;
> +	ipc_instance_t *ipc_instance = ipc_priv->instance;
> +	struct bbdev_ipc_dequeue_op *bbdev_ipc_op;
> +	struct rte_bbdev_op_ldpc_enc *ldpc_enc;
> +	struct rte_bbdev_op_ldpc_dec *ldpc_dec;
> +	uint32_t q_id = q_priv->q_id;
> +	uint32_t ci, ci_flag, pi, pi_flag;
> +	ipc_ch_t *ch = &(ipc_instance->ch_list[q_id]);
> +	ipc_br_md_t *md = &(ch->md);
> +	size_t virt;
> +	char *huge_start_addr =
> +		(char *)q_priv->bbdev_priv->ipc_priv->hugepg_start.host_vaddr;
> +	struct rte_bbdev_op_data *in_op_data, *out_op_data;
> +	char *data_ptr;
> +	uint32_t l1_pcie_addr;
> +	int ret;
> +	uint32_t temp_ci;
> +
> +	temp_ci = q_priv->host_params->ci;
> +	ci = IPC_GET_CI_INDEX(temp_ci);
> +	ci_flag = IPC_GET_CI_FLAG(temp_ci);
> +
> +	pi = IPC_GET_PI_INDEX(q_priv->host_pi);
> +	pi_flag = IPC_GET_PI_FLAG(q_priv->host_pi);
> +
> +	BBDEV_LA12XX_PMD_DP_DEBUG(
> +		"before bd_ring_full: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
> +		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
> +
> +	if (is_bd_ring_full(ci, ci_flag, pi, pi_flag)) {
> +		BBDEV_LA12XX_PMD_DP_DEBUG(
> +				"bd ring full for queue id: %d", q_id);
> +		return IPC_CH_FULL;
> +	}
> +
> +	virt = MODEM_P2V(q_priv->host_params->modem_ptr[pi]);
> +	bbdev_ipc_op = (struct bbdev_ipc_dequeue_op *)virt;
> +	q_priv->bbdev_op[pi] = bbdev_op;
> +
> +	switch (q_priv->op_type) {
> +	case RTE_BBDEV_OP_LDPC_ENC:
> +		ldpc_enc = &(((struct rte_bbdev_enc_op *)bbdev_op)->ldpc_enc);
> +		in_op_data = &ldpc_enc->input;
> +		out_op_data = &ldpc_enc->output;
> +
> +		ret = prepare_ldpc_enc_op(bbdev_op, bbdev_ipc_op, q_priv,
> +					  in_op_data, out_op_data);
> +		if (ret) {
> +			BBDEV_LA12XX_PMD_ERR(
> +				"process_ldpc_enc_op failed, ret: %d", ret);
> +			return ret;
> +		}
> +		break;
> +
> +	case RTE_BBDEV_OP_LDPC_DEC:
> +		ldpc_dec = &(((struct rte_bbdev_dec_op *)bbdev_op)->ldpc_dec);
> +		in_op_data = &ldpc_dec->input;
> +
> +		out_op_data = &ldpc_dec->hard_output;
> +
> +		ret = prepare_ldpc_dec_op(bbdev_op, bbdev_ipc_op,
> +					  q_priv, out_op_data);
> +		if (ret) {
> +			BBDEV_LA12XX_PMD_ERR(
> +				"process_ldpc_dec_op failed, ret: %d", ret);
> +			return ret;
> +		}
> +		break;
> +
> +	default:
> +		BBDEV_LA12XX_PMD_ERR("unsupported bbdev_ipc op type");
> +		return -1;
> +	}
> +
> +	if (in_op_data->data) {
> +		data_ptr = rte_pktmbuf_mtod(in_op_data->data, char *);
> +		l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR +
> +			       data_ptr - huge_start_addr;
> +		bbdev_ipc_op->in_addr = l1_pcie_addr;
> +		bbdev_ipc_op->in_len = in_op_data->length;
> +	}
> +
> +	if (out_op_data->data) {
> +		data_ptr = rte_pktmbuf_mtod(out_op_data->data, char *);
> +		l1_pcie_addr = (uint32_t)GUL_USER_HUGE_PAGE_ADDR +
> +				data_ptr - huge_start_addr;
> +		bbdev_ipc_op->out_addr = rte_cpu_to_be_32(l1_pcie_addr);
> +		bbdev_ipc_op->out_len = rte_cpu_to_be_32(out_op_data->length);
> +	}
> +
> +	/* Move Producer Index forward */
> +	pi++;
> +	/* Flip the PI flag, if wrapping */
> +	if (unlikely(q_priv->queue_size == pi)) {
> +		pi = 0;
> +		pi_flag = pi_flag ? 0 : 1;
> +	}
> +
> +	if (pi_flag)
> +		IPC_SET_PI_FLAG(pi);
> +	else
> +		IPC_RESET_PI_FLAG(pi);
> +	/* Wait for Data Copy & pi_flag update to complete before updating pi */
> +	rte_mb();
> +	/* now update pi */
> +	md->pi = rte_cpu_to_be_32(pi);
> +	q_priv->host_pi = pi;
> +
> +	BBDEV_LA12XX_PMD_DP_DEBUG(
> +			"enter: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
> +			pi, ci, pi_flag, ci_flag, q_priv->queue_size);
> +
> +	return 0;
> +}
> +
> +/* Enqueue decode burst */
> +static uint16_t
> +enqueue_dec_ops(struct rte_bbdev_queue_data *q_data,
> +		struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
> +{
> +	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
> +	int nb_enqueued, ret;
> +
> +	for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) {
> +		ret = enqueue_single_op(q_priv, ops[nb_enqueued]);
> +		if (ret)
> +			break;
> +	}
> +
> +	q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
> +	q_data->queue_stats.enqueued_count += nb_enqueued;
> +
> +	return nb_enqueued;
> +}
> +
> +/* Enqueue encode burst */
> +static uint16_t
> +enqueue_enc_ops(struct rte_bbdev_queue_data *q_data,
> +		struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
> +{
> +	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
> +	int nb_enqueued, ret;
> +
> +	for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) {
> +		ret = enqueue_single_op(q_priv, ops[nb_enqueued]);
> +		if (ret)
> +			break;
> +	}
> +
> +	q_data->queue_stats.enqueue_err_count += nb_ops - nb_enqueued;
> +	q_data->queue_stats.enqueued_count += nb_enqueued;
> +
> +	return nb_enqueued;
> +}
> +
> +static inline int
> +is_bd_ring_empty(uint32_t ci, uint32_t ci_flag,
> +		 uint32_t pi, uint32_t pi_flag)
> +{
> +	if (ci == pi) {
> +		if (ci_flag == pi_flag)
> +			return 1; /* No more Buffer */
> +	}
> +	return 0;
> +}
> +
> +/* Dequeue a single operation */
> +static void *
> +dequeue_single_op(struct bbdev_la12xx_q_priv *q_priv, void *dst)
> +{
> +	struct bbdev_la12xx_private *priv = q_priv->bbdev_priv;
> +	ipc_userspace_t *ipc_priv = priv->ipc_priv;
> +	uint32_t q_id = q_priv->q_id + HOST_RX_QUEUEID_OFFSET;
> +	ipc_instance_t *ipc_instance = ipc_priv->instance;
> +	ipc_ch_t *ch = &(ipc_instance->ch_list[q_id]);
> +	uint32_t ci, ci_flag, pi, pi_flag;
> +	ipc_br_md_t *md;
> +	void *op;
> +	uint32_t temp_pi;
> +
> +	md = &(ch->md);
> +	ci = IPC_GET_CI_INDEX(q_priv->host_ci);
> +	ci_flag = IPC_GET_CI_FLAG(q_priv->host_ci);
> +
> +	temp_pi = q_priv->host_params->pi;
> +	pi = IPC_GET_PI_INDEX(temp_pi);
> +	pi_flag = IPC_GET_PI_FLAG(temp_pi);
> +
> +	if (is_bd_ring_empty(ci, ci_flag, pi, pi_flag))
> +		return NULL;
> +
> +	BBDEV_LA12XX_PMD_DP_DEBUG(
> +		"pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
> +		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
> +
> +	op = q_priv->bbdev_op[ci];
> +
> +	rte_memcpy(dst, q_priv->msg_ch_vaddr[ci],
> +		sizeof(struct bbdev_ipc_enqueue_op));
> +
> +	/* Move Consumer Index forward */
> +	ci++;
> +	/* Flip the CI flag, if wrapping */
> +	if (q_priv->queue_size == ci) {
> +		ci = 0;
> +		ci_flag = ci_flag ? 0 : 1;
> +	}
> +	if (ci_flag)
> +		IPC_SET_CI_FLAG(ci);
> +	else
> +		IPC_RESET_CI_FLAG(ci);
> +	md->ci = rte_cpu_to_be_32(ci);
> +	q_priv->host_ci = ci;
> +
> +	BBDEV_LA12XX_PMD_DP_DEBUG(
> +		"exit: pi: %u, ci: %u, pi_flag: %u, ci_flag: %u, ring size: %u",
> +		pi, ci, pi_flag, ci_flag, q_priv->queue_size);
> +
>
> So you don't use any of the BBDEV flags to report CRC and syndrome parity check in the response?
That will be supported in the next version.
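
Something like this in the dequeue path; the IPC-side bit names and 
values below are placeholders, only the bbdev status bits are the 
standard ones from rte_bbdev_op.h:

#include <rte_bbdev_op.h>

/* Placeholder IPC status bits - illustrative values only */
#define BBDEV_IPC_STATUS_CRC_ERR	(1 << 0)
#define BBDEV_IPC_STATUS_SYND_ERR	(1 << 1)

/* Sketch: translate the modem status word into bbdev op status flags */
static inline void
set_ldpc_dec_op_status(struct rte_bbdev_dec_op *op, uint32_t ipc_status)
{
	op->status = 0;
	if (ipc_status & BBDEV_IPC_STATUS_CRC_ERR)
		op->status |= 1 << RTE_BBDEV_CRC_ERROR;
	if (ipc_status & BBDEV_IPC_STATUS_SYND_ERR)
		op->status |= 1 << RTE_BBDEV_SYNDROME_ERROR;
}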
>
> +	return op;
> +}
> +
> +/* Dequeue decode burst */
> +static uint16_t
> +dequeue_dec_ops(struct rte_bbdev_queue_data *q_data,
> +		struct rte_bbdev_dec_op **ops, uint16_t nb_ops)
> +{
> +	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
> +	struct bbdev_ipc_enqueue_op bbdev_ipc_op;
> +	int nb_dequeued;
> +
> +	for (nb_dequeued = 0; nb_dequeued < nb_ops; nb_dequeued++) {
> +		ops[nb_dequeued] = dequeue_single_op(q_priv, &bbdev_ipc_op);
> +		if (!ops[nb_dequeued])
> +			break;
> +		ops[nb_dequeued]->status = bbdev_ipc_op.status;
> +	}
> +	q_data->queue_stats.dequeued_count += nb_dequeued;
> +
> +	return nb_dequeued;
> +}
> +
> +/* Dequeue encode burst */
> +static uint16_t
> +dequeue_enc_ops(struct rte_bbdev_queue_data *q_data,
> +		struct rte_bbdev_enc_op **ops, uint16_t nb_ops)
> +{
> +	struct bbdev_la12xx_q_priv *q_priv = q_data->queue_private;
> +	struct bbdev_ipc_enqueue_op bbdev_ipc_op;
> +	int nb_enqueued;
> +
> +	for (nb_enqueued = 0; nb_enqueued < nb_ops; nb_enqueued++) {
> +		ops[nb_enqueued] = dequeue_single_op(q_priv, &bbdev_ipc_op);
> +		if (!ops[nb_enqueued])
> +			break;
> +		ops[nb_enqueued]->status = bbdev_ipc_op.status;
> +	}
> +	q_data->queue_stats.dequeued_count += nb_enqueued;
> +
> +	return nb_enqueued;
> +}
> +
>   static struct hugepage_info *
>   get_hugepage_info(void)
>   {
> @@ -720,10 +1105,14 @@ la12xx_bbdev_create(struct rte_vdev_device *vdev,
>   	bbdev->intr_handle = NULL;
>   
>   	/* register rx/tx burst functions for data path */
> -	bbdev->dequeue_enc_ops = NULL;
> -	bbdev->dequeue_dec_ops = NULL;
> -	bbdev->enqueue_enc_ops = NULL;
> -	bbdev->enqueue_dec_ops = NULL;
> +	bbdev->dequeue_enc_ops = dequeue_enc_ops;
> +	bbdev->dequeue_dec_ops = dequeue_dec_ops;
> +	bbdev->enqueue_enc_ops = enqueue_enc_ops;
> +	bbdev->enqueue_dec_ops = enqueue_dec_ops;
>
> These above are used for 4G operations; since the capability is not there, they can be NULL.
>
> +	bbdev->dequeue_ldpc_enc_ops = dequeue_enc_ops;
> +	bbdev->dequeue_ldpc_dec_ops = dequeue_dec_ops;
> +	bbdev->enqueue_ldpc_enc_ops = enqueue_enc_ops;
> +	bbdev->enqueue_ldpc_dec_ops = enqueue_dec_ops;
>   
>   	return 0;
>   }
> diff --git a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
> index 9d5789f726..4e181e9254 100644
> --- a/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
> +++ b/drivers/baseband/la12xx/bbdev_la12xx_ipc.h
> @@ -76,6 +76,25 @@ typedef struct {
>   	_IOWR(GUL_IPC_MAGIC, 5, struct ipc_msg *)
>   #define IOCTL_GUL_IPC_CHANNEL_RAISE_INTERRUPT _IOW(GUL_IPC_MAGIC, 6, int *)
>   
> +#define GUL_USER_HUGE_PAGE_OFFSET	(0)
> +#define GUL_PCI1_ADDR_BASE	(0x00000000ULL)
> +
> +#define GUL_USER_HUGE_PAGE_ADDR	(GUL_PCI1_ADDR_BASE + GUL_USER_HUGE_PAGE_OFFSET)
> +
> +/* IPC PI/CI index & flag manipulation helpers */
> +#define IPC_PI_CI_FLAG_MASK	0x80000000 /*  (1<<31) */
> +#define IPC_PI_CI_INDEX_MASK	0x7FFFFFFF /* ~(1<<31) */
> +
> +#define IPC_SET_PI_FLAG(x)	(x |= IPC_PI_CI_FLAG_MASK)
> +#define IPC_RESET_PI_FLAG(x)	(x &= IPC_PI_CI_INDEX_MASK)
> +#define IPC_GET_PI_FLAG(x)	(x >> 31)
> +#define IPC_GET_PI_INDEX(x)	(x & IPC_PI_CI_INDEX_MASK)
> +
> +#define IPC_SET_CI_FLAG(x)	(x |= IPC_PI_CI_FLAG_MASK)
> +#define IPC_RESET_CI_FLAG(x)	(x &= IPC_PI_CI_INDEX_MASK)
> +#define IPC_GET_CI_FLAG(x)	(x >> 31)
> +#define IPC_GET_CI_INDEX(x)	(x & IPC_PI_CI_INDEX_MASK)
> +
>   /** buffer ring common metadata */
>   typedef struct ipc_bd_ring_md {
>   	volatile uint32_t pi;		/**< Producer index and flag (MSB)
> @@ -173,6 +192,24 @@ struct bbdev_ipc_enqueue_op {
>   	uint32_t rsvd;
>   };
>   
> +/** Structure specifying dequeue operation (dequeue at LA1224) */
> +struct bbdev_ipc_dequeue_op {
> +	/** Input buffer memory address */
> +	uint32_t in_addr;
> +	/** Input buffer memory length */
> +	uint32_t in_len;
> +	/** Output buffer memory address */
> +	uint32_t out_addr;
> +	/** Output buffer memory length */
> +	uint32_t out_len;
> +	/* Number of code blocks. Only set when HARQ is used */
> +	uint32_t num_code_blocks;
> +	/** Dequeue Operation flags */
> +	uint32_t op_flags;
> +	/** Shared metadata between L1 and L2 */
> +	uint32_t shared_metadata;
> +};
> +
>   /* This shared memory would be on the host side which have copy of some
>    * of the parameters which are also part of Shared BD ring. Read access
>    * of these parameters from the host side would not be over PCI.
> --
> 2.17.1
>