From: Jerin Jacob Kollanukkaran
To: "McDaniel, Timothy"
CC: "mattias.ronnblom@ericsson.com", "dev@dpdk.org", "gage.eads@intel.com", "harry.van.haaren@intel.com", Thomas Monjalon
Date: Tue, 11 Aug 2020 18:22:20 +0000
References: <1593232671-5690-0-git-send-email-timothy.mcdaniel@intel.com> <1596138614-17409-1-git-send-email-timothy.mcdaniel@intel.com> <1596138614-17409-4-git-send-email-timothy.mcdaniel@intel.com>
In-Reply-To: <1596138614-17409-4-git-send-email-timothy.mcdaniel@intel.com>
Subject: Re: [dpdk-dev] [EXT] [PATCH 03/27] event/dlb: add shared code version 10.7.9
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: McDaniel, Timothy
> Sent: Friday, July 31, 2020 1:20 AM
> To: Jerin Jacob Kollanukkaran
> Cc: mattias.ronnblom@ericsson.com; dev@dpdk.org; gage.eads@intel.com;
> harry.van.haaren@intel.com; McDaniel, Timothy
> Subject: [EXT] [PATCH 03/27] event/dlb: add shared code version 10.7.9

What is shared code?

> External Email
>
> ----------------------------------------------------------------------
> From: "McDaniel, Timothy"
>
> The DLB shared code is auto generated by Intel, and is being committed
> here so that it can be built in the DPDK environment. The shared code
> should not be modified. The shared code must be present in order to
> successfully build the DLB PMD.

Please sanitize the git commit log. Things like "Intel" and "auto generated" are all irrelevant in this git commit message context.

> Changes since v1 patch series
> 1) convert C99 comment to standard C
> 2) remove TODO and FIXME comments
> 3) converted to use same log i/f as PMD
> 4) disable PF->VF ISR pending access alarm
> 5) disable VF->PF ISR pending access alarm

Move the history to below line [1].

> Signed-off-by: McDaniel, Timothy
> ---

[1] Here

> drivers/event/dlb/pf/base/dlb_hw_types.h | 360 +
> drivers/event/dlb/pf/base/dlb_mbox.h | 645 ++
> drivers/event/dlb/pf/base/dlb_osdep.h | 347 +
> drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 442 ++
> drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
> drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
> drivers/event/dlb/pf/base/dlb_regs.h | 2678 +++++++
> drivers/event/dlb/pf/base/dlb_resource.c | 9722 ++++++++++++++++++++++++++

I don't like auto-generated code being part of DPDK. At least split the patches into proper logical patches. Please don't dump library code from some other place as one single patch.
> drivers/event/dlb/pf/base/dlb_resource.h | 1639 +++++
> drivers/event/dlb/pf/base/dlb_user.h | 1084 +++
> 10 files changed, 17079 insertions(+)
> create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_mbox.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
> create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_user.h
>
> diff --git a/drivers/event/dlb/pf/base/dlb_hw_types.h b/drivers/event/dlb/pf/base/dlb_hw_types.h
> new file mode 100644
> index 0000000..d56590e
> --- /dev/null
> +++ b/drivers/event/dlb/pf/base/dlb_hw_types.h
> @@ -0,0 +1,360 @@
> +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)

GPL-2.0 needs an exception.

> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#ifndef __DLB_HW_TYPES_H
> +#define __DLB_HW_TYPES_H
> +
> + * os_curtime_s() - get the current time (in seconds)
> + * @usecs: delay duration.
> + */
> +static inline unsigned long os_curtime_s(void)
> +{
> +	struct timespec tv;
> +
> +	clock_gettime(CLOCK_MONOTONIC, &tv);

Please use DPDK primitives (see the sketch below).

> +
> +	return (unsigned long)tv.tv_sec;
> +}
> +
> +static inline void os_schedule_work(struct dlb_hw *hw)
> +{
> +	struct dlb_dev *dlb_dev;
> +	pthread_t complete_queue_map_unmap_thread;
> +	int ret;
> +
> +	dlb_dev = container_of(hw, struct dlb_dev, hw);
> +
> +	ret = pthread_create(&complete_queue_map_unmap_thread,
> +			     NULL,
> +			     dlb_complete_queue_map_unmap,
> +			     dlb_dev);

Please use DPDK primitives (see the sketch below).

> +	if (ret)
> +		DLB_ERR(dlb_dev,
> +			"Could not create queue complete map /unmap thread, err=%d\n",
> +			ret);
> +	else
> +		dlb_dev->worker_launched = true;
> +}
> +static inline int os_notify_user_space(struct dlb_hw *hw,
> +				       u32 domain_id,
> +				       u64 alert_id,
> +				       u64 aux_alert_data)
> +{
> +	RTE_SET_USED(hw);
> +	RTE_SET_USED(domain_id);
> +	RTE_SET_USED(alert_id);
> +	RTE_SET_USED(aux_alert_data);
> +
> +	rte_panic("internal_error: %s should never be called for DLB PF PMD\n",
> +		  __func__);

Don't use rte_panic in a library (see the second sketch below).

> +	return -1;
> +}
> +
> +static inline void dlb_bitmap_free(struct dlb_bitmap *bitmap)
> +{
> +	if (!bitmap)
> +		rte_panic("NULL dlb_bitmap in %s\n", __func__);

Another instance of panic.

I have not reviewed the remaining lines. I would request someone from Intel to do a first level of review of the driver patches.
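For the two "use DPDK primitives" comments, a minimal sketch of the kind of thing I have in mind, built on rte_cycles.h and rte_lcore.h. The dlb_dev, DLB_ERR and dlb_complete_queue_map_unmap names are taken from the quoted code; the thread name and the exact structure are only a suggestion, not a drop-in replacement:

#include <pthread.h>
#include <rte_cycles.h>
#include <rte_lcore.h>

/* Seconds of uptime derived from the TSC, instead of clock_gettime(). */
static inline unsigned long os_curtime_s(void)
{
	return (unsigned long)(rte_get_timer_cycles() / rte_get_timer_hz());
}

/* Launch the map/unmap completion worker through the EAL control-thread
 * helper rather than a raw pthread_create(), so it gets DPDK's thread
 * naming and core affinity handling for free.
 */
static inline void os_schedule_work(struct dlb_hw *hw)
{
	struct dlb_dev *dlb_dev = container_of(hw, struct dlb_dev, hw);
	pthread_t tid;
	int ret;

	ret = rte_ctrl_thread_create(&tid, "dlb_unmap_wrk", NULL,
				     dlb_complete_queue_map_unmap, dlb_dev);
	if (ret)
		DLB_ERR(dlb_dev,
			"Could not create queue map/unmap thread, err=%d\n",
			ret);
	else
		dlb_dev->worker_launched = true;
}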
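And for the rte_panic() instances: a driver should log and return an error, and let the application decide what to do. A rough sketch of the alert helper follows; DLB_HW_ERR is only a placeholder for whatever hw-level log macro the base code ends up using (it is not an existing API), and u32/u64 come from the osdep types in this patch. dlb_bitmap_free() can likewise treat a NULL bitmap as a no-op, mirroring free() semantics, instead of panicking.

#include <errno.h>
#include <rte_common.h>

/* Report the unexpected call and return an error code; never take the
 * whole process down from inside a driver.
 */
static inline int os_notify_user_space(struct dlb_hw *hw,
				       u32 domain_id,
				       u64 alert_id,
				       u64 aux_alert_data)
{
	RTE_SET_USED(domain_id);
	RTE_SET_USED(alert_id);
	RTE_SET_USED(aux_alert_data);

	DLB_HW_ERR(hw, "[%s()] should never be called for the DLB PF PMD\n",
		   __func__);

	return -EFAULT;
}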
> + u32 count : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_LSP_LDB_SCH_CNT_L 0x28200000 > +#define DLB_LSP_LDB_SCH_CNT_L_RST 0x0 > +union dlb_lsp_ldb_sch_cnt_l { > + struct { > + u32 count : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_DP_DIR_CSR_CTRL 0x38000018 > +#define DLB_DP_DIR_CSR_CTRL_RST 0xc0000000 > +union dlb_dp_dir_csr_ctrl { > + struct { > + u32 cfg_int_dis : 1; > + u32 cfg_int_dis_sbe : 1; > + u32 cfg_int_dis_mbe : 1; > + u32 spare0 : 27; > + u32 cfg_vasr_dis : 1; > + u32 cfg_int_dis_synd : 1; > + } field; > + u32 val; > +}; > + > +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_1 0x38000014 > +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_1_RST 0xfffefdfc > +union dlb_dp_cfg_ctrl_arb_weights_tqpri_dir_1 { > + struct { > + u32 pri4 : 8; > + u32 pri5 : 8; > + u32 pri6 : 8; > + u32 pri7 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_0 0x38000010 > +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfbfaf9f8 > +union dlb_dp_cfg_ctrl_arb_weights_tqpri_dir_0 { > + struct { > + u32 pri0 : 8; > + u32 pri1 : 8; > + u32 pri2 : 8; > + u32 pri3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1 0x3800000c > +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0xfffefdfc > +union dlb_dp_cfg_ctrl_arb_weights_tqpri_replay_1 { > + struct { > + u32 pri4 : 8; > + u32 pri5 : 8; > + u32 pri6 : 8; > + u32 pri7 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0 0x38000008 > +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfbfaf9f8 > +union dlb_dp_cfg_ctrl_arb_weights_tqpri_replay_0 { > + struct { > + u32 pri0 : 8; > + u32 pri1 : 8; > + u32 pri2 : 8; > + u32 pri3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_1 0x6800001c > +#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_1_RST 0xfffefdfc > +union dlb_nalb_pipe_ctrl_arb_weights_tqpri_nalb_1 { > + struct { > + u32 pri4 : 8; > + u32 pri5 : 8; > + u32 pri6 : 8; > + u32 pri7 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_0 0x68000018 > +#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfbfaf9f8 > +union dlb_nalb_pipe_ctrl_arb_weights_tqpri_nalb_0 { > + struct { > + u32 pri0 : 8; > + u32 pri1 : 8; > + u32 pri2 : 8; > + u32 pri3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_1 > 0x68000014 > +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_1_RST > 0xfffefdfc > +union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_atq_1 { > + struct { > + u32 pri4 : 8; > + u32 pri5 : 8; > + u32 pri6 : 8; > + u32 pri7 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_0 > 0x68000010 > +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_0_RST > 0xfbfaf9f8 > +union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_atq_0 { > + struct { > + u32 pri0 : 8; > + u32 pri1 : 8; > + u32 pri2 : 8; > + u32 pri3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1 > 0x6800000c > +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1_RST > 0xfffefdfc > +union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_replay_1 { > + struct { > + u32 pri4 : 8; > + u32 pri5 : 8; > + u32 pri6 : 8; > + u32 pri7 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0 > 0x68000008 > +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0_RST > 0xfbfaf9f8 > 
+union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_replay_0 { > + struct { > + u32 pri0 : 8; > + u32 pri1 : 8; > + u32 pri2 : 8; > + u32 pri3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_ATM_PIPE_QID_LDB_QID2CQIDX(x, y) \ > + (0x70000000 + (x) * 0x1000 + (y) * 0x4) > +#define DLB_ATM_PIPE_QID_LDB_QID2CQIDX_RST 0x0 > +union dlb_atm_pipe_qid_ldb_qid2cqidx { > + struct { > + u32 cq_p0 : 8; > + u32 cq_p1 : 8; > + u32 cq_p2 : 8; > + u32 cq_p3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_ATM_PIPE_CFG_CTRL_ARB_WEIGHTS_SCHED_BIN 0x7800000c > +#define DLB_ATM_PIPE_CFG_CTRL_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc > +union dlb_atm_pipe_cfg_ctrl_arb_weights_sched_bin { > + struct { > + u32 bin0 : 8; > + u32 bin1 : 8; > + u32 bin2 : 8; > + u32 bin3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_ATM_PIPE_CTRL_ARB_WEIGHTS_RDY_BIN 0x78000008 > +#define DLB_ATM_PIPE_CTRL_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc > +union dlb_atm_pipe_ctrl_arb_weights_rdy_bin { > + struct { > + u32 bin0 : 8; > + u32 bin1 : 8; > + u32 bin2 : 8; > + u32 bin3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_AQED_PIPE_QID_FID_LIM(x) \ > + (0x80000014 + (x) * 0x1000) > +#define DLB_AQED_PIPE_QID_FID_LIM_RST 0x7ff > +union dlb_aqed_pipe_qid_fid_lim { > + struct { > + u32 qid_fid_limit : 13; > + u32 rsvd0 : 19; > + } field; > + u32 val; > +}; > + > +#define DLB_AQED_PIPE_FL_POP_PTR(x) \ > + (0x80000010 + (x) * 0x1000) > +#define DLB_AQED_PIPE_FL_POP_PTR_RST 0x0 > +union dlb_aqed_pipe_fl_pop_ptr { > + struct { > + u32 pop_ptr : 11; > + u32 generation : 1; > + u32 rsvd0 : 20; > + } field; > + u32 val; > +}; > + > +#define DLB_AQED_PIPE_FL_PUSH_PTR(x) \ > + (0x8000000c + (x) * 0x1000) > +#define DLB_AQED_PIPE_FL_PUSH_PTR_RST 0x0 > +union dlb_aqed_pipe_fl_push_ptr { > + struct { > + u32 push_ptr : 11; > + u32 generation : 1; > + u32 rsvd0 : 20; > + } field; > + u32 val; > +}; > + > +#define DLB_AQED_PIPE_FL_BASE(x) \ > + (0x80000008 + (x) * 0x1000) > +#define DLB_AQED_PIPE_FL_BASE_RST 0x0 > +union dlb_aqed_pipe_fl_base { > + struct { > + u32 base : 11; > + u32 rsvd0 : 21; > + } field; > + u32 val; > +}; > + > +#define DLB_AQED_PIPE_FL_LIM(x) \ > + (0x80000004 + (x) * 0x1000) > +#define DLB_AQED_PIPE_FL_LIM_RST 0x800 > +union dlb_aqed_pipe_fl_lim { > + struct { > + u32 limit : 11; > + u32 freelist_disable : 1; > + u32 rsvd0 : 20; > + } field; > + u32 val; > +}; > + > +#define DLB_AQED_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATM_0 > 0x88000008 > +#define DLB_AQED_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATM_0_RST > 0xfffe > +union dlb_aqed_pipe_cfg_ctrl_arb_weights_tqpri_atm_0 { > + struct { > + u32 pri0 : 8; > + u32 pri1 : 8; > + u32 pri2 : 8; > + u32 pri3 : 8; > + } field; > + u32 val; > +}; > + > +#define DLB_RO_PIPE_QID2GRPSLT(x) \ > + (0x90000000 + (x) * 0x1000) > +#define DLB_RO_PIPE_QID2GRPSLT_RST 0x0 > +union dlb_ro_pipe_qid2grpslt { > + struct { > + u32 slot : 5; > + u32 rsvd1 : 3; > + u32 group : 2; > + u32 rsvd0 : 22; > + } field; > + u32 val; > +}; > + > +#define DLB_RO_PIPE_GRP_SN_MODE 0x98000008 > +#define DLB_RO_PIPE_GRP_SN_MODE_RST 0x0 > +union dlb_ro_pipe_grp_sn_mode { > + struct { > + u32 sn_mode_0 : 3; > + u32 reserved0 : 5; > + u32 sn_mode_1 : 3; > + u32 reserved1 : 5; > + u32 sn_mode_2 : 3; > + u32 reserved2 : 5; > + u32 sn_mode_3 : 3; > + u32 reserved3 : 5; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_CFG_DIR_PP_SW_ALARM_EN(x) \ > + (0xa000003c + (x) * 0x1000) > +#define DLB_CHP_CFG_DIR_PP_SW_ALARM_EN_RST 0x1 > +union dlb_chp_cfg_dir_pp_sw_alarm_en { > + struct { > + u32 alarm_enable : 1; > + u32 
rsvd0 : 31; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_WD_ENB(x) \ > + (0xa0000038 + (x) * 0x1000) > +#define DLB_CHP_DIR_CQ_WD_ENB_RST 0x0 > +union dlb_chp_dir_cq_wd_enb { > + struct { > + u32 wd_enable : 1; > + u32 rsvd0 : 31; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_LDB_PP2POOL(x) \ > + (0xa0000034 + (x) * 0x1000) > +#define DLB_CHP_DIR_LDB_PP2POOL_RST 0x0 > +union dlb_chp_dir_ldb_pp2pool { > + struct { > + u32 pool : 6; > + u32 rsvd0 : 26; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_DIR_PP2POOL(x) \ > + (0xa0000030 + (x) * 0x1000) > +#define DLB_CHP_DIR_DIR_PP2POOL_RST 0x0 > +union dlb_chp_dir_dir_pp2pool { > + struct { > + u32 pool : 6; > + u32 rsvd0 : 26; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_LDB_CRD_CNT(x) \ > + (0xa000002c + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_LDB_CRD_CNT_RST 0x0 > +union dlb_chp_dir_pp_ldb_crd_cnt { > + struct { > + u32 count : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_DIR_CRD_CNT(x) \ > + (0xa0000028 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_DIR_CRD_CNT_RST 0x0 > +union dlb_chp_dir_pp_dir_crd_cnt { > + struct { > + u32 count : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_TMR_THRESHOLD(x) \ > + (0xa0000024 + (x) * 0x1000) > +#define DLB_CHP_DIR_CQ_TMR_THRESHOLD_RST 0x0 > +union dlb_chp_dir_cq_tmr_threshold { > + struct { > + u32 timer_thrsh : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_INT_ENB(x) \ > + (0xa0000020 + (x) * 0x1000) > +#define DLB_CHP_DIR_CQ_INT_ENB_RST 0x0 > +union dlb_chp_dir_cq_int_enb { > + struct { > + u32 en_tim : 1; > + u32 en_depth : 1; > + u32 rsvd0 : 30; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \ > + (0xa000001c + (x) * 0x1000) > +#define DLB_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0 > +union dlb_chp_dir_cq_int_depth_thrsh { > + struct { > + u32 depth_threshold : 12; > + u32 rsvd0 : 20; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \ > + (0xa0000018 + (x) * 0x1000) > +#define DLB_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0 > +union dlb_chp_dir_cq_tkn_depth_sel { > + struct { > + u32 token_depth_select : 4; > + u32 rsvd0 : 28; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT(x) \ > + (0xa0000014 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT_RST 0x1 > +union dlb_chp_dir_pp_ldb_min_crd_qnt { > + struct { > + u32 quanta : 10; > + u32 rsvd0 : 22; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT(x) \ > + (0xa0000010 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT_RST 0x1 > +union dlb_chp_dir_pp_dir_min_crd_qnt { > + struct { > + u32 quanta : 10; > + u32 rsvd0 : 22; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_LDB_CRD_LWM(x) \ > + (0xa000000c + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_LDB_CRD_LWM_RST 0x0 > +union dlb_chp_dir_pp_ldb_crd_lwm { > + struct { > + u32 lwm : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_LDB_CRD_HWM(x) \ > + (0xa0000008 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_LDB_CRD_HWM_RST 0x0 > +union dlb_chp_dir_pp_ldb_crd_hwm { > + struct { > + u32 hwm : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_DIR_CRD_LWM(x) \ > + (0xa0000004 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_DIR_CRD_LWM_RST 0x0 > +union dlb_chp_dir_pp_dir_crd_lwm { > + struct { > + u32 lwm : 14; > + u32 rsvd0 : 18; > + } field; > + u32 
val; > +}; > + > +#define DLB_CHP_DIR_PP_DIR_CRD_HWM(x) \ > + (0xa0000000 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_DIR_CRD_HWM_RST 0x0 > +union dlb_chp_dir_pp_dir_crd_hwm { > + struct { > + u32 hwm : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_CFG_LDB_PP_SW_ALARM_EN(x) \ > + (0xa0000148 + (x) * 0x1000) > +#define DLB_CHP_CFG_LDB_PP_SW_ALARM_EN_RST 0x1 > +union dlb_chp_cfg_ldb_pp_sw_alarm_en { > + struct { > + u32 alarm_enable : 1; > + u32 rsvd0 : 31; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_WD_ENB(x) \ > + (0xa0000144 + (x) * 0x1000) > +#define DLB_CHP_LDB_CQ_WD_ENB_RST 0x0 > +union dlb_chp_ldb_cq_wd_enb { > + struct { > + u32 wd_enable : 1; > + u32 rsvd0 : 31; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_SN_CHK_ENBL(x) \ > + (0xa0000140 + (x) * 0x1000) > +#define DLB_CHP_SN_CHK_ENBL_RST 0x0 > +union dlb_chp_sn_chk_enbl { > + struct { > + u32 en : 1; > + u32 rsvd0 : 31; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_HIST_LIST_BASE(x) \ > + (0xa000013c + (x) * 0x1000) > +#define DLB_CHP_HIST_LIST_BASE_RST 0x0 > +union dlb_chp_hist_list_base { > + struct { > + u32 base : 13; > + u32 rsvd0 : 19; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_HIST_LIST_LIM(x) \ > + (0xa0000138 + (x) * 0x1000) > +#define DLB_CHP_HIST_LIST_LIM_RST 0x0 > +union dlb_chp_hist_list_lim { > + struct { > + u32 limit : 13; > + u32 rsvd0 : 19; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_LDB_PP2POOL(x) \ > + (0xa0000134 + (x) * 0x1000) > +#define DLB_CHP_LDB_LDB_PP2POOL_RST 0x0 > +union dlb_chp_ldb_ldb_pp2pool { > + struct { > + u32 pool : 6; > + u32 rsvd0 : 26; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_DIR_PP2POOL(x) \ > + (0xa0000130 + (x) * 0x1000) > +#define DLB_CHP_LDB_DIR_PP2POOL_RST 0x0 > +union dlb_chp_ldb_dir_pp2pool { > + struct { > + u32 pool : 6; > + u32 rsvd0 : 26; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_LDB_CRD_CNT(x) \ > + (0xa000012c + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_LDB_CRD_CNT_RST 0x0 > +union dlb_chp_ldb_pp_ldb_crd_cnt { > + struct { > + u32 count : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_DIR_CRD_CNT(x) \ > + (0xa0000128 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_DIR_CRD_CNT_RST 0x0 > +union dlb_chp_ldb_pp_dir_crd_cnt { > + struct { > + u32 count : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_TMR_THRESHOLD(x) \ > + (0xa0000124 + (x) * 0x1000) > +#define DLB_CHP_LDB_CQ_TMR_THRESHOLD_RST 0x0 > +union dlb_chp_ldb_cq_tmr_threshold { > + struct { > + u32 thrsh : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_INT_ENB(x) \ > + (0xa0000120 + (x) * 0x1000) > +#define DLB_CHP_LDB_CQ_INT_ENB_RST 0x0 > +union dlb_chp_ldb_cq_int_enb { > + struct { > + u32 en_tim : 1; > + u32 en_depth : 1; > + u32 rsvd0 : 30; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \ > + (0xa000011c + (x) * 0x1000) > +#define DLB_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0 > +union dlb_chp_ldb_cq_int_depth_thrsh { > + struct { > + u32 depth_threshold : 12; > + u32 rsvd0 : 20; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \ > + (0xa0000118 + (x) * 0x1000) > +#define DLB_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0 > +union dlb_chp_ldb_cq_tkn_depth_sel { > + struct { > + u32 token_depth_select : 4; > + u32 rsvd0 : 28; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT(x) \ > + (0xa0000114 + (x) * 0x1000) > 
+#define DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT_RST 0x1 > +union dlb_chp_ldb_pp_ldb_min_crd_qnt { > + struct { > + u32 quanta : 10; > + u32 rsvd0 : 22; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT(x) \ > + (0xa0000110 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT_RST 0x1 > +union dlb_chp_ldb_pp_dir_min_crd_qnt { > + struct { > + u32 quanta : 10; > + u32 rsvd0 : 22; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_LDB_CRD_LWM(x) \ > + (0xa000010c + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_LDB_CRD_LWM_RST 0x0 > +union dlb_chp_ldb_pp_ldb_crd_lwm { > + struct { > + u32 lwm : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_LDB_CRD_HWM(x) \ > + (0xa0000108 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_LDB_CRD_HWM_RST 0x0 > +union dlb_chp_ldb_pp_ldb_crd_hwm { > + struct { > + u32 hwm : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_DIR_CRD_LWM(x) \ > + (0xa0000104 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_DIR_CRD_LWM_RST 0x0 > +union dlb_chp_ldb_pp_dir_crd_lwm { > + struct { > + u32 lwm : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_DIR_CRD_HWM(x) \ > + (0xa0000100 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_DIR_CRD_HWM_RST 0x0 > +union dlb_chp_ldb_pp_dir_crd_hwm { > + struct { > + u32 hwm : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_DEPTH(x) \ > + (0xa0000218 + (x) * 0x1000) > +#define DLB_CHP_DIR_CQ_DEPTH_RST 0x0 > +union dlb_chp_dir_cq_depth { > + struct { > + u32 cq_depth : 11; > + u32 rsvd0 : 21; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_WPTR(x) \ > + (0xa0000214 + (x) * 0x1000) > +#define DLB_CHP_DIR_CQ_WPTR_RST 0x0 > +union dlb_chp_dir_cq_wptr { > + struct { > + u32 write_pointer : 10; > + u32 rsvd0 : 22; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_LDB_PUSH_PTR(x) \ > + (0xa0000210 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_LDB_PUSH_PTR_RST 0x0 > +union dlb_chp_dir_pp_ldb_push_ptr { > + struct { > + u32 push_pointer : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_DIR_PUSH_PTR(x) \ > + (0xa000020c + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_DIR_PUSH_PTR_RST 0x0 > +union dlb_chp_dir_pp_dir_push_ptr { > + struct { > + u32 push_pointer : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_STATE_RESET(x) \ > + (0xa0000204 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_STATE_RESET_RST 0x0 > +union dlb_chp_dir_pp_state_reset { > + struct { > + u32 rsvd1 : 7; > + u32 dir_type : 1; > + u32 rsvd0 : 23; > + u32 reset_pp_state : 1; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_PP_CRD_REQ_STATE(x) \ > + (0xa0000200 + (x) * 0x1000) > +#define DLB_CHP_DIR_PP_CRD_REQ_STATE_RST 0x0 > +union dlb_chp_dir_pp_crd_req_state { > + struct { > + u32 dir_crd_req_active_valid : 1; > + u32 dir_crd_req_active_check : 1; > + u32 dir_crd_req_active_busy : 1; > + u32 rsvd1 : 1; > + u32 ldb_crd_req_active_valid : 1; > + u32 ldb_crd_req_active_check : 1; > + u32 ldb_crd_req_active_busy : 1; > + u32 rsvd0 : 1; > + u32 no_pp_credit_update : 1; > + u32 crd_req_state : 23; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_DEPTH(x) \ > + (0xa0000320 + (x) * 0x1000) > +#define DLB_CHP_LDB_CQ_DEPTH_RST 0x0 > +union dlb_chp_ldb_cq_depth { > + struct { > + u32 depth : 11; > + u32 reserved : 2; > + u32 rsvd0 : 19; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_WPTR(x) \ > + (0xa000031c + (x) 
* 0x1000) > +#define DLB_CHP_LDB_CQ_WPTR_RST 0x0 > +union dlb_chp_ldb_cq_wptr { > + struct { > + u32 write_pointer : 10; > + u32 rsvd0 : 22; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_LDB_PUSH_PTR(x) \ > + (0xa0000318 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_LDB_PUSH_PTR_RST 0x0 > +union dlb_chp_ldb_pp_ldb_push_ptr { > + struct { > + u32 push_pointer : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_DIR_PUSH_PTR(x) \ > + (0xa0000314 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_DIR_PUSH_PTR_RST 0x0 > +union dlb_chp_ldb_pp_dir_push_ptr { > + struct { > + u32 push_pointer : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_HIST_LIST_POP_PTR(x) \ > + (0xa000030c + (x) * 0x1000) > +#define DLB_CHP_HIST_LIST_POP_PTR_RST 0x0 > +union dlb_chp_hist_list_pop_ptr { > + struct { > + u32 pop_ptr : 13; > + u32 generation : 1; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_HIST_LIST_PUSH_PTR(x) \ > + (0xa0000308 + (x) * 0x1000) > +#define DLB_CHP_HIST_LIST_PUSH_PTR_RST 0x0 > +union dlb_chp_hist_list_push_ptr { > + struct { > + u32 push_ptr : 13; > + u32 generation : 1; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_STATE_RESET(x) \ > + (0xa0000304 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_STATE_RESET_RST 0x0 > +union dlb_chp_ldb_pp_state_reset { > + struct { > + u32 rsvd1 : 7; > + u32 dir_type : 1; > + u32 rsvd0 : 23; > + u32 reset_pp_state : 1; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_PP_CRD_REQ_STATE(x) \ > + (0xa0000300 + (x) * 0x1000) > +#define DLB_CHP_LDB_PP_CRD_REQ_STATE_RST 0x0 > +union dlb_chp_ldb_pp_crd_req_state { > + struct { > + u32 dir_crd_req_active_valid : 1; > + u32 dir_crd_req_active_check : 1; > + u32 dir_crd_req_active_busy : 1; > + u32 rsvd1 : 1; > + u32 ldb_crd_req_active_valid : 1; > + u32 ldb_crd_req_active_check : 1; > + u32 ldb_crd_req_active_busy : 1; > + u32 rsvd0 : 1; > + u32 no_pp_credit_update : 1; > + u32 crd_req_state : 23; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_ORD_QID_SN(x) \ > + (0xa0000408 + (x) * 0x1000) > +#define DLB_CHP_ORD_QID_SN_RST 0x0 > +union dlb_chp_ord_qid_sn { > + struct { > + u32 sn : 12; > + u32 rsvd0 : 20; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_ORD_QID_SN_MAP(x) \ > + (0xa0000404 + (x) * 0x1000) > +#define DLB_CHP_ORD_QID_SN_MAP_RST 0x0 > +union dlb_chp_ord_qid_sn_map { > + struct { > + u32 mode : 3; > + u32 slot : 5; > + u32 grp : 2; > + u32 rsvd0 : 22; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_POOL_CRD_CNT(x) \ > + (0xa000050c + (x) * 0x1000) > +#define DLB_CHP_LDB_POOL_CRD_CNT_RST 0x0 > +union dlb_chp_ldb_pool_crd_cnt { > + struct { > + u32 count : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_QED_FL_BASE(x) \ > + (0xa0000508 + (x) * 0x1000) > +#define DLB_CHP_QED_FL_BASE_RST 0x0 > +union dlb_chp_qed_fl_base { > + struct { > + u32 base : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_QED_FL_LIM(x) \ > + (0xa0000504 + (x) * 0x1000) > +#define DLB_CHP_QED_FL_LIM_RST 0x8000 > +union dlb_chp_qed_fl_lim { > + struct { > + u32 limit : 14; > + u32 rsvd1 : 1; > + u32 freelist_disable : 1; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_POOL_CRD_LIM(x) \ > + (0xa0000500 + (x) * 0x1000) > +#define DLB_CHP_LDB_POOL_CRD_LIM_RST 0x0 > +union dlb_chp_ldb_pool_crd_lim { > + struct { > + u32 limit : 16; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > 
+#define DLB_CHP_QED_FL_POP_PTR(x) \ > + (0xa0000604 + (x) * 0x1000) > +#define DLB_CHP_QED_FL_POP_PTR_RST 0x0 > +union dlb_chp_qed_fl_pop_ptr { > + struct { > + u32 pop_ptr : 14; > + u32 reserved0 : 1; > + u32 generation : 1; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_QED_FL_PUSH_PTR(x) \ > + (0xa0000600 + (x) * 0x1000) > +#define DLB_CHP_QED_FL_PUSH_PTR_RST 0x0 > +union dlb_chp_qed_fl_push_ptr { > + struct { > + u32 push_ptr : 14; > + u32 reserved0 : 1; > + u32 generation : 1; > + u32 rsvd0 : 16; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_POOL_CRD_CNT(x) \ > + (0xa000070c + (x) * 0x1000) > +#define DLB_CHP_DIR_POOL_CRD_CNT_RST 0x0 > +union dlb_chp_dir_pool_crd_cnt { > + struct { > + u32 count : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DQED_FL_BASE(x) \ > + (0xa0000708 + (x) * 0x1000) > +#define DLB_CHP_DQED_FL_BASE_RST 0x0 > +union dlb_chp_dqed_fl_base { > + struct { > + u32 base : 12; > + u32 rsvd0 : 20; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DQED_FL_LIM(x) \ > + (0xa0000704 + (x) * 0x1000) > +#define DLB_CHP_DQED_FL_LIM_RST 0x2000 > +union dlb_chp_dqed_fl_lim { > + struct { > + u32 limit : 12; > + u32 rsvd1 : 1; > + u32 freelist_disable : 1; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_POOL_CRD_LIM(x) \ > + (0xa0000700 + (x) * 0x1000) > +#define DLB_CHP_DIR_POOL_CRD_LIM_RST 0x0 > +union dlb_chp_dir_pool_crd_lim { > + struct { > + u32 limit : 14; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DQED_FL_POP_PTR(x) \ > + (0xa0000804 + (x) * 0x1000) > +#define DLB_CHP_DQED_FL_POP_PTR_RST 0x0 > +union dlb_chp_dqed_fl_pop_ptr { > + struct { > + u32 pop_ptr : 12; > + u32 reserved0 : 1; > + u32 generation : 1; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DQED_FL_PUSH_PTR(x) \ > + (0xa0000800 + (x) * 0x1000) > +#define DLB_CHP_DQED_FL_PUSH_PTR_RST 0x0 > +union dlb_chp_dqed_fl_push_ptr { > + struct { > + u32 push_ptr : 12; > + u32 reserved0 : 1; > + u32 generation : 1; > + u32 rsvd0 : 18; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_CTRL_DIAG_02 0xa8000154 > +#define DLB_CHP_CTRL_DIAG_02_RST 0x0 > +union dlb_chp_ctrl_diag_02 { > + struct { > + u32 control : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_CFG_CHP_CSR_CTRL 0xa8000130 > +#define DLB_CHP_CFG_CHP_CSR_CTRL_RST 0xc0003fff > +#define DLB_CHP_CFG_EXCESS_TOKENS_SHIFT 12 > +union dlb_chp_cfg_chp_csr_ctrl { > + struct { > + u32 int_inf_alarm_enable_0 : 1; > + u32 int_inf_alarm_enable_1 : 1; > + u32 int_inf_alarm_enable_2 : 1; > + u32 int_inf_alarm_enable_3 : 1; > + u32 int_inf_alarm_enable_4 : 1; > + u32 int_inf_alarm_enable_5 : 1; > + u32 int_inf_alarm_enable_6 : 1; > + u32 int_inf_alarm_enable_7 : 1; > + u32 int_inf_alarm_enable_8 : 1; > + u32 int_inf_alarm_enable_9 : 1; > + u32 int_inf_alarm_enable_10 : 1; > + u32 int_inf_alarm_enable_11 : 1; > + u32 int_inf_alarm_enable_12 : 1; > + u32 int_cor_alarm_enable : 1; > + u32 csr_control_spare : 14; > + u32 cfg_vasr_dis : 1; > + u32 counter_clear : 1; > + u32 blk_cor_report : 1; > + u32 blk_cor_synd : 1; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_INTR_ARMED1 0xa8000068 > +#define DLB_CHP_LDB_CQ_INTR_ARMED1_RST 0x0 > +union dlb_chp_ldb_cq_intr_armed1 { > + struct { > + u32 armed : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_LDB_CQ_INTR_ARMED0 0xa8000064 > +#define DLB_CHP_LDB_CQ_INTR_ARMED0_RST 0x0 > +union dlb_chp_ldb_cq_intr_armed0 { > + struct { > + 
u32 armed : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_INTR_ARMED3 0xa8000024 > +#define DLB_CHP_DIR_CQ_INTR_ARMED3_RST 0x0 > +union dlb_chp_dir_cq_intr_armed3 { > + struct { > + u32 armed : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_INTR_ARMED2 0xa8000020 > +#define DLB_CHP_DIR_CQ_INTR_ARMED2_RST 0x0 > +union dlb_chp_dir_cq_intr_armed2 { > + struct { > + u32 armed : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_INTR_ARMED1 0xa800001c > +#define DLB_CHP_DIR_CQ_INTR_ARMED1_RST 0x0 > +union dlb_chp_dir_cq_intr_armed1 { > + struct { > + u32 armed : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_CHP_DIR_CQ_INTR_ARMED0 0xa8000018 > +#define DLB_CHP_DIR_CQ_INTR_ARMED0_RST 0x0 > +union dlb_chp_dir_cq_intr_armed0 { > + struct { > + u32 armed : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_CFG_MSTR_DIAG_RESET_STS 0xb8000004 > +#define DLB_CFG_MSTR_DIAG_RESET_STS_RST 0x1ff > +union dlb_cfg_mstr_diag_reset_sts { > + struct { > + u32 chp_pf_reset_done : 1; > + u32 rop_pf_reset_done : 1; > + u32 lsp_pf_reset_done : 1; > + u32 nalb_pf_reset_done : 1; > + u32 ap_pf_reset_done : 1; > + u32 dp_pf_reset_done : 1; > + u32 qed_pf_reset_done : 1; > + u32 dqed_pf_reset_done : 1; > + u32 aqed_pf_reset_done : 1; > + u32 rsvd1 : 6; > + u32 pf_reset_active : 1; > + u32 chp_vf_reset_done : 1; > + u32 rop_vf_reset_done : 1; > + u32 lsp_vf_reset_done : 1; > + u32 nalb_vf_reset_done : 1; > + u32 ap_vf_reset_done : 1; > + u32 dp_vf_reset_done : 1; > + u32 qed_vf_reset_done : 1; > + u32 dqed_vf_reset_done : 1; > + u32 aqed_vf_reset_done : 1; > + u32 rsvd0 : 6; > + u32 vf_reset_active : 1; > + } field; > + u32 val; > +}; > + > +#define DLB_CFG_MSTR_BCAST_RESET_VF_START 0xc8100000 > +#define DLB_CFG_MSTR_BCAST_RESET_VF_START_RST 0x0 > +/* HW Reset Types */ > +#define VF_RST_TYPE_CQ_LDB 0 > +#define VF_RST_TYPE_QID_LDB 1 > +#define VF_RST_TYPE_POOL_LDB 2 > +#define VF_RST_TYPE_CQ_DIR 8 > +#define VF_RST_TYPE_QID_DIR 9 > +#define VF_RST_TYPE_POOL_DIR 10 > +union dlb_cfg_mstr_bcast_reset_vf_start { > + struct { > + u32 vf_reset_start : 1; > + u32 reserved : 3; > + u32 vf_reset_type : 4; > + u32 vf_reset_id : 24; > + } field; > + u32 val; > +}; > + > +#define DLB_FUNC_VF_VF2PF_MAILBOX_BYTES 256 > +#define DLB_FUNC_VF_VF2PF_MAILBOX(x) \ > + (0x1000 + (x) * 0x4) > +#define DLB_FUNC_VF_VF2PF_MAILBOX_RST 0x0 > +union dlb_func_vf_vf2pf_mailbox { > + struct { > + u32 msg : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00 > +#define DLB_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0 > +union dlb_func_vf_vf2pf_mailbox_isr { > + struct { > + u32 isr : 1; > + u32 rsvd0 : 31; > + } field; > + u32 val; > +}; > + > +#define DLB_FUNC_VF_PF2VF_MAILBOX_BYTES 64 > +#define DLB_FUNC_VF_PF2VF_MAILBOX(x) \ > + (0x2000 + (x) * 0x4) > +#define DLB_FUNC_VF_PF2VF_MAILBOX_RST 0x0 > +union dlb_func_vf_pf2vf_mailbox { > + struct { > + u32 msg : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00 > +#define DLB_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0 > +union dlb_func_vf_pf2vf_mailbox_isr { > + struct { > + u32 pf_isr : 1; > + u32 rsvd0 : 31; > + } field; > + u32 val; > +}; > + > +#define DLB_FUNC_VF_VF_MSI_ISR_PEND 0x2f10 > +#define DLB_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0 > +union dlb_func_vf_vf_msi_isr_pend { > + struct { > + u32 isr_pend : 32; > + } field; > + u32 val; > +}; > + > +#define DLB_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000 > +#define DLB_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1 > +union 
dlb_func_vf_vf_reset_in_progress { > + struct { > + u32 reset_in_progress : 1; > + u32 rsvd0 : 31; > + } field; > + u32 val; > +}; > + > +#define DLB_FUNC_VF_VF_MSI_ISR 0x4000 > +#define DLB_FUNC_VF_VF_MSI_ISR_RST 0x0 > +union dlb_func_vf_vf_msi_isr { > + struct { > + u32 vf_msi_isr : 32; > + } field; > + u32 val; > +}; > + > +#endif /* __DLB_REGS_H */ > diff --git a/drivers/event/dlb/pf/base/dlb_resource.c > b/drivers/event/dlb/pf/base/dlb_resource.c > new file mode 100644 > index 0000000..51265b9 > --- /dev/null > +++ b/drivers/event/dlb/pf/base/dlb_resource.c > @@ -0,0 +1,9722 @@ > +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) > + * Copyright(c) 2016-2020 Intel Corporation > + */ > + > +#include "dlb_hw_types.h" > +#include "dlb_user.h" > +#include "dlb_resource.h" > +#include "dlb_osdep.h" > +#include "dlb_osdep_bitmap.h" > +#include "dlb_osdep_types.h" > +#include "dlb_regs.h" > +#include "dlb_mbox.h" > + > +#define DLB_DOM_LIST_HEAD(head, type) \ > + DLB_LIST_HEAD((head), type, domain_list) > + > +#define DLB_FUNC_LIST_HEAD(head, type) \ > + DLB_LIST_HEAD((head), type, func_list) > + > +#define DLB_DOM_LIST_FOR(head, ptr, iter) \ > + DLB_LIST_FOR_EACH(head, ptr, domain_list, iter) > + > +#define DLB_FUNC_LIST_FOR(head, ptr, iter) \ > + DLB_LIST_FOR_EACH(head, ptr, func_list, iter) > + > +#define DLB_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \ > + DLB_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, > it_tmp) > + > +#define DLB_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \ > + DLB_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp) > + > +/* The PF driver cannot assume that a register write will affect subsequ= ent > HCW > + * writes. To ensure a write completes, the driver must read back a CSR.= This > + * function only need be called for configuration that can occur after t= he > + * domain has started; prior to starting, applications can't send HCWs. > + */ > +static inline void dlb_flush_csr(struct dlb_hw *hw) > +{ > + DLB_CSR_RD(hw, DLB_SYS_TOTAL_VAS); > +} > + > +static void dlb_init_fn_rsrc_lists(struct dlb_function_resources *rsrc) > +{ > + dlb_list_init_head(&rsrc->avail_domains); > + dlb_list_init_head(&rsrc->used_domains); > + dlb_list_init_head(&rsrc->avail_ldb_queues); > + dlb_list_init_head(&rsrc->avail_ldb_ports); > + dlb_list_init_head(&rsrc->avail_dir_pq_pairs); > + dlb_list_init_head(&rsrc->avail_ldb_credit_pools); > + dlb_list_init_head(&rsrc->avail_dir_credit_pools); > +} > + > +static void dlb_init_domain_rsrc_lists(struct dlb_domain *domain) > +{ > + dlb_list_init_head(&domain->used_ldb_queues); > + dlb_list_init_head(&domain->used_ldb_ports); > + dlb_list_init_head(&domain->used_dir_pq_pairs); > + dlb_list_init_head(&domain->used_ldb_credit_pools); > + dlb_list_init_head(&domain->used_dir_credit_pools); > + dlb_list_init_head(&domain->avail_ldb_queues); > + dlb_list_init_head(&domain->avail_ldb_ports); > + dlb_list_init_head(&domain->avail_dir_pq_pairs); > + dlb_list_init_head(&domain->avail_ldb_credit_pools); > + dlb_list_init_head(&domain->avail_dir_credit_pools); > +} > + > +int dlb_resource_init(struct dlb_hw *hw) > +{ > + struct dlb_list_entry *list; > + unsigned int i; > + > + /* For optimal load-balancing, ports that map to one or more QIDs in > + * common should not be in numerical sequence. This is application > + * dependent, but the driver interleaves port IDs as much as possible > + * to reduce the likelihood of this. 
This initial allocation maximizes > + * the average distance between an ID and its immediate neighbors (i.e. > + * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to > + * 3, etc.). > + */ > + u32 init_ldb_port_allocation[DLB_MAX_NUM_LDB_PORTS] =3D { > + 0, 31, 62, 29, 60, 27, 58, 25, 56, 23, 54, 21, 52, 19, 50, 17, > + 48, 15, 46, 13, 44, 11, 42, 9, 40, 7, 38, 5, 36, 3, 34, 1, > + 32, 63, 30, 61, 28, 59, 26, 57, 24, 55, 22, 53, 20, 51, 18, 49, > + 16, 47, 14, 45, 12, 43, 10, 41, 8, 39, 6, 37, 4, 35, 2, 33 > + }; > + > + /* Zero-out resource tracking data structures */ > + memset(&hw->rsrcs, 0, sizeof(hw->rsrcs)); > + memset(&hw->pf, 0, sizeof(hw->pf)); > + > + dlb_init_fn_rsrc_lists(&hw->pf); > + > + for (i =3D 0; i < DLB_MAX_NUM_VFS; i++) { > + memset(&hw->vf[i], 0, sizeof(hw->vf[i])); > + dlb_init_fn_rsrc_lists(&hw->vf[i]); > + } > + > + for (i =3D 0; i < DLB_MAX_NUM_DOMAINS; i++) { > + memset(&hw->domains[i], 0, sizeof(hw->domains[i])); > + dlb_init_domain_rsrc_lists(&hw->domains[i]); > + hw->domains[i].parent_func =3D &hw->pf; > + } > + > + /* Give all resources to the PF driver */ > + hw->pf.num_avail_domains =3D DLB_MAX_NUM_DOMAINS; > + for (i =3D 0; i < hw->pf.num_avail_domains; i++) { > + list =3D &hw->domains[i].func_list; > + > + dlb_list_add(&hw->pf.avail_domains, list); > + } > + > + hw->pf.num_avail_ldb_queues =3D DLB_MAX_NUM_LDB_QUEUES; > + for (i =3D 0; i < hw->pf.num_avail_ldb_queues; i++) { > + list =3D &hw->rsrcs.ldb_queues[i].func_list; > + > + dlb_list_add(&hw->pf.avail_ldb_queues, list); > + } > + > + hw->pf.num_avail_ldb_ports =3D DLB_MAX_NUM_LDB_PORTS; > + for (i =3D 0; i < hw->pf.num_avail_ldb_ports; i++) { > + struct dlb_ldb_port *port; > + > + port =3D &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]]; > + > + dlb_list_add(&hw->pf.avail_ldb_ports, &port->func_list); > + } > + > + hw->pf.num_avail_dir_pq_pairs =3D DLB_MAX_NUM_DIR_PORTS; > + for (i =3D 0; i < hw->pf.num_avail_dir_pq_pairs; i++) { > + list =3D &hw->rsrcs.dir_pq_pairs[i].func_list; > + > + dlb_list_add(&hw->pf.avail_dir_pq_pairs, list); > + } > + > + hw->pf.num_avail_ldb_credit_pools =3D > DLB_MAX_NUM_LDB_CREDIT_POOLS; > + for (i =3D 0; i < hw->pf.num_avail_ldb_credit_pools; i++) { > + list =3D &hw->rsrcs.ldb_credit_pools[i].func_list; > + > + dlb_list_add(&hw->pf.avail_ldb_credit_pools, list); > + } > + > + hw->pf.num_avail_dir_credit_pools =3D > DLB_MAX_NUM_DIR_CREDIT_POOLS; > + for (i =3D 0; i < hw->pf.num_avail_dir_credit_pools; i++) { > + list =3D &hw->rsrcs.dir_credit_pools[i].func_list; > + > + dlb_list_add(&hw->pf.avail_dir_credit_pools, list); > + } > + > + /* There are 5120 history list entries, which allows us to overprovisio= n > + * the inflight limit (4096) by 1k. 
> + */ > + if (dlb_bitmap_alloc(hw, > + &hw->pf.avail_hist_list_entries, > + DLB_MAX_NUM_HIST_LIST_ENTRIES)) > + return -1; > + > + if (dlb_bitmap_fill(hw->pf.avail_hist_list_entries)) > + return -1; > + > + if (dlb_bitmap_alloc(hw, > + &hw->pf.avail_qed_freelist_entries, > + DLB_MAX_NUM_LDB_CREDITS)) > + return -1; > + > + if (dlb_bitmap_fill(hw->pf.avail_qed_freelist_entries)) > + return -1; > + > + if (dlb_bitmap_alloc(hw, > + &hw->pf.avail_dqed_freelist_entries, > + DLB_MAX_NUM_DIR_CREDITS)) > + return -1; > + > + if (dlb_bitmap_fill(hw->pf.avail_dqed_freelist_entries)) > + return -1; > + > + if (dlb_bitmap_alloc(hw, > + &hw->pf.avail_aqed_freelist_entries, > + DLB_MAX_NUM_AQOS_ENTRIES)) > + return -1; > + > + if (dlb_bitmap_fill(hw->pf.avail_aqed_freelist_entries)) > + return -1; > + > + for (i =3D 0; i < DLB_MAX_NUM_VFS; i++) { > + if (dlb_bitmap_alloc(hw, > + &hw->vf[i].avail_hist_list_entries, > + DLB_MAX_NUM_HIST_LIST_ENTRIES)) > + return -1; > + if (dlb_bitmap_alloc(hw, > + &hw->vf[i].avail_qed_freelist_entries, > + DLB_MAX_NUM_LDB_CREDITS)) > + return -1; > + if (dlb_bitmap_alloc(hw, > + &hw->vf[i].avail_dqed_freelist_entries, > + DLB_MAX_NUM_DIR_CREDITS)) > + return -1; > + if (dlb_bitmap_alloc(hw, > + &hw->vf[i].avail_aqed_freelist_entries, > + DLB_MAX_NUM_AQOS_ENTRIES)) > + return -1; > + > + if (dlb_bitmap_zero(hw->vf[i].avail_hist_list_entries)) > + return -1; > + > + if (dlb_bitmap_zero(hw->vf[i].avail_qed_freelist_entries)) > + return -1; > + > + if (dlb_bitmap_zero(hw->vf[i].avail_dqed_freelist_entries)) > + return -1; > + > + if (dlb_bitmap_zero(hw->vf[i].avail_aqed_freelist_entries)) > + return -1; > + } > + > + /* Initialize the hardware resource IDs */ > + for (i =3D 0; i < DLB_MAX_NUM_DOMAINS; i++) { > + hw->domains[i].id.phys_id =3D i; > + hw->domains[i].id.vf_owned =3D false; > + } > + > + for (i =3D 0; i < DLB_MAX_NUM_LDB_QUEUES; i++) { > + hw->rsrcs.ldb_queues[i].id.phys_id =3D i; > + hw->rsrcs.ldb_queues[i].id.vf_owned =3D false; > + } > + > + for (i =3D 0; i < DLB_MAX_NUM_LDB_PORTS; i++) { > + hw->rsrcs.ldb_ports[i].id.phys_id =3D i; > + hw->rsrcs.ldb_ports[i].id.vf_owned =3D false; > + } > + > + for (i =3D 0; i < DLB_MAX_NUM_DIR_PORTS; i++) { > + hw->rsrcs.dir_pq_pairs[i].id.phys_id =3D i; > + hw->rsrcs.dir_pq_pairs[i].id.vf_owned =3D false; > + } > + > + for (i =3D 0; i < DLB_MAX_NUM_LDB_CREDIT_POOLS; i++) { > + hw->rsrcs.ldb_credit_pools[i].id.phys_id =3D i; > + hw->rsrcs.ldb_credit_pools[i].id.vf_owned =3D false; > + } > + > + for (i =3D 0; i < DLB_MAX_NUM_DIR_CREDIT_POOLS; i++) { > + hw->rsrcs.dir_credit_pools[i].id.phys_id =3D i; > + hw->rsrcs.dir_credit_pools[i].id.vf_owned =3D false; > + } > + > + for (i =3D 0; i < DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) { > + hw->rsrcs.sn_groups[i].id =3D i; > + /* Default mode (0) is 32 sequence numbers per queue */ > + hw->rsrcs.sn_groups[i].mode =3D 0; > + hw->rsrcs.sn_groups[i].sequence_numbers_per_queue =3D 32; > + hw->rsrcs.sn_groups[i].slot_use_bitmap =3D 0; > + } > + > + return 0; > +} > + > +void dlb_resource_free(struct dlb_hw *hw) > +{ > + int i; > + > + dlb_bitmap_free(hw->pf.avail_hist_list_entries); > + > + dlb_bitmap_free(hw->pf.avail_qed_freelist_entries); > + > + dlb_bitmap_free(hw->pf.avail_dqed_freelist_entries); > + > + dlb_bitmap_free(hw->pf.avail_aqed_freelist_entries); > + > + for (i =3D 0; i < DLB_MAX_NUM_VFS; i++) { > + dlb_bitmap_free(hw->vf[i].avail_hist_list_entries); > + dlb_bitmap_free(hw->vf[i].avail_qed_freelist_entries); > + dlb_bitmap_free(hw->vf[i].avail_dqed_freelist_entries); > + 
dlb_bitmap_free(hw->vf[i].avail_aqed_freelist_entries); > + } > +} > + > +static struct dlb_domain *dlb_get_domain_from_id(struct dlb_hw *hw, > + u32 id, > + bool vf_request, > + unsigned int vf_id) > +{ > + struct dlb_list_entry *iter __attribute__((unused)); > + struct dlb_function_resources *rsrcs; > + struct dlb_domain *domain; > + > + if (id >=3D DLB_MAX_NUM_DOMAINS) > + return NULL; > + > + if (!vf_request) > + return &hw->domains[id]; > + > + rsrcs =3D &hw->vf[vf_id]; > + > + DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter) > + if (domain->id.virt_id =3D=3D id) > + return domain; > + > + return NULL; > +} > + > +static struct dlb_credit_pool * > +dlb_get_domain_ldb_pool(u32 id, > + bool vf_request, > + struct dlb_domain *domain) > +{ > + struct dlb_list_entry *iter __attribute__((unused)); > + struct dlb_credit_pool *pool; > + > + if (id >=3D DLB_MAX_NUM_LDB_CREDIT_POOLS) > + return NULL; > + > + DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter) > + if ((!vf_request && pool->id.phys_id =3D=3D id) || > + (vf_request && pool->id.virt_id =3D=3D id)) > + return pool; > + > + return NULL; > +} > + > +static struct dlb_credit_pool * > +dlb_get_domain_dir_pool(u32 id, > + bool vf_request, > + struct dlb_domain *domain) > +{ > + struct dlb_list_entry *iter __attribute__((unused)); > + struct dlb_credit_pool *pool; > + > + if (id >=3D DLB_MAX_NUM_DIR_CREDIT_POOLS) > + return NULL; > + > + DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter) > + if ((!vf_request && pool->id.phys_id =3D=3D id) || > + (vf_request && pool->id.virt_id =3D=3D id)) > + return pool; > + > + return NULL; > +} > + > +static struct dlb_ldb_port *dlb_get_ldb_port_from_id(struct dlb_hw *hw, > + u32 id, > + bool vf_request, > + unsigned int vf_id) > +{ > + struct dlb_list_entry *iter1 __attribute__((unused)); > + struct dlb_list_entry *iter2 __attribute__((unused)); > + struct dlb_function_resources *rsrcs; > + struct dlb_ldb_port *port; > + struct dlb_domain *domain; > + > + if (id >=3D DLB_MAX_NUM_LDB_PORTS) > + return NULL; > + > + rsrcs =3D (vf_request) ? 
&hw->vf[vf_id] : &hw->pf; > + > + if (!vf_request) > + return &hw->rsrcs.ldb_ports[id]; > + > + DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) { > + DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter2) > + if (port->id.virt_id =3D=3D id) > + return port; > + } > + > + DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter1) > + if (port->id.virt_id =3D=3D id) > + return port; > + > + return NULL; > +} > + > +static struct dlb_ldb_port * > +dlb_get_domain_used_ldb_port(u32 id, > + bool vf_request, > + struct dlb_domain *domain) > +{ > + struct dlb_list_entry *iter __attribute__((unused)); > + struct dlb_ldb_port *port; > + > + if (id >=3D DLB_MAX_NUM_LDB_PORTS) > + return NULL; > + > + DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) > + if ((!vf_request && port->id.phys_id =3D=3D id) || > + (vf_request && port->id.virt_id =3D=3D id)) > + return port; > + > + DLB_DOM_LIST_FOR(domain->avail_ldb_ports, port, iter) > + if ((!vf_request && port->id.phys_id =3D=3D id) || > + (vf_request && port->id.virt_id =3D=3D id)) > + return port; > + > + return NULL; > +} > + > +static struct dlb_ldb_port *dlb_get_domain_ldb_port(u32 id, > + bool vf_request, > + struct dlb_domain *domain) > +{ > + struct dlb_list_entry *iter __attribute__((unused)); > + struct dlb_ldb_port *port; > + > + if (id >=3D DLB_MAX_NUM_LDB_PORTS) > + return NULL; > + > + DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter) > + if ((!vf_request && port->id.phys_id =3D=3D id) || > + (vf_request && port->id.virt_id =3D=3D id)) > + return port; > + > + DLB_DOM_LIST_FOR(domain->avail_ldb_ports, port, iter) > + if ((!vf_request && port->id.phys_id =3D=3D id) || > + (vf_request && port->id.virt_id =3D=3D id)) > + return port; > + > + return NULL; > +} > + > +static struct dlb_dir_pq_pair *dlb_get_dir_pq_from_id(struct dlb_hw *hw, > + u32 id, > + bool vf_request, > + unsigned int vf_id) > +{ > + struct dlb_list_entry *iter1 __attribute__((unused)); > + struct dlb_list_entry *iter2 __attribute__((unused)); > + struct dlb_function_resources *rsrcs; > + struct dlb_dir_pq_pair *port; > + struct dlb_domain *domain; > + > + if (id >=3D DLB_MAX_NUM_DIR_PORTS) > + return NULL; > + > + rsrcs =3D (vf_request) ? 
&hw->vf[vf_id] : &hw->pf; > + > + if (!vf_request) > + return &hw->rsrcs.dir_pq_pairs[id]; > + > + DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) { > + DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter2) > + if (port->id.virt_id =3D=3D id) > + return port; > + } > + > + DLB_FUNC_LIST_FOR(rsrcs->avail_dir_pq_pairs, port, iter1) > + if (port->id.virt_id =3D=3D id) > + return port; > + > + return NULL; > +} > + > +static struct dlb_dir_pq_pair * > +dlb_get_domain_used_dir_pq(u32 id, > + bool vf_request, > + struct dlb_domain *domain) > +{ > + struct dlb_list_entry *iter __attribute__((unused)); > + struct dlb_dir_pq_pair *port; > + > + if (id >=3D DLB_MAX_NUM_DIR_PORTS) > + return NULL; > + > + DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) > + if ((!vf_request && port->id.phys_id =3D=3D id) || > + (vf_request && port->id.virt_id =3D=3D id)) > + return port; > + > + return NULL; > +} > + > +static struct dlb_dir_pq_pair *dlb_get_domain_dir_pq(u32 id, > + bool vf_request, > + struct dlb_domain > *domain) > +{ > + struct dlb_list_entry *iter __attribute__((unused)); > + struct dlb_dir_pq_pair *port; > + > + if (id >=3D DLB_MAX_NUM_DIR_PORTS) > + return NULL; > + > + DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) > + if ((!vf_request && port->id.phys_id =3D=3D id) || > + (vf_request && port->id.virt_id =3D=3D id)) > + return port; > + > + DLB_DOM_LIST_FOR(domain->avail_dir_pq_pairs, port, iter) > + if ((!vf_request && port->id.phys_id =3D=3D id) || > + (vf_request && port->id.virt_id =3D=3D id)) > + return port; > + > + return NULL; > +} > + > +static struct dlb_ldb_queue *dlb_get_ldb_queue_from_id(struct dlb_hw *hw= , > + u32 id, > + bool vf_request, > + unsigned int vf_id) > +{ > + struct dlb_list_entry *iter1 __attribute__((unused)); > + struct dlb_list_entry *iter2 __attribute__((unused)); > + struct dlb_function_resources *rsrcs; > + struct dlb_ldb_queue *queue; > + struct dlb_domain *domain; > + > + if (id >=3D DLB_MAX_NUM_LDB_QUEUES) > + return NULL; > + > + rsrcs =3D (vf_request) ? 
> +
> +	if (!vf_request)
> +		return &hw->rsrcs.ldb_queues[id];
> +
> +	DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
> +		DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
> +			if (queue->id.virt_id == id)
> +				return queue;
> +	}
> +
> +	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1)
> +		if (queue->id.virt_id == id)
> +			return queue;
> +
> +	return NULL;
> +}
> +
> +static struct dlb_ldb_queue *dlb_get_domain_ldb_queue(u32 id,
> +						      bool vf_request,
> +						      struct dlb_domain *domain)
> +{
> +	struct dlb_list_entry *iter __attribute__((unused));
> +	struct dlb_ldb_queue *queue;
> +
> +	if (id >= DLB_MAX_NUM_LDB_QUEUES)
> +		return NULL;
> +
> +	DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
> +		if ((!vf_request && queue->id.phys_id == id) ||
> +		    (vf_request && queue->id.virt_id == id))
> +			return queue;
> +
> +	return NULL;
> +}
> +
> +#define DLB_XFER_LL_RSRC(dst, src, num, type_t, name) ({ \
> +	struct dlb_list_entry *it1 __attribute__((unused)); \
> +	struct dlb_list_entry *it2 __attribute__((unused)); \
> +	struct dlb_function_resources *_src = src; \
> +	struct dlb_function_resources *_dst = dst; \
> +	type_t *ptr, *tmp __attribute__((unused)); \
> +	unsigned int i = 0; \
> + \
> +	DLB_FUNC_LIST_FOR_SAFE(_src->avail_##name##s, ptr, tmp, it1, it2) { \
> +		if (i++ == (num)) \
> +			break; \
> + \
> +		dlb_list_del(&_src->avail_##name##s, &ptr->func_list); \
> +		dlb_list_add(&_dst->avail_##name##s, &ptr->func_list); \
> +		_src->num_avail_##name##s--; \
> +		_dst->num_avail_##name##s++; \
> +	} \
> +})
> +
> +#define DLB_VF_ID_CLEAR(head, type_t) ({ \
> +	struct dlb_list_entry *iter __attribute__((unused)); \
> +	type_t *var; \
> + \
> +	DLB_FUNC_LIST_FOR(head, var, iter) \
> +		var->id.vf_owned = false; \
> +})
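[Editor's note] For readers of this hunk: the first argument of
DLB_XFER_LL_RSRC is the destination, so DLB_XFER_LL_RSRC(src, dst, orig, ...)
in the update helpers below moves entries from the VF back to the PF.
Roughly, a use such as DLB_XFER_LL_RSRC(dst, src, num, struct dlb_domain,
domain) expands to the following (same identifiers as the macro, shown here
only as an expansion sketch):

{
	struct dlb_list_entry *it1, *it2;
	struct dlb_function_resources *_src = src;
	struct dlb_function_resources *_dst = dst;
	struct dlb_domain *ptr, *tmp;
	unsigned int i = 0;

	/* Move up to 'num' entries from src's avail list to dst's,
	 * keeping both num_avail counters in sync.
	 */
	DLB_FUNC_LIST_FOR_SAFE(_src->avail_domains, ptr, tmp, it1, it2) {
		if (i++ == num)
			break;

		dlb_list_del(&_src->avail_domains, &ptr->func_list);
		dlb_list_add(&_dst->avail_domains, &ptr->func_list);
		_src->num_avail_domains--;
		_dst->num_avail_domains++;
	}
}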
> +
> +int dlb_update_vf_sched_domains(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> +	struct dlb_list_entry *iter __attribute__((unused));
> +	struct dlb_function_resources *src, *dst;
> +	struct dlb_domain *domain;
> +	unsigned int orig;
> +	int ret;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	orig = dst->num_avail_domains;
> +
> +	/* Detach the destination VF's current resources before checking if
> +	 * enough are available, and set their IDs accordingly.
> +	 */
> +	DLB_VF_ID_CLEAR(dst->avail_domains, struct dlb_domain);
> +
> +	DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_domain, domain);
> +
> +	/* Are there enough available resources to satisfy the request? */
> +	if (num > src->num_avail_domains) {
> +		num = orig;
> +		ret = -EINVAL;
> +	} else {
> +		ret = 0;
> +	}
> +
> +	DLB_XFER_LL_RSRC(dst, src, num, struct dlb_domain, domain);
> +
> +	/* Set the domains' VF backpointer */
> +	DLB_FUNC_LIST_FOR(dst->avail_domains, domain, iter)
> +		domain->parent_func = dst;
> +
> +	return ret;
> +}
> +
> +int dlb_update_vf_ldb_queues(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +	unsigned int orig;
> +	int ret;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	orig = dst->num_avail_ldb_queues;
> +
> +	/* Detach the destination VF's current resources before checking if
> +	 * enough are available, and set their IDs accordingly.
> +	 */
> +	DLB_VF_ID_CLEAR(dst->avail_ldb_queues, struct dlb_ldb_queue);
> +
> +	DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_ldb_queue, ldb_queue);
> +
> +	/* Are there enough available resources to satisfy the request? */
> +	if (num > src->num_avail_ldb_queues) {
> +		num = orig;
> +		ret = -EINVAL;
> +	} else {
> +		ret = 0;
> +	}
> +
> +	DLB_XFER_LL_RSRC(dst, src, num, struct dlb_ldb_queue, ldb_queue);
> +
> +	return ret;
> +}
> +
> +int dlb_update_vf_ldb_ports(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +	unsigned int orig;
> +	int ret;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	orig = dst->num_avail_ldb_ports;
> +
> +	/* Detach the destination VF's current resources before checking if
> +	 * enough are available, and set their IDs accordingly.
> +	 */
> +	DLB_VF_ID_CLEAR(dst->avail_ldb_ports, struct dlb_ldb_port);
> +
> +	DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_ldb_port, ldb_port);
> +
> +	/* Are there enough available resources to satisfy the request? */
> +	if (num > src->num_avail_ldb_ports) {
> +		num = orig;
> +		ret = -EINVAL;
> +	} else {
> +		ret = 0;
> +	}
> +
> +	DLB_XFER_LL_RSRC(dst, src, num, struct dlb_ldb_port, ldb_port);
> +
> +	return ret;
> +}
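[Editor's note] All of these update helpers follow the same pattern: the
VF's currently unassigned resources are first handed back to the PF, and only
then is 'num' checked against the PF pool, so the request is effectively
compared against PF-free plus VF-unassigned. For example, if the PF has 3
free domains and VF 0 already holds 2 unassigned ones, any request up to 5
succeeds; a request for 6 fails with -EINVAL and the VF keeps its original 2.
A hypothetical provisioning sequence (the wrapper name and the counts are
illustrative only, not taken from the patch):

static int example_provision_vf0(struct dlb_hw *hw)
{
	int ret;

	/* Rebalance PF -> VF 0 before the VF is locked */
	ret = dlb_update_vf_sched_domains(hw, 0, 2);
	if (ret)
		return ret;

	ret = dlb_update_vf_ldb_queues(hw, 0, 8);
	if (ret)
		return ret;

	return dlb_update_vf_ldb_ports(hw, 0, 8);
}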
> +
> +int dlb_update_vf_dir_ports(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +	unsigned int orig;
> +	int ret;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	orig = dst->num_avail_dir_pq_pairs;
> +
> +	/* Detach the destination VF's current resources before checking if
> +	 * enough are available, and set their IDs accordingly.
> +	 */
> +	DLB_VF_ID_CLEAR(dst->avail_dir_pq_pairs, struct dlb_dir_pq_pair);
> +
> +	DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_dir_pq_pair, dir_pq_pair);
> +
> +	/* Are there enough available resources to satisfy the request? */
> +	if (num > src->num_avail_dir_pq_pairs) {
> +		num = orig;
> +		ret = -EINVAL;
> +	} else {
> +		ret = 0;
> +	}
> +
> +	DLB_XFER_LL_RSRC(dst, src, num, struct dlb_dir_pq_pair, dir_pq_pair);
> +
> +	return ret;
> +}
> +
> +int dlb_update_vf_ldb_credit_pools(struct dlb_hw *hw,
> +				   u32 vf_id,
> +				   u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +	unsigned int orig;
> +	int ret;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	orig = dst->num_avail_ldb_credit_pools;
> +
> +	/* Detach the destination VF's current resources before checking if
> +	 * enough are available, and set their IDs accordingly.
> +	 */
> +	DLB_VF_ID_CLEAR(dst->avail_ldb_credit_pools, struct dlb_credit_pool);
> +
> +	DLB_XFER_LL_RSRC(src,
> +			 dst,
> +			 orig,
> +			 struct dlb_credit_pool,
> +			 ldb_credit_pool);
> +
> +	/* Are there enough available resources to satisfy the request? */
> +	if (num > src->num_avail_ldb_credit_pools) {
> +		num = orig;
> +		ret = -EINVAL;
> +	} else {
> +		ret = 0;
> +	}
> +
> +	DLB_XFER_LL_RSRC(dst,
> +			 src,
> +			 num,
> +			 struct dlb_credit_pool,
> +			 ldb_credit_pool);
> +
> +	return ret;
> +}
> +
> +int dlb_update_vf_dir_credit_pools(struct dlb_hw *hw,
> +				   u32 vf_id,
> +				   u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +	unsigned int orig;
> +	int ret;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	orig = dst->num_avail_dir_credit_pools;
> +
> +	/* Detach the VF's current resources before checking if enough are
> +	 * available, and set their IDs accordingly.
> +	 */
> +	DLB_VF_ID_CLEAR(dst->avail_dir_credit_pools, struct dlb_credit_pool);
> +
> +	DLB_XFER_LL_RSRC(src,
> +			 dst,
> +			 orig,
> +			 struct dlb_credit_pool,
> +			 dir_credit_pool);
> +
> +	/* Are there enough available resources to satisfy the request? */
> +	if (num > src->num_avail_dir_credit_pools) {
> +		num = orig;
> +		ret = -EINVAL;
> +	} else {
> +		ret = 0;
> +	}
> +
> +	DLB_XFER_LL_RSRC(dst,
> +			 src,
> +			 num,
> +			 struct dlb_credit_pool,
> +			 dir_credit_pool);
> +
> +	return ret;
> +}
> +
> +static int dlb_transfer_bitmap_resources(struct dlb_bitmap *src,
> +					 struct dlb_bitmap *dst,
> +					 u32 num)
> +{
> +	int orig, ret, base;
> +
> +	/* Validate bitmaps before use */
> +	if (dlb_bitmap_count(dst) < 0 || dlb_bitmap_count(src) < 0)
> +		return -EINVAL;
> +
> +	/* Reassign the dest's bitmap entries to the source's before checking
> +	 * if a contiguous chunk of size 'num' is available. The reassignment
> +	 * may be necessary to create a sufficiently large contiguous chunk.
> +	 */
> +	orig = dlb_bitmap_count(dst);
> +
> +	dlb_bitmap_or(src, src, dst);
> +
> +	dlb_bitmap_zero(dst);
> +
> +	/* Are there enough available resources to satisfy the request? */
> +	base = dlb_bitmap_find_set_bit_range(src, num);
> +
> +	if (base == -ENOENT) {
> +		num = orig;
> +		base = dlb_bitmap_find_set_bit_range(src, num);
> +		ret = -EINVAL;
> +	} else {
> +		ret = 0;
> +	}
> +
> +	dlb_bitmap_set_range(dst, base, num);
> +
> +	dlb_bitmap_clear_range(src, base, num);
> +
> +	return ret;
> +}
> +
> +int dlb_update_vf_ldb_credits(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	return dlb_transfer_bitmap_resources(src->avail_qed_freelist_entries,
> +					     dst->avail_qed_freelist_entries,
> +					     num);
> +}
> +
> +int dlb_update_vf_dir_credits(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	return dlb_transfer_bitmap_resources(src->avail_dqed_freelist_entries,
> +					     dst->avail_dqed_freelist_entries,
> +					     num);
> +}
> +
> +int dlb_update_vf_hist_list_entries(struct dlb_hw *hw,
> +				    u32 vf_id,
> +				    u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	return dlb_transfer_bitmap_resources(src->avail_hist_list_entries,
> +					     dst->avail_hist_list_entries,
> +					     num);
> +}
> +
> +int dlb_update_vf_atomic_inflights(struct dlb_hw *hw,
> +				   u32 vf_id,
> +				   u32 num)
> +{
> +	struct dlb_function_resources *src, *dst;
> +
> +	if (vf_id >= DLB_MAX_NUM_VFS)
> +		return -EINVAL;
> +
> +	src = &hw->pf;
> +	dst = &hw->vf[vf_id];
> +
> +	/* If the VF is locked, its resource assignment can't be changed */
> +	if (dlb_vf_is_locked(hw, vf_id))
> +		return -EPERM;
> +
> +	return dlb_transfer_bitmap_resources(src->avail_aqed_freelist_entries,
> +					     dst->avail_aqed_freelist_entries,
> +					     num);
> +}
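[Editor's note] The bitmap-based transfers above require a single contiguous
run of free entries, which is why the VF's bits are OR'd back into the PF map
first. A worked example may help (numbers are illustrative only):

/* 16-entry freelists, 1 = free.
 *
 *   PF (src):  1111 1111 0000 0000   (8 free, bits 0-7)
 *   VF (dst):  0000 0000 1111 0000   (4 free, bits 8-11)
 *
 * A request for num = 12 first ORs the VF's bits back into the PF map,
 * giving 1111 1111 1111 0000, then searches for a contiguous run of 12
 * set bits.  Bits 0-11 qualify, so the VF ends up owning [0, 12) and that
 * range is cleared in the PF map.  If no contiguous run of 'num' bits
 * exists, the VF is handed back a run of its original size and the call
 * returns -EINVAL.
 */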
> +
> +static int dlb_attach_ldb_queues(struct dlb_hw *hw,
> +				 struct dlb_function_resources *rsrcs,
> +				 struct dlb_domain *domain,
> +				 u32 num_queues,
> +				 struct dlb_cmd_response *resp)
> +{
> +	unsigned int i, j;
> +
> +	if (rsrcs->num_avail_ldb_queues < num_queues) {
> +		resp->status = DLB_ST_LDB_QUEUES_UNAVAILABLE;
> +		return -1;
> +	}
> +
> +	for (i = 0; i < num_queues; i++) {
> +		struct dlb_ldb_queue *queue;
> +
> +		queue = DLB_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
> +					   typeof(*queue));
> +		if (!queue) {
> +			DLB_HW_ERR(hw,
> +				   "[%s()] Internal error: domain validation failed\n",
> +				   __func__);
> +			goto cleanup;
> +		}
> +
> +		dlb_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
> +
> +		queue->domain_id = domain->id;
> +		queue->owned = true;
> +
> +		dlb_list_add(&domain->avail_ldb_queues, &queue->domain_list);
> +	}
> +
> +	rsrcs->num_avail_ldb_queues -= num_queues;
> +
> +	return 0;
> +
> +cleanup:
> +
> +	/* Return the assigned queues */
> +	for (j = 0; j < i; j++) {
> +		struct dlb_ldb_queue *queue;
> +
> +		queue = DLB_FUNC_LIST_HEAD(domain->avail_ldb_queues,
> +					   typeof(*queue));
> +		/* Unrecoverable internal error */
> +		if (!queue)
> +			break;
> +
> +		queue->owned = false;
> +
> +		dlb_list_del(&domain->avail_ldb_queues, &queue->domain_list);
> +
> +		dlb_list_add(&rsrcs->avail_ldb_queues, &queue->func_list);
> +	}
> +
> +	return -EFAULT;
> +}
> +
> +static struct dlb_ldb_port *
> +dlb_get_next_ldb_port(struct dlb_hw *hw,
> +		      struct dlb_function_resources *rsrcs,
> +		      u32 domain_id)
> +{
> +	struct dlb_list_entry *iter __attribute__((unused));
> +	struct dlb_ldb_port *port;
> +
> +	/* To reduce the odds of consecutive load-balanced ports mapping to
> +	 * the same queue(s), the driver attempts to allocate ports whose
> +	 * neighbors are owned by a different domain.
> +	 */
> +	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
> +		u32 next, prev;
> +		u32 phys_id;
> +
> +		phys_id = port->id.phys_id;
> +		next = phys_id + 1;
> +		prev = phys_id - 1;
> +
> +		if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
> +			next = 0;
> +		if (phys_id == 0)
> +			prev = DLB_MAX_NUM_LDB_PORTS - 1;
> +
> +		if (!hw->rsrcs.ldb_ports[next].owned ||
> +		    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
> +			continue;
> +
> +		if (!hw->rsrcs.ldb_ports[prev].owned ||
> +		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
> +			continue;
> +
> +		return port;
> +	}
> +
> +	/* Failing that, the driver looks for a port with one neighbor owned
> +	 * by a different domain and the other unallocated.
> +	 */
> +	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
> +		u32 next, prev;
> +		u32 phys_id;
> +
> +		phys_id = port->id.phys_id;
> +		next = phys_id + 1;
> +		prev = phys_id - 1;
> +
> +		if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
> +			next = 0;
> +		if (phys_id == 0)
> +			prev = DLB_MAX_NUM_LDB_PORTS - 1;
> +
> +		if (!hw->rsrcs.ldb_ports[prev].owned &&
> +		    hw->rsrcs.ldb_ports[next].owned &&
> +		    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
> +			return port;
> +
> +		if (!hw->rsrcs.ldb_ports[next].owned &&
> +		    hw->rsrcs.ldb_ports[prev].owned &&
> +		    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
> +			return port;
> +	}
> +
> +	/* Failing that, the driver looks for a port with both neighbors
> +	 * unallocated.
> +	 */
> +	DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
> +		u32 next, prev;
> +		u32 phys_id;
> +
> +		phys_id = port->id.phys_id;
> +		next = phys_id + 1;
> +		prev = phys_id - 1;
> +
> +		if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
> +			next = 0;
> +		if (phys_id == 0)
> +			prev = DLB_MAX_NUM_LDB_PORTS - 1;
> +
> +		if (!hw->rsrcs.ldb_ports[prev].owned &&
> +		    !hw->rsrcs.ldb_ports[next].owned)
> +			return port;
> +	}
> +
> +	/* If all else fails, the driver returns the next available port. */
> +	return DLB_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports, typeof(*port));
> +}
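[Editor's note] The three selection passes above repeat the same wrap-around
neighbour computation. As an illustration only (this helper is not in the
patch), that part could be expressed once as:

/* Compute a port's ring neighbours with wrap-around, matching the
 * arithmetic used in the three passes of dlb_get_next_ldb_port().
 */
static inline void dlb_ldb_port_neighbors(u32 phys_id, u32 *prev, u32 *next)
{
	*next = (phys_id == DLB_MAX_NUM_LDB_PORTS - 1) ? 0 : phys_id + 1;
	*prev = (phys_id == 0) ? DLB_MAX_NUM_LDB_PORTS - 1 : phys_id - 1;
}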
> +
> +static int dlb_attach_ldb_ports(struct dlb_hw *hw,
> +				struct dlb_function_resources *rsrcs,
> +				struct dlb_domain *domain,
> +				u32 num_ports,
> +				struct dlb_cmd_response *resp)
> +{
> +	unsigned int i, j;
> +
> +	if (rsrcs->num_avail_ldb_ports < num_ports) {
> +		resp->status = DLB_ST_LDB_PORTS_UNAVAILABLE;
> +		return -1;
> +	}
> +
> +	for (i = 0; i < num_ports; i++) {
> +		struct dlb_ldb_port *port;
> +
> +		port = dlb_get_next_ldb_port(hw, rsrcs, domain->id.phys_id);
> +
> +		if (!port) {
> +			DLB_HW_ERR(hw,
> +				   "[%s()] Internal error: domain validation failed\n",
> +				   __func__);
> +			goto cleanup;
> +		}
> +
> +		dlb_list_del(&rsrcs->avail_ldb_ports, &port->func_list);
> +
> +		port->domain_id = domain->id;
> +		port->owned = true;
> +
> +		dlb_list_add(&domain->avail_ldb_ports, &port->domain_list);
> +	}
> +
> +	rsrcs->num_avail_ldb_ports -= num_ports;
> +
> +	return 0;
> +
> +cleanup:
> +
> +	/* Return the assigned ports */
> +	for (j = 0; j < i; j++) {
> +		struct dlb_ldb_port *port;
> +
> +		port = DLB_FUNC_LIST_HEAD(domain->avail_ldb_ports,
> +					  typeof(*port));
> +		/* Unrecoverable internal error */
> +		if (!port)
> +			break;
> +
> +		port->owned = false;
> +
> +		dlb_list_del(&domain->avail_ldb_ports, &port->domain_list);
> +
> +		dlb_list_add(&rsrcs->avail_ldb_ports, &port->func_list);
> +	}
> +
> +	return -EFAULT;
> +}
> +
> +static int dlb_attach_dir_ports(struct dlb_hw *hw,
> +				struct dlb_function_resources *rsrcs,
> +				struct dlb_domain *domain,
> +				u32 num_ports,
> +				struct dlb_cmd_response *resp)
> +{
> +	unsigned int i, j;
> +
> +	if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
> +		resp->status = DLB_ST_DIR_PORTS_UNAVAILABLE;
> +		return -1;
> +	}
> +
> +	for (i = 0; i < num_ports; i++) {
> +		struct dlb_dir_pq_pair *port;
> +
> +		port = DLB_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
> +					  typeof(*port));
> +		if (!port) {
> +			DLB_HW_ERR(hw,
> +				   "[%s()] Internal error: domain validation failed\n",
> +				   __func__);
> +			goto cleanup;
> +		}
> +
> +		dlb_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
> +
> +		port->domain_id = domain->id;
> +		port->owned = true;
> +
> +		dlb_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
> +	}
> +
> +	rsrcs->num_avail_dir_pq_pairs -= num_ports;
> +
> +	return 0;
> +
> +cleanup:
> +
> +	/* Return the assigned ports */
> +	for (j = 0; j < i; j++) {
> +		struct dlb_dir_pq_pair *port;
> +
> +		port = DLB_FUNC_LIST_HEAD(domain->avail_dir_pq_pairs,
> +					  typeof(*port));
> +		/* Unrecoverable internal error */
> +		if (!port)
> +			break;
> +
> +		port->owned = false;
> +
> +		dlb_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
> +
> +		dlb_list_add(&rsrcs->avail_dir_pq_pairs, &port->func_list);
> +	}
> +
> +	return -EFAULT;
> +}
> +
> +static int dlb_attach_ldb_credits(struct dlb_function_resources *rsrcs,
> +				  struct dlb_domain *domain,
> +				  u32 num_credits,
> +				  struct dlb_cmd_response *resp)
> +{
> +	struct dlb_bitmap *bitmap = rsrcs->avail_qed_freelist_entries;
> +
> +	if (dlb_bitmap_count(bitmap) < (int)num_credits) {
> +		resp->status = DLB_ST_LDB_CREDITS_UNAVAILABLE;
> +		return -1;
> +	}
> +
> +	if (num_credits) {
> +		int base;
> +
> +		base = dlb_bitmap_find_set_bit_range(bitmap, num_credits);
> +		if (base < 0)
> +			goto error;
> +
> +		domain->qed_freelist.base = base;
> +		domain->qed_freelist.bound = base + num_credits;
> +		domain->qed_freelist.offset = 0;
> +
> +		dlb_bitmap_clear_range(bitmap, base, num_credits);
> +	}
> +
> +	return 0;
> +
> +error:
> +	resp->status = DLB_ST_QED_FREELIST_ENTRIES_UNAVAILABLE;
> +	return -1;
> +}
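[Editor's note] Presumably the attach helpers above are invoked back to back
from the domain-creation path; the patch does not show that caller in this
hunk, so the wrapper below is hypothetical (its name, the resource counts,
and the call order are illustrative only). It just shows how the
resp->status / non-zero-return convention composes:

/* Hypothetical caller, for illustration only: attach a domain's resources
 * and surface the first failure through 'resp'.
 */
static int example_attach_domain_resources(struct dlb_hw *hw,
					   struct dlb_function_resources *rsrcs,
					   struct dlb_domain *domain,
					   struct dlb_cmd_response *resp)
{
	if (dlb_attach_ldb_queues(hw, rsrcs, domain, 4, resp))
		return -EINVAL;

	if (dlb_attach_ldb_ports(hw, rsrcs, domain, 4, resp))
		return -EINVAL;

	if (dlb_attach_dir_ports(hw, rsrcs, domain, 2, resp))
		return -EINVAL;

	/* 256 LDB credits carved as one contiguous QED freelist range */
	return dlb_attach_ldb_credits(rsrcs, domain, 256, resp) ? -EINVAL : 0;
}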