From: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
To: "McDaniel, Timothy" <timothy.mcdaniel@intel.com>
Cc: "mattias.ronnblom@ericsson.com" <mattias.ronnblom@ericsson.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"gage.eads@intel.com" <gage.eads@intel.com>,
"harry.van.haaren@intel.com" <harry.van.haaren@intel.com>,
Thomas Monjalon <thomas@monjalon.net>
Subject: Re: [dpdk-dev] [EXT] [PATCH 03/27] event/dlb: add shared code version 10.7.9
Date: Tue, 11 Aug 2020 18:22:20 +0000
Message-ID: <BYAPR18MB24240D86105D7F05C700022CC8450@BYAPR18MB2424.namprd18.prod.outlook.com>
In-Reply-To: <1596138614-17409-4-git-send-email-timothy.mcdaniel@intel.com>
> -----Original Message-----
> From: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> Sent: Friday, July 31, 2020 1:20 AM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> Cc: mattias.ronnblom@ericsson.com; dev@dpdk.org; gage.eads@intel.com;
> harry.van.haaren@intel.com; McDaniel, Timothy
> <timothy.mcdaniel@intel.com>
> Subject: [EXT] [PATCH 03/27] event/dlb: add shared code version 10.7.9
What is shared code?
>
> From: "McDaniel, Timothy" <timothy.mcdaniel@intel.com>
>
> The DLB shared code is auto generated by Intel, and is being committed
> here so that it can be built in the DPDK environment. The shared code
> should not be modified. The shared code must be present in order to
> successfully build the DLB PMD.
Please sanitize the git commit log. Things like "Intel" and "auto generated"
are irrelevant in this git commit message context.
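For example, something like the below (illustrative wording only) would be
enough:

    event/dlb: add hardware resource management code

    Add the base resource management code used by the DLB PMD to
    configure the device's queues, ports and credit pools. The PMD
    cannot be built without these files.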
>
> Changes since v1 patch series
> 1) convert C99 comment to standard C
> 2) remove TODO and FIXME comments
> 3) converted to use same log i/f as PMD
> 4) disable PF->VF ISR pending access alarm
> 5) disable VF->PF ISR pending access alarm
Move the version history to below the '---' line [1]
>
> Signed-off-by: McDaniel, Timothy <timothy.mcdaniel@intel.com>
> ---
[1] Here
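i.e. everything below the '---' marker is discarded by git-am, so the
changelog belongs here, e.g.:

    Signed-off-by: ...
    ---
    Changes since v1:
    1) convert C99 comment to standard C
    ...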
> drivers/event/dlb/pf/base/dlb_hw_types.h | 360 +
> drivers/event/dlb/pf/base/dlb_mbox.h | 645 ++
> drivers/event/dlb/pf/base/dlb_osdep.h | 347 +
> drivers/event/dlb/pf/base/dlb_osdep_bitmap.h | 442 ++
> drivers/event/dlb/pf/base/dlb_osdep_list.h | 131 +
> drivers/event/dlb/pf/base/dlb_osdep_types.h | 31 +
> drivers/event/dlb/pf/base/dlb_regs.h | 2678 +++++++
> drivers/event/dlb/pf/base/dlb_resource.c | 9722 ++++++++++++++++++++++++++
I don't like auto-generated code being part of DPDK.
At least split this into proper logical patches; please don't dump library
code from somewhere else as one single patch.
> drivers/event/dlb/pf/base/dlb_resource.h | 1639 +++++
> drivers/event/dlb/pf/base/dlb_user.h | 1084 +++
> 10 files changed, 17079 insertions(+)
> create mode 100644 drivers/event/dlb/pf/base/dlb_hw_types.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_mbox.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_bitmap.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_list.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_osdep_types.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_regs.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c
> create mode 100644 drivers/event/dlb/pf/base/dlb_resource.h
> create mode 100644 drivers/event/dlb/pf/base/dlb_user.h
>
> diff --git a/drivers/event/dlb/pf/base/dlb_hw_types.h b/drivers/event/dlb/pf/base/dlb_hw_types.h
> new file mode 100644
> index 0000000..d56590e
> --- /dev/null
> +++ b/drivers/event/dlb/pf/base/dlb_hw_types.h
> @@ -0,0 +1,360 @@
> +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
The GPL-2.0 license needs an exception approval in DPDK; see the note below.
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
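For reference, DPDK code is BSD-3-Clause by default, and a GPL-2.0 tag
requires an exception approved under license/exceptions.txt; without one
the header should look like:

    /* SPDX-License-Identifier: BSD-3-Clause
     * Copyright(c) 2016-2020 Intel Corporation
     */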
> +
> +#ifndef __DLB_HW_TYPES_H
> +#define __DLB_HW_TYPES_H
> +
> [...]
> +/**
> + * os_curtime_s() - get the current time (in seconds)
> + * @usecs: delay duration.
> + */
> +static inline unsigned long os_curtime_s(void)
> +{
> + struct timespec tv;
> +
> + clock_gettime(CLOCK_MONOTONIC, &tv);
Please use DPDK primitives for time (see the sketch after this function).
> +
> + return (unsigned long)tv.tv_sec;
> +}
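Something like the below, based on rte_cycles.h, keeps this inside DPDK
(untested sketch; assumes TSC-derived seconds are acceptable here):

    #include <rte_cycles.h>

    static inline unsigned long os_curtime_s(void)
    {
    	/* DPDK timer primitives replace the raw clock_gettime() call */
    	return (unsigned long)(rte_get_timer_cycles() / rte_get_timer_hz());
    }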
> +
> +static inline void os_schedule_work(struct dlb_hw *hw)
> +{
> + struct dlb_dev *dlb_dev;
> + pthread_t complete_queue_map_unmap_thread;
> + int ret;
> +
> + dlb_dev = container_of(hw, struct dlb_dev, hw);
> +
> + ret = pthread_create(&complete_queue_map_unmap_thread,
> + NULL,
> + dlb_complete_queue_map_unmap,
> + dlb_dev);
Please use DPDK primitives for thread creation (see the sketch after this
function).
> + if (ret)
> + DLB_ERR(dlb_dev,
> + "Could not create queue complete map /unmap thread,
> err=%d\n",
> + ret);
> + else
> + dlb_dev->worker_launched = true;
> +}
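rte_ctrl_thread_create() is the DPDK primitive for spawning a service
thread; a sketch (the "dlb_map_unmap" thread name is just an example):

    #include <rte_lcore.h>

    	/* drop-in replacement for the pthread_create() call above */
    	ret = rte_ctrl_thread_create(&complete_queue_map_unmap_thread,
    				     "dlb_map_unmap", NULL,
    				     dlb_complete_queue_map_unmap,
    				     dlb_dev);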
> +static inline int os_notify_user_space(struct dlb_hw *hw,
> + u32 domain_id,
> + u64 alert_id,
> + u64 aux_alert_data)
> +{
> + RTE_SET_USED(hw);
> + RTE_SET_USED(domain_id);
> + RTE_SET_USED(alert_id);
> + RTE_SET_USED(aux_alert_data);
> +
> + rte_panic("internal_error: %s should never be called for DLB PF
> PMD\n",
> + __func__);
Don't use rte_panic() in a library; log and return an error instead (see
the sketch after this function).
> + return -1;
> +}
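A library should log and propagate an error instead of aborting the
process; a minimal sketch (assuming the DLB_HW_ERR() macro from
dlb_osdep.h):

    	/* report the internal error to the caller, don't kill the app */
    	DLB_HW_ERR(hw, "[%s()] not supported for the DLB PF PMD\n",
    		   __func__);
    	return -EOPNOTSUPP;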
> +
> +static inline void dlb_bitmap_free(struct dlb_bitmap *bitmap)
> +{
> + if (!bitmap)
> + rte_panic("NULL dlb_bitmap in %s\n", __func__);
Another instance of rte_panic().
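A free()-style function should treat NULL as a no-op instead; a minimal
sketch (the body elision is mine):

    static inline void dlb_bitmap_free(struct dlb_bitmap *bitmap)
    {
    	if (bitmap == NULL)
    		return; /* free(NULL) semantics: nothing to do */
    	/* ... rest of the teardown unchanged ... */
    }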
I have not reviewed the remaining lines.
I would request someone from Intel to do a first-level review of the driver
patches.
> + u32 count : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_LSP_LDB_SCH_CNT_L 0x28200000
> +#define DLB_LSP_LDB_SCH_CNT_L_RST 0x0
> +union dlb_lsp_ldb_sch_cnt_l {
> + struct {
> + u32 count : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_DP_DIR_CSR_CTRL 0x38000018
> +#define DLB_DP_DIR_CSR_CTRL_RST 0xc0000000
> +union dlb_dp_dir_csr_ctrl {
> + struct {
> + u32 cfg_int_dis : 1;
> + u32 cfg_int_dis_sbe : 1;
> + u32 cfg_int_dis_mbe : 1;
> + u32 spare0 : 27;
> + u32 cfg_vasr_dis : 1;
> + u32 cfg_int_dis_synd : 1;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_1 0x38000014
> +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_1_RST 0xfffefdfc
> +union dlb_dp_cfg_ctrl_arb_weights_tqpri_dir_1 {
> + struct {
> + u32 pri4 : 8;
> + u32 pri5 : 8;
> + u32 pri6 : 8;
> + u32 pri7 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_0 0x38000010
> +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_DIR_0_RST 0xfbfaf9f8
> +union dlb_dp_cfg_ctrl_arb_weights_tqpri_dir_0 {
> + struct {
> + u32 pri0 : 8;
> + u32 pri1 : 8;
> + u32 pri2 : 8;
> + u32 pri3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1 0x3800000c
> +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0xfffefdfc
> +union dlb_dp_cfg_ctrl_arb_weights_tqpri_replay_1 {
> + struct {
> + u32 pri4 : 8;
> + u32 pri5 : 8;
> + u32 pri6 : 8;
> + u32 pri7 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0 0x38000008
> +#define DLB_DP_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfbfaf9f8
> +union dlb_dp_cfg_ctrl_arb_weights_tqpri_replay_0 {
> + struct {
> + u32 pri0 : 8;
> + u32 pri1 : 8;
> + u32 pri2 : 8;
> + u32 pri3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_1 0x6800001c
> +#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_1_RST 0xfffefdfc
> +union dlb_nalb_pipe_ctrl_arb_weights_tqpri_nalb_1 {
> + struct {
> + u32 pri4 : 8;
> + u32 pri5 : 8;
> + u32 pri6 : 8;
> + u32 pri7 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_0 0x68000018
> +#define DLB_NALB_PIPE_CTRL_ARB_WEIGHTS_TQPRI_NALB_0_RST 0xfbfaf9f8
> +union dlb_nalb_pipe_ctrl_arb_weights_tqpri_nalb_0 {
> + struct {
> + u32 pri0 : 8;
> + u32 pri1 : 8;
> + u32 pri2 : 8;
> + u32 pri3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_1 0x68000014
> +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_1_RST 0xfffefdfc
> +union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_atq_1 {
> + struct {
> + u32 pri4 : 8;
> + u32 pri5 : 8;
> + u32 pri6 : 8;
> + u32 pri7 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_0 0x68000010
> +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATQ_0_RST 0xfbfaf9f8
> +union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_atq_0 {
> + struct {
> + u32 pri0 : 8;
> + u32 pri1 : 8;
> + u32 pri2 : 8;
> + u32 pri3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1 0x6800000c
> +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_1_RST 0xfffefdfc
> +union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_replay_1 {
> + struct {
> + u32 pri4 : 8;
> + u32 pri5 : 8;
> + u32 pri6 : 8;
> + u32 pri7 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0 0x68000008
> +#define DLB_NALB_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_REPLAY_0_RST 0xfbfaf9f8
> +union dlb_nalb_pipe_cfg_ctrl_arb_weights_tqpri_replay_0 {
> + struct {
> + u32 pri0 : 8;
> + u32 pri1 : 8;
> + u32 pri2 : 8;
> + u32 pri3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_ATM_PIPE_QID_LDB_QID2CQIDX(x, y) \
> + (0x70000000 + (x) * 0x1000 + (y) * 0x4)
> +#define DLB_ATM_PIPE_QID_LDB_QID2CQIDX_RST 0x0
> +union dlb_atm_pipe_qid_ldb_qid2cqidx {
> + struct {
> + u32 cq_p0 : 8;
> + u32 cq_p1 : 8;
> + u32 cq_p2 : 8;
> + u32 cq_p3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_ATM_PIPE_CFG_CTRL_ARB_WEIGHTS_SCHED_BIN 0x7800000c
> +#define DLB_ATM_PIPE_CFG_CTRL_ARB_WEIGHTS_SCHED_BIN_RST 0xfffefdfc
> +union dlb_atm_pipe_cfg_ctrl_arb_weights_sched_bin {
> + struct {
> + u32 bin0 : 8;
> + u32 bin1 : 8;
> + u32 bin2 : 8;
> + u32 bin3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_ATM_PIPE_CTRL_ARB_WEIGHTS_RDY_BIN 0x78000008
> +#define DLB_ATM_PIPE_CTRL_ARB_WEIGHTS_RDY_BIN_RST 0xfffefdfc
> +union dlb_atm_pipe_ctrl_arb_weights_rdy_bin {
> + struct {
> + u32 bin0 : 8;
> + u32 bin1 : 8;
> + u32 bin2 : 8;
> + u32 bin3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_AQED_PIPE_QID_FID_LIM(x) \
> + (0x80000014 + (x) * 0x1000)
> +#define DLB_AQED_PIPE_QID_FID_LIM_RST 0x7ff
> +union dlb_aqed_pipe_qid_fid_lim {
> + struct {
> + u32 qid_fid_limit : 13;
> + u32 rsvd0 : 19;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_AQED_PIPE_FL_POP_PTR(x) \
> + (0x80000010 + (x) * 0x1000)
> +#define DLB_AQED_PIPE_FL_POP_PTR_RST 0x0
> +union dlb_aqed_pipe_fl_pop_ptr {
> + struct {
> + u32 pop_ptr : 11;
> + u32 generation : 1;
> + u32 rsvd0 : 20;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_AQED_PIPE_FL_PUSH_PTR(x) \
> + (0x8000000c + (x) * 0x1000)
> +#define DLB_AQED_PIPE_FL_PUSH_PTR_RST 0x0
> +union dlb_aqed_pipe_fl_push_ptr {
> + struct {
> + u32 push_ptr : 11;
> + u32 generation : 1;
> + u32 rsvd0 : 20;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_AQED_PIPE_FL_BASE(x) \
> + (0x80000008 + (x) * 0x1000)
> +#define DLB_AQED_PIPE_FL_BASE_RST 0x0
> +union dlb_aqed_pipe_fl_base {
> + struct {
> + u32 base : 11;
> + u32 rsvd0 : 21;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_AQED_PIPE_FL_LIM(x) \
> + (0x80000004 + (x) * 0x1000)
> +#define DLB_AQED_PIPE_FL_LIM_RST 0x800
> +union dlb_aqed_pipe_fl_lim {
> + struct {
> + u32 limit : 11;
> + u32 freelist_disable : 1;
> + u32 rsvd0 : 20;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_AQED_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATM_0 0x88000008
> +#define DLB_AQED_PIPE_CFG_CTRL_ARB_WEIGHTS_TQPRI_ATM_0_RST 0xfffe
> +union dlb_aqed_pipe_cfg_ctrl_arb_weights_tqpri_atm_0 {
> + struct {
> + u32 pri0 : 8;
> + u32 pri1 : 8;
> + u32 pri2 : 8;
> + u32 pri3 : 8;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_RO_PIPE_QID2GRPSLT(x) \
> + (0x90000000 + (x) * 0x1000)
> +#define DLB_RO_PIPE_QID2GRPSLT_RST 0x0
> +union dlb_ro_pipe_qid2grpslt {
> + struct {
> + u32 slot : 5;
> + u32 rsvd1 : 3;
> + u32 group : 2;
> + u32 rsvd0 : 22;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_RO_PIPE_GRP_SN_MODE 0x98000008
> +#define DLB_RO_PIPE_GRP_SN_MODE_RST 0x0
> +union dlb_ro_pipe_grp_sn_mode {
> + struct {
> + u32 sn_mode_0 : 3;
> + u32 reserved0 : 5;
> + u32 sn_mode_1 : 3;
> + u32 reserved1 : 5;
> + u32 sn_mode_2 : 3;
> + u32 reserved2 : 5;
> + u32 sn_mode_3 : 3;
> + u32 reserved3 : 5;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_CFG_DIR_PP_SW_ALARM_EN(x) \
> + (0xa000003c + (x) * 0x1000)
> +#define DLB_CHP_CFG_DIR_PP_SW_ALARM_EN_RST 0x1
> +union dlb_chp_cfg_dir_pp_sw_alarm_en {
> + struct {
> + u32 alarm_enable : 1;
> + u32 rsvd0 : 31;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_WD_ENB(x) \
> + (0xa0000038 + (x) * 0x1000)
> +#define DLB_CHP_DIR_CQ_WD_ENB_RST 0x0
> +union dlb_chp_dir_cq_wd_enb {
> + struct {
> + u32 wd_enable : 1;
> + u32 rsvd0 : 31;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_LDB_PP2POOL(x) \
> + (0xa0000034 + (x) * 0x1000)
> +#define DLB_CHP_DIR_LDB_PP2POOL_RST 0x0
> +union dlb_chp_dir_ldb_pp2pool {
> + struct {
> + u32 pool : 6;
> + u32 rsvd0 : 26;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_DIR_PP2POOL(x) \
> + (0xa0000030 + (x) * 0x1000)
> +#define DLB_CHP_DIR_DIR_PP2POOL_RST 0x0
> +union dlb_chp_dir_dir_pp2pool {
> + struct {
> + u32 pool : 6;
> + u32 rsvd0 : 26;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_LDB_CRD_CNT(x) \
> + (0xa000002c + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_LDB_CRD_CNT_RST 0x0
> +union dlb_chp_dir_pp_ldb_crd_cnt {
> + struct {
> + u32 count : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_DIR_CRD_CNT(x) \
> + (0xa0000028 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_DIR_CRD_CNT_RST 0x0
> +union dlb_chp_dir_pp_dir_crd_cnt {
> + struct {
> + u32 count : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_TMR_THRESHOLD(x) \
> + (0xa0000024 + (x) * 0x1000)
> +#define DLB_CHP_DIR_CQ_TMR_THRESHOLD_RST 0x0
> +union dlb_chp_dir_cq_tmr_threshold {
> + struct {
> + u32 timer_thrsh : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_INT_ENB(x) \
> + (0xa0000020 + (x) * 0x1000)
> +#define DLB_CHP_DIR_CQ_INT_ENB_RST 0x0
> +union dlb_chp_dir_cq_int_enb {
> + struct {
> + u32 en_tim : 1;
> + u32 en_depth : 1;
> + u32 rsvd0 : 30;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_INT_DEPTH_THRSH(x) \
> + (0xa000001c + (x) * 0x1000)
> +#define DLB_CHP_DIR_CQ_INT_DEPTH_THRSH_RST 0x0
> +union dlb_chp_dir_cq_int_depth_thrsh {
> + struct {
> + u32 depth_threshold : 12;
> + u32 rsvd0 : 20;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_TKN_DEPTH_SEL(x) \
> + (0xa0000018 + (x) * 0x1000)
> +#define DLB_CHP_DIR_CQ_TKN_DEPTH_SEL_RST 0x0
> +union dlb_chp_dir_cq_tkn_depth_sel {
> + struct {
> + u32 token_depth_select : 4;
> + u32 rsvd0 : 28;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT(x) \
> + (0xa0000014 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_LDB_MIN_CRD_QNT_RST 0x1
> +union dlb_chp_dir_pp_ldb_min_crd_qnt {
> + struct {
> + u32 quanta : 10;
> + u32 rsvd0 : 22;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT(x) \
> + (0xa0000010 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_DIR_MIN_CRD_QNT_RST 0x1
> +union dlb_chp_dir_pp_dir_min_crd_qnt {
> + struct {
> + u32 quanta : 10;
> + u32 rsvd0 : 22;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_LDB_CRD_LWM(x) \
> + (0xa000000c + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_LDB_CRD_LWM_RST 0x0
> +union dlb_chp_dir_pp_ldb_crd_lwm {
> + struct {
> + u32 lwm : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_LDB_CRD_HWM(x) \
> + (0xa0000008 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_LDB_CRD_HWM_RST 0x0
> +union dlb_chp_dir_pp_ldb_crd_hwm {
> + struct {
> + u32 hwm : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_DIR_CRD_LWM(x) \
> + (0xa0000004 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_DIR_CRD_LWM_RST 0x0
> +union dlb_chp_dir_pp_dir_crd_lwm {
> + struct {
> + u32 lwm : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_DIR_CRD_HWM(x) \
> + (0xa0000000 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_DIR_CRD_HWM_RST 0x0
> +union dlb_chp_dir_pp_dir_crd_hwm {
> + struct {
> + u32 hwm : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_CFG_LDB_PP_SW_ALARM_EN(x) \
> + (0xa0000148 + (x) * 0x1000)
> +#define DLB_CHP_CFG_LDB_PP_SW_ALARM_EN_RST 0x1
> +union dlb_chp_cfg_ldb_pp_sw_alarm_en {
> + struct {
> + u32 alarm_enable : 1;
> + u32 rsvd0 : 31;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_WD_ENB(x) \
> + (0xa0000144 + (x) * 0x1000)
> +#define DLB_CHP_LDB_CQ_WD_ENB_RST 0x0
> +union dlb_chp_ldb_cq_wd_enb {
> + struct {
> + u32 wd_enable : 1;
> + u32 rsvd0 : 31;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_SN_CHK_ENBL(x) \
> + (0xa0000140 + (x) * 0x1000)
> +#define DLB_CHP_SN_CHK_ENBL_RST 0x0
> +union dlb_chp_sn_chk_enbl {
> + struct {
> + u32 en : 1;
> + u32 rsvd0 : 31;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_HIST_LIST_BASE(x) \
> + (0xa000013c + (x) * 0x1000)
> +#define DLB_CHP_HIST_LIST_BASE_RST 0x0
> +union dlb_chp_hist_list_base {
> + struct {
> + u32 base : 13;
> + u32 rsvd0 : 19;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_HIST_LIST_LIM(x) \
> + (0xa0000138 + (x) * 0x1000)
> +#define DLB_CHP_HIST_LIST_LIM_RST 0x0
> +union dlb_chp_hist_list_lim {
> + struct {
> + u32 limit : 13;
> + u32 rsvd0 : 19;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_LDB_PP2POOL(x) \
> + (0xa0000134 + (x) * 0x1000)
> +#define DLB_CHP_LDB_LDB_PP2POOL_RST 0x0
> +union dlb_chp_ldb_ldb_pp2pool {
> + struct {
> + u32 pool : 6;
> + u32 rsvd0 : 26;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_DIR_PP2POOL(x) \
> + (0xa0000130 + (x) * 0x1000)
> +#define DLB_CHP_LDB_DIR_PP2POOL_RST 0x0
> +union dlb_chp_ldb_dir_pp2pool {
> + struct {
> + u32 pool : 6;
> + u32 rsvd0 : 26;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_LDB_CRD_CNT(x) \
> + (0xa000012c + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_LDB_CRD_CNT_RST 0x0
> +union dlb_chp_ldb_pp_ldb_crd_cnt {
> + struct {
> + u32 count : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_DIR_CRD_CNT(x) \
> + (0xa0000128 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_DIR_CRD_CNT_RST 0x0
> +union dlb_chp_ldb_pp_dir_crd_cnt {
> + struct {
> + u32 count : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_TMR_THRESHOLD(x) \
> + (0xa0000124 + (x) * 0x1000)
> +#define DLB_CHP_LDB_CQ_TMR_THRESHOLD_RST 0x0
> +union dlb_chp_ldb_cq_tmr_threshold {
> + struct {
> + u32 thrsh : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_INT_ENB(x) \
> + (0xa0000120 + (x) * 0x1000)
> +#define DLB_CHP_LDB_CQ_INT_ENB_RST 0x0
> +union dlb_chp_ldb_cq_int_enb {
> + struct {
> + u32 en_tim : 1;
> + u32 en_depth : 1;
> + u32 rsvd0 : 30;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_INT_DEPTH_THRSH(x) \
> + (0xa000011c + (x) * 0x1000)
> +#define DLB_CHP_LDB_CQ_INT_DEPTH_THRSH_RST 0x0
> +union dlb_chp_ldb_cq_int_depth_thrsh {
> + struct {
> + u32 depth_threshold : 12;
> + u32 rsvd0 : 20;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_TKN_DEPTH_SEL(x) \
> + (0xa0000118 + (x) * 0x1000)
> +#define DLB_CHP_LDB_CQ_TKN_DEPTH_SEL_RST 0x0
> +union dlb_chp_ldb_cq_tkn_depth_sel {
> + struct {
> + u32 token_depth_select : 4;
> + u32 rsvd0 : 28;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT(x) \
> + (0xa0000114 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_LDB_MIN_CRD_QNT_RST 0x1
> +union dlb_chp_ldb_pp_ldb_min_crd_qnt {
> + struct {
> + u32 quanta : 10;
> + u32 rsvd0 : 22;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT(x) \
> + (0xa0000110 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_DIR_MIN_CRD_QNT_RST 0x1
> +union dlb_chp_ldb_pp_dir_min_crd_qnt {
> + struct {
> + u32 quanta : 10;
> + u32 rsvd0 : 22;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_LDB_CRD_LWM(x) \
> + (0xa000010c + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_LDB_CRD_LWM_RST 0x0
> +union dlb_chp_ldb_pp_ldb_crd_lwm {
> + struct {
> + u32 lwm : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_LDB_CRD_HWM(x) \
> + (0xa0000108 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_LDB_CRD_HWM_RST 0x0
> +union dlb_chp_ldb_pp_ldb_crd_hwm {
> + struct {
> + u32 hwm : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_DIR_CRD_LWM(x) \
> + (0xa0000104 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_DIR_CRD_LWM_RST 0x0
> +union dlb_chp_ldb_pp_dir_crd_lwm {
> + struct {
> + u32 lwm : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_DIR_CRD_HWM(x) \
> + (0xa0000100 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_DIR_CRD_HWM_RST 0x0
> +union dlb_chp_ldb_pp_dir_crd_hwm {
> + struct {
> + u32 hwm : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_DEPTH(x) \
> + (0xa0000218 + (x) * 0x1000)
> +#define DLB_CHP_DIR_CQ_DEPTH_RST 0x0
> +union dlb_chp_dir_cq_depth {
> + struct {
> + u32 cq_depth : 11;
> + u32 rsvd0 : 21;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_WPTR(x) \
> + (0xa0000214 + (x) * 0x1000)
> +#define DLB_CHP_DIR_CQ_WPTR_RST 0x0
> +union dlb_chp_dir_cq_wptr {
> + struct {
> + u32 write_pointer : 10;
> + u32 rsvd0 : 22;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_LDB_PUSH_PTR(x) \
> + (0xa0000210 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_LDB_PUSH_PTR_RST 0x0
> +union dlb_chp_dir_pp_ldb_push_ptr {
> + struct {
> + u32 push_pointer : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_DIR_PUSH_PTR(x) \
> + (0xa000020c + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_DIR_PUSH_PTR_RST 0x0
> +union dlb_chp_dir_pp_dir_push_ptr {
> + struct {
> + u32 push_pointer : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_STATE_RESET(x) \
> + (0xa0000204 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_STATE_RESET_RST 0x0
> +union dlb_chp_dir_pp_state_reset {
> + struct {
> + u32 rsvd1 : 7;
> + u32 dir_type : 1;
> + u32 rsvd0 : 23;
> + u32 reset_pp_state : 1;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_PP_CRD_REQ_STATE(x) \
> + (0xa0000200 + (x) * 0x1000)
> +#define DLB_CHP_DIR_PP_CRD_REQ_STATE_RST 0x0
> +union dlb_chp_dir_pp_crd_req_state {
> + struct {
> + u32 dir_crd_req_active_valid : 1;
> + u32 dir_crd_req_active_check : 1;
> + u32 dir_crd_req_active_busy : 1;
> + u32 rsvd1 : 1;
> + u32 ldb_crd_req_active_valid : 1;
> + u32 ldb_crd_req_active_check : 1;
> + u32 ldb_crd_req_active_busy : 1;
> + u32 rsvd0 : 1;
> + u32 no_pp_credit_update : 1;
> + u32 crd_req_state : 23;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_DEPTH(x) \
> + (0xa0000320 + (x) * 0x1000)
> +#define DLB_CHP_LDB_CQ_DEPTH_RST 0x0
> +union dlb_chp_ldb_cq_depth {
> + struct {
> + u32 depth : 11;
> + u32 reserved : 2;
> + u32 rsvd0 : 19;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_WPTR(x) \
> + (0xa000031c + (x) * 0x1000)
> +#define DLB_CHP_LDB_CQ_WPTR_RST 0x0
> +union dlb_chp_ldb_cq_wptr {
> + struct {
> + u32 write_pointer : 10;
> + u32 rsvd0 : 22;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_LDB_PUSH_PTR(x) \
> + (0xa0000318 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_LDB_PUSH_PTR_RST 0x0
> +union dlb_chp_ldb_pp_ldb_push_ptr {
> + struct {
> + u32 push_pointer : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_DIR_PUSH_PTR(x) \
> + (0xa0000314 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_DIR_PUSH_PTR_RST 0x0
> +union dlb_chp_ldb_pp_dir_push_ptr {
> + struct {
> + u32 push_pointer : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_HIST_LIST_POP_PTR(x) \
> + (0xa000030c + (x) * 0x1000)
> +#define DLB_CHP_HIST_LIST_POP_PTR_RST 0x0
> +union dlb_chp_hist_list_pop_ptr {
> + struct {
> + u32 pop_ptr : 13;
> + u32 generation : 1;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_HIST_LIST_PUSH_PTR(x) \
> + (0xa0000308 + (x) * 0x1000)
> +#define DLB_CHP_HIST_LIST_PUSH_PTR_RST 0x0
> +union dlb_chp_hist_list_push_ptr {
> + struct {
> + u32 push_ptr : 13;
> + u32 generation : 1;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_STATE_RESET(x) \
> + (0xa0000304 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_STATE_RESET_RST 0x0
> +union dlb_chp_ldb_pp_state_reset {
> + struct {
> + u32 rsvd1 : 7;
> + u32 dir_type : 1;
> + u32 rsvd0 : 23;
> + u32 reset_pp_state : 1;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_PP_CRD_REQ_STATE(x) \
> + (0xa0000300 + (x) * 0x1000)
> +#define DLB_CHP_LDB_PP_CRD_REQ_STATE_RST 0x0
> +union dlb_chp_ldb_pp_crd_req_state {
> + struct {
> + u32 dir_crd_req_active_valid : 1;
> + u32 dir_crd_req_active_check : 1;
> + u32 dir_crd_req_active_busy : 1;
> + u32 rsvd1 : 1;
> + u32 ldb_crd_req_active_valid : 1;
> + u32 ldb_crd_req_active_check : 1;
> + u32 ldb_crd_req_active_busy : 1;
> + u32 rsvd0 : 1;
> + u32 no_pp_credit_update : 1;
> + u32 crd_req_state : 23;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_ORD_QID_SN(x) \
> + (0xa0000408 + (x) * 0x1000)
> +#define DLB_CHP_ORD_QID_SN_RST 0x0
> +union dlb_chp_ord_qid_sn {
> + struct {
> + u32 sn : 12;
> + u32 rsvd0 : 20;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_ORD_QID_SN_MAP(x) \
> + (0xa0000404 + (x) * 0x1000)
> +#define DLB_CHP_ORD_QID_SN_MAP_RST 0x0
> +union dlb_chp_ord_qid_sn_map {
> + struct {
> + u32 mode : 3;
> + u32 slot : 5;
> + u32 grp : 2;
> + u32 rsvd0 : 22;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_POOL_CRD_CNT(x) \
> + (0xa000050c + (x) * 0x1000)
> +#define DLB_CHP_LDB_POOL_CRD_CNT_RST 0x0
> +union dlb_chp_ldb_pool_crd_cnt {
> + struct {
> + u32 count : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_QED_FL_BASE(x) \
> + (0xa0000508 + (x) * 0x1000)
> +#define DLB_CHP_QED_FL_BASE_RST 0x0
> +union dlb_chp_qed_fl_base {
> + struct {
> + u32 base : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_QED_FL_LIM(x) \
> + (0xa0000504 + (x) * 0x1000)
> +#define DLB_CHP_QED_FL_LIM_RST 0x8000
> +union dlb_chp_qed_fl_lim {
> + struct {
> + u32 limit : 14;
> + u32 rsvd1 : 1;
> + u32 freelist_disable : 1;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_POOL_CRD_LIM(x) \
> + (0xa0000500 + (x) * 0x1000)
> +#define DLB_CHP_LDB_POOL_CRD_LIM_RST 0x0
> +union dlb_chp_ldb_pool_crd_lim {
> + struct {
> + u32 limit : 16;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_QED_FL_POP_PTR(x) \
> + (0xa0000604 + (x) * 0x1000)
> +#define DLB_CHP_QED_FL_POP_PTR_RST 0x0
> +union dlb_chp_qed_fl_pop_ptr {
> + struct {
> + u32 pop_ptr : 14;
> + u32 reserved0 : 1;
> + u32 generation : 1;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_QED_FL_PUSH_PTR(x) \
> + (0xa0000600 + (x) * 0x1000)
> +#define DLB_CHP_QED_FL_PUSH_PTR_RST 0x0
> +union dlb_chp_qed_fl_push_ptr {
> + struct {
> + u32 push_ptr : 14;
> + u32 reserved0 : 1;
> + u32 generation : 1;
> + u32 rsvd0 : 16;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_POOL_CRD_CNT(x) \
> + (0xa000070c + (x) * 0x1000)
> +#define DLB_CHP_DIR_POOL_CRD_CNT_RST 0x0
> +union dlb_chp_dir_pool_crd_cnt {
> + struct {
> + u32 count : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DQED_FL_BASE(x) \
> + (0xa0000708 + (x) * 0x1000)
> +#define DLB_CHP_DQED_FL_BASE_RST 0x0
> +union dlb_chp_dqed_fl_base {
> + struct {
> + u32 base : 12;
> + u32 rsvd0 : 20;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DQED_FL_LIM(x) \
> + (0xa0000704 + (x) * 0x1000)
> +#define DLB_CHP_DQED_FL_LIM_RST 0x2000
> +union dlb_chp_dqed_fl_lim {
> + struct {
> + u32 limit : 12;
> + u32 rsvd1 : 1;
> + u32 freelist_disable : 1;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_POOL_CRD_LIM(x) \
> + (0xa0000700 + (x) * 0x1000)
> +#define DLB_CHP_DIR_POOL_CRD_LIM_RST 0x0
> +union dlb_chp_dir_pool_crd_lim {
> + struct {
> + u32 limit : 14;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DQED_FL_POP_PTR(x) \
> + (0xa0000804 + (x) * 0x1000)
> +#define DLB_CHP_DQED_FL_POP_PTR_RST 0x0
> +union dlb_chp_dqed_fl_pop_ptr {
> + struct {
> + u32 pop_ptr : 12;
> + u32 reserved0 : 1;
> + u32 generation : 1;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DQED_FL_PUSH_PTR(x) \
> + (0xa0000800 + (x) * 0x1000)
> +#define DLB_CHP_DQED_FL_PUSH_PTR_RST 0x0
> +union dlb_chp_dqed_fl_push_ptr {
> + struct {
> + u32 push_ptr : 12;
> + u32 reserved0 : 1;
> + u32 generation : 1;
> + u32 rsvd0 : 18;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_CTRL_DIAG_02 0xa8000154
> +#define DLB_CHP_CTRL_DIAG_02_RST 0x0
> +union dlb_chp_ctrl_diag_02 {
> + struct {
> + u32 control : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_CFG_CHP_CSR_CTRL 0xa8000130
> +#define DLB_CHP_CFG_CHP_CSR_CTRL_RST 0xc0003fff
> +#define DLB_CHP_CFG_EXCESS_TOKENS_SHIFT 12
> +union dlb_chp_cfg_chp_csr_ctrl {
> + struct {
> + u32 int_inf_alarm_enable_0 : 1;
> + u32 int_inf_alarm_enable_1 : 1;
> + u32 int_inf_alarm_enable_2 : 1;
> + u32 int_inf_alarm_enable_3 : 1;
> + u32 int_inf_alarm_enable_4 : 1;
> + u32 int_inf_alarm_enable_5 : 1;
> + u32 int_inf_alarm_enable_6 : 1;
> + u32 int_inf_alarm_enable_7 : 1;
> + u32 int_inf_alarm_enable_8 : 1;
> + u32 int_inf_alarm_enable_9 : 1;
> + u32 int_inf_alarm_enable_10 : 1;
> + u32 int_inf_alarm_enable_11 : 1;
> + u32 int_inf_alarm_enable_12 : 1;
> + u32 int_cor_alarm_enable : 1;
> + u32 csr_control_spare : 14;
> + u32 cfg_vasr_dis : 1;
> + u32 counter_clear : 1;
> + u32 blk_cor_report : 1;
> + u32 blk_cor_synd : 1;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_INTR_ARMED1 0xa8000068
> +#define DLB_CHP_LDB_CQ_INTR_ARMED1_RST 0x0
> +union dlb_chp_ldb_cq_intr_armed1 {
> + struct {
> + u32 armed : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_LDB_CQ_INTR_ARMED0 0xa8000064
> +#define DLB_CHP_LDB_CQ_INTR_ARMED0_RST 0x0
> +union dlb_chp_ldb_cq_intr_armed0 {
> + struct {
> + u32 armed : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_INTR_ARMED3 0xa8000024
> +#define DLB_CHP_DIR_CQ_INTR_ARMED3_RST 0x0
> +union dlb_chp_dir_cq_intr_armed3 {
> + struct {
> + u32 armed : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_INTR_ARMED2 0xa8000020
> +#define DLB_CHP_DIR_CQ_INTR_ARMED2_RST 0x0
> +union dlb_chp_dir_cq_intr_armed2 {
> + struct {
> + u32 armed : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_INTR_ARMED1 0xa800001c
> +#define DLB_CHP_DIR_CQ_INTR_ARMED1_RST 0x0
> +union dlb_chp_dir_cq_intr_armed1 {
> + struct {
> + u32 armed : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CHP_DIR_CQ_INTR_ARMED0 0xa8000018
> +#define DLB_CHP_DIR_CQ_INTR_ARMED0_RST 0x0
> +union dlb_chp_dir_cq_intr_armed0 {
> + struct {
> + u32 armed : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CFG_MSTR_DIAG_RESET_STS 0xb8000004
> +#define DLB_CFG_MSTR_DIAG_RESET_STS_RST 0x1ff
> +union dlb_cfg_mstr_diag_reset_sts {
> + struct {
> + u32 chp_pf_reset_done : 1;
> + u32 rop_pf_reset_done : 1;
> + u32 lsp_pf_reset_done : 1;
> + u32 nalb_pf_reset_done : 1;
> + u32 ap_pf_reset_done : 1;
> + u32 dp_pf_reset_done : 1;
> + u32 qed_pf_reset_done : 1;
> + u32 dqed_pf_reset_done : 1;
> + u32 aqed_pf_reset_done : 1;
> + u32 rsvd1 : 6;
> + u32 pf_reset_active : 1;
> + u32 chp_vf_reset_done : 1;
> + u32 rop_vf_reset_done : 1;
> + u32 lsp_vf_reset_done : 1;
> + u32 nalb_vf_reset_done : 1;
> + u32 ap_vf_reset_done : 1;
> + u32 dp_vf_reset_done : 1;
> + u32 qed_vf_reset_done : 1;
> + u32 dqed_vf_reset_done : 1;
> + u32 aqed_vf_reset_done : 1;
> + u32 rsvd0 : 6;
> + u32 vf_reset_active : 1;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_CFG_MSTR_BCAST_RESET_VF_START 0xc8100000
> +#define DLB_CFG_MSTR_BCAST_RESET_VF_START_RST 0x0
> +/* HW Reset Types */
> +#define VF_RST_TYPE_CQ_LDB 0
> +#define VF_RST_TYPE_QID_LDB 1
> +#define VF_RST_TYPE_POOL_LDB 2
> +#define VF_RST_TYPE_CQ_DIR 8
> +#define VF_RST_TYPE_QID_DIR 9
> +#define VF_RST_TYPE_POOL_DIR 10
> +union dlb_cfg_mstr_bcast_reset_vf_start {
> + struct {
> + u32 vf_reset_start : 1;
> + u32 reserved : 3;
> + u32 vf_reset_type : 4;
> + u32 vf_reset_id : 24;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_FUNC_VF_VF2PF_MAILBOX_BYTES 256
> +#define DLB_FUNC_VF_VF2PF_MAILBOX(x) \
> + (0x1000 + (x) * 0x4)
> +#define DLB_FUNC_VF_VF2PF_MAILBOX_RST 0x0
> +union dlb_func_vf_vf2pf_mailbox {
> + struct {
> + u32 msg : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_FUNC_VF_VF2PF_MAILBOX_ISR 0x1f00
> +#define DLB_FUNC_VF_VF2PF_MAILBOX_ISR_RST 0x0
> +union dlb_func_vf_vf2pf_mailbox_isr {
> + struct {
> + u32 isr : 1;
> + u32 rsvd0 : 31;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_FUNC_VF_PF2VF_MAILBOX_BYTES 64
> +#define DLB_FUNC_VF_PF2VF_MAILBOX(x) \
> + (0x2000 + (x) * 0x4)
> +#define DLB_FUNC_VF_PF2VF_MAILBOX_RST 0x0
> +union dlb_func_vf_pf2vf_mailbox {
> + struct {
> + u32 msg : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_FUNC_VF_PF2VF_MAILBOX_ISR 0x2f00
> +#define DLB_FUNC_VF_PF2VF_MAILBOX_ISR_RST 0x0
> +union dlb_func_vf_pf2vf_mailbox_isr {
> + struct {
> + u32 pf_isr : 1;
> + u32 rsvd0 : 31;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_FUNC_VF_VF_MSI_ISR_PEND 0x2f10
> +#define DLB_FUNC_VF_VF_MSI_ISR_PEND_RST 0x0
> +union dlb_func_vf_vf_msi_isr_pend {
> + struct {
> + u32 isr_pend : 32;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_FUNC_VF_VF_RESET_IN_PROGRESS 0x3000
> +#define DLB_FUNC_VF_VF_RESET_IN_PROGRESS_RST 0x1
> +union dlb_func_vf_vf_reset_in_progress {
> + struct {
> + u32 reset_in_progress : 1;
> + u32 rsvd0 : 31;
> + } field;
> + u32 val;
> +};
> +
> +#define DLB_FUNC_VF_VF_MSI_ISR 0x4000
> +#define DLB_FUNC_VF_VF_MSI_ISR_RST 0x0
> +union dlb_func_vf_vf_msi_isr {
> + struct {
> + u32 vf_msi_isr : 32;
> + } field;
> + u32 val;
> +};
> +
> +#endif /* __DLB_REGS_H */
> diff --git a/drivers/event/dlb/pf/base/dlb_resource.c b/drivers/event/dlb/pf/base/dlb_resource.c
> new file mode 100644
> index 0000000..51265b9
> --- /dev/null
> +++ b/drivers/event/dlb/pf/base/dlb_resource.c
> @@ -0,0 +1,9722 @@
> +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
> + * Copyright(c) 2016-2020 Intel Corporation
> + */
> +
> +#include "dlb_hw_types.h"
> +#include "dlb_user.h"
> +#include "dlb_resource.h"
> +#include "dlb_osdep.h"
> +#include "dlb_osdep_bitmap.h"
> +#include "dlb_osdep_types.h"
> +#include "dlb_regs.h"
> +#include "dlb_mbox.h"
> +
> +#define DLB_DOM_LIST_HEAD(head, type) \
> + DLB_LIST_HEAD((head), type, domain_list)
> +
> +#define DLB_FUNC_LIST_HEAD(head, type) \
> + DLB_LIST_HEAD((head), type, func_list)
> +
> +#define DLB_DOM_LIST_FOR(head, ptr, iter) \
> + DLB_LIST_FOR_EACH(head, ptr, domain_list, iter)
> +
> +#define DLB_FUNC_LIST_FOR(head, ptr, iter) \
> + DLB_LIST_FOR_EACH(head, ptr, func_list, iter)
> +
> +#define DLB_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
> + DLB_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
> +
> +#define DLB_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
> + DLB_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
> +
> +/* The PF driver cannot assume that a register write will affect subsequent HCW
> + * writes. To ensure a write completes, the driver must read back a CSR. This
> + * function only need be called for configuration that can occur after the
> + * domain has started; prior to starting, applications can't send HCWs.
> + */
> +static inline void dlb_flush_csr(struct dlb_hw *hw)
> +{
> + DLB_CSR_RD(hw, DLB_SYS_TOTAL_VAS);
> +}
> +
> +static void dlb_init_fn_rsrc_lists(struct dlb_function_resources *rsrc)
> +{
> + dlb_list_init_head(&rsrc->avail_domains);
> + dlb_list_init_head(&rsrc->used_domains);
> + dlb_list_init_head(&rsrc->avail_ldb_queues);
> + dlb_list_init_head(&rsrc->avail_ldb_ports);
> + dlb_list_init_head(&rsrc->avail_dir_pq_pairs);
> + dlb_list_init_head(&rsrc->avail_ldb_credit_pools);
> + dlb_list_init_head(&rsrc->avail_dir_credit_pools);
> +}
> +
> +static void dlb_init_domain_rsrc_lists(struct dlb_domain *domain)
> +{
> + dlb_list_init_head(&domain->used_ldb_queues);
> + dlb_list_init_head(&domain->used_ldb_ports);
> + dlb_list_init_head(&domain->used_dir_pq_pairs);
> + dlb_list_init_head(&domain->used_ldb_credit_pools);
> + dlb_list_init_head(&domain->used_dir_credit_pools);
> + dlb_list_init_head(&domain->avail_ldb_queues);
> + dlb_list_init_head(&domain->avail_ldb_ports);
> + dlb_list_init_head(&domain->avail_dir_pq_pairs);
> + dlb_list_init_head(&domain->avail_ldb_credit_pools);
> + dlb_list_init_head(&domain->avail_dir_credit_pools);
> +}
> +
> +int dlb_resource_init(struct dlb_hw *hw)
> +{
> + struct dlb_list_entry *list;
> + unsigned int i;
> +
> + /* For optimal load-balancing, ports that map to one or more QIDs in
> + * common should not be in numerical sequence. This is application
> + * dependent, but the driver interleaves port IDs as much as possible
> + * to reduce the likelihood of this. This initial allocation maximizes
> + * the average distance between an ID and its immediate neighbors (i.e.
> + * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
> + * 3, etc.).
> + */
> + u32 init_ldb_port_allocation[DLB_MAX_NUM_LDB_PORTS] = {
> + 0, 31, 62, 29, 60, 27, 58, 25, 56, 23, 54, 21, 52, 19, 50, 17,
> + 48, 15, 46, 13, 44, 11, 42, 9, 40, 7, 38, 5, 36, 3, 34, 1,
> + 32, 63, 30, 61, 28, 59, 26, 57, 24, 55, 22, 53, 20, 51, 18, 49,
> + 16, 47, 14, 45, 12, 43, 10, 41, 8, 39, 6, 37, 4, 35, 2, 33
> + };
> +
> + /* Zero-out resource tracking data structures */
> + memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
> + memset(&hw->pf, 0, sizeof(hw->pf));
> +
> + dlb_init_fn_rsrc_lists(&hw->pf);
> +
> + for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
> + memset(&hw->vf[i], 0, sizeof(hw->vf[i]));
> + dlb_init_fn_rsrc_lists(&hw->vf[i]);
> + }
> +
> + for (i = 0; i < DLB_MAX_NUM_DOMAINS; i++) {
> + memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
> + dlb_init_domain_rsrc_lists(&hw->domains[i]);
> + hw->domains[i].parent_func = &hw->pf;
> + }
> +
> + /* Give all resources to the PF driver */
> + hw->pf.num_avail_domains = DLB_MAX_NUM_DOMAINS;
> + for (i = 0; i < hw->pf.num_avail_domains; i++) {
> + list = &hw->domains[i].func_list;
> +
> + dlb_list_add(&hw->pf.avail_domains, list);
> + }
> +
> + hw->pf.num_avail_ldb_queues = DLB_MAX_NUM_LDB_QUEUES;
> + for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
> + list = &hw->rsrcs.ldb_queues[i].func_list;
> +
> + dlb_list_add(&hw->pf.avail_ldb_queues, list);
> + }
> +
> + hw->pf.num_avail_ldb_ports = DLB_MAX_NUM_LDB_PORTS;
> + for (i = 0; i < hw->pf.num_avail_ldb_ports; i++) {
> + struct dlb_ldb_port *port;
> +
> + port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
> +
> + dlb_list_add(&hw->pf.avail_ldb_ports, &port->func_list);
> + }
> +
> + hw->pf.num_avail_dir_pq_pairs = DLB_MAX_NUM_DIR_PORTS;
> + for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
> + list = &hw->rsrcs.dir_pq_pairs[i].func_list;
> +
> + dlb_list_add(&hw->pf.avail_dir_pq_pairs, list);
> + }
> +
> + hw->pf.num_avail_ldb_credit_pools = DLB_MAX_NUM_LDB_CREDIT_POOLS;
> + for (i = 0; i < hw->pf.num_avail_ldb_credit_pools; i++) {
> + list = &hw->rsrcs.ldb_credit_pools[i].func_list;
> +
> + dlb_list_add(&hw->pf.avail_ldb_credit_pools, list);
> + }
> +
> + hw->pf.num_avail_dir_credit_pools = DLB_MAX_NUM_DIR_CREDIT_POOLS;
> + for (i = 0; i < hw->pf.num_avail_dir_credit_pools; i++) {
> + list = &hw->rsrcs.dir_credit_pools[i].func_list;
> +
> + dlb_list_add(&hw->pf.avail_dir_credit_pools, list);
> + }
> +
> + /* There are 5120 history list entries, which allows us to overprovision
> + * the inflight limit (4096) by 1k.
> + */
> + if (dlb_bitmap_alloc(hw,
> + &hw->pf.avail_hist_list_entries,
> + DLB_MAX_NUM_HIST_LIST_ENTRIES))
> + return -1;
> +
> + if (dlb_bitmap_fill(hw->pf.avail_hist_list_entries))
> + return -1;
> +
> + if (dlb_bitmap_alloc(hw,
> + &hw->pf.avail_qed_freelist_entries,
> + DLB_MAX_NUM_LDB_CREDITS))
> + return -1;
> +
> + if (dlb_bitmap_fill(hw->pf.avail_qed_freelist_entries))
> + return -1;
> +
> + if (dlb_bitmap_alloc(hw,
> + &hw->pf.avail_dqed_freelist_entries,
> + DLB_MAX_NUM_DIR_CREDITS))
> + return -1;
> +
> + if (dlb_bitmap_fill(hw->pf.avail_dqed_freelist_entries))
> + return -1;
> +
> + if (dlb_bitmap_alloc(hw,
> + &hw->pf.avail_aqed_freelist_entries,
> + DLB_MAX_NUM_AQOS_ENTRIES))
> + return -1;
> +
> + if (dlb_bitmap_fill(hw->pf.avail_aqed_freelist_entries))
> + return -1;
> +
> + for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
> + if (dlb_bitmap_alloc(hw,
> + &hw->vf[i].avail_hist_list_entries,
> + DLB_MAX_NUM_HIST_LIST_ENTRIES))
> + return -1;
> + if (dlb_bitmap_alloc(hw,
> + &hw->vf[i].avail_qed_freelist_entries,
> + DLB_MAX_NUM_LDB_CREDITS))
> + return -1;
> + if (dlb_bitmap_alloc(hw,
> + &hw->vf[i].avail_dqed_freelist_entries,
> + DLB_MAX_NUM_DIR_CREDITS))
> + return -1;
> + if (dlb_bitmap_alloc(hw,
> + &hw->vf[i].avail_aqed_freelist_entries,
> + DLB_MAX_NUM_AQOS_ENTRIES))
> + return -1;
> +
> + if (dlb_bitmap_zero(hw->vf[i].avail_hist_list_entries))
> + return -1;
> +
> + if (dlb_bitmap_zero(hw->vf[i].avail_qed_freelist_entries))
> + return -1;
> +
> + if (dlb_bitmap_zero(hw->vf[i].avail_dqed_freelist_entries))
> + return -1;
> +
> + if (dlb_bitmap_zero(hw->vf[i].avail_aqed_freelist_entries))
> + return -1;
> + }
> +
> + /* Initialize the hardware resource IDs */
> + for (i = 0; i < DLB_MAX_NUM_DOMAINS; i++) {
> + hw->domains[i].id.phys_id = i;
> + hw->domains[i].id.vf_owned = false;
> + }
> +
> + for (i = 0; i < DLB_MAX_NUM_LDB_QUEUES; i++) {
> + hw->rsrcs.ldb_queues[i].id.phys_id = i;
> + hw->rsrcs.ldb_queues[i].id.vf_owned = false;
> + }
> +
> + for (i = 0; i < DLB_MAX_NUM_LDB_PORTS; i++) {
> + hw->rsrcs.ldb_ports[i].id.phys_id = i;
> + hw->rsrcs.ldb_ports[i].id.vf_owned = false;
> + }
> +
> + for (i = 0; i < DLB_MAX_NUM_DIR_PORTS; i++) {
> + hw->rsrcs.dir_pq_pairs[i].id.phys_id = i;
> + hw->rsrcs.dir_pq_pairs[i].id.vf_owned = false;
> + }
> +
> + for (i = 0; i < DLB_MAX_NUM_LDB_CREDIT_POOLS; i++) {
> + hw->rsrcs.ldb_credit_pools[i].id.phys_id = i;
> + hw->rsrcs.ldb_credit_pools[i].id.vf_owned = false;
> + }
> +
> + for (i = 0; i < DLB_MAX_NUM_DIR_CREDIT_POOLS; i++) {
> + hw->rsrcs.dir_credit_pools[i].id.phys_id = i;
> + hw->rsrcs.dir_credit_pools[i].id.vf_owned = false;
> + }
> +
> + for (i = 0; i < DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
> + hw->rsrcs.sn_groups[i].id = i;
> + /* Default mode (0) is 32 sequence numbers per queue */
> + hw->rsrcs.sn_groups[i].mode = 0;
> + hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 32;
> + hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
> + }
> +
> + return 0;
> +}
> +
> +void dlb_resource_free(struct dlb_hw *hw)
> +{
> + int i;
> +
> + dlb_bitmap_free(hw->pf.avail_hist_list_entries);
> +
> + dlb_bitmap_free(hw->pf.avail_qed_freelist_entries);
> +
> + dlb_bitmap_free(hw->pf.avail_dqed_freelist_entries);
> +
> + dlb_bitmap_free(hw->pf.avail_aqed_freelist_entries);
> +
> + for (i = 0; i < DLB_MAX_NUM_VFS; i++) {
> + dlb_bitmap_free(hw->vf[i].avail_hist_list_entries);
> + dlb_bitmap_free(hw->vf[i].avail_qed_freelist_entries);
> + dlb_bitmap_free(hw->vf[i].avail_dqed_freelist_entries);
> + dlb_bitmap_free(hw->vf[i].avail_aqed_freelist_entries);
> + }
> +}
> +
> +static struct dlb_domain *dlb_get_domain_from_id(struct dlb_hw *hw,
> + u32 id,
> + bool vf_request,
> + unsigned int vf_id)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_function_resources *rsrcs;
> + struct dlb_domain *domain;
> +
> + if (id >= DLB_MAX_NUM_DOMAINS)
> + return NULL;
> +
> + if (!vf_request)
> + return &hw->domains[id];
> +
> + rsrcs = &hw->vf[vf_id];
> +
> + DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter)
> + if (domain->id.virt_id == id)
> + return domain;
> +
> + return NULL;
> +}
> +
> +static struct dlb_credit_pool *
> +dlb_get_domain_ldb_pool(u32 id,
> + bool vf_request,
> + struct dlb_domain *domain)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_credit_pool *pool;
> +
> + if (id >= DLB_MAX_NUM_LDB_CREDIT_POOLS)
> + return NULL;
> +
> + DLB_DOM_LIST_FOR(domain->used_ldb_credit_pools, pool, iter)
> + if ((!vf_request && pool->id.phys_id == id) ||
> + (vf_request && pool->id.virt_id == id))
> + return pool;
> +
> + return NULL;
> +}
> +
> +static struct dlb_credit_pool *
> +dlb_get_domain_dir_pool(u32 id,
> + bool vf_request,
> + struct dlb_domain *domain)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_credit_pool *pool;
> +
> + if (id >= DLB_MAX_NUM_DIR_CREDIT_POOLS)
> + return NULL;
> +
> + DLB_DOM_LIST_FOR(domain->used_dir_credit_pools, pool, iter)
> + if ((!vf_request && pool->id.phys_id == id) ||
> + (vf_request && pool->id.virt_id == id))
> + return pool;
> +
> + return NULL;
> +}
> +
> +static struct dlb_ldb_port *dlb_get_ldb_port_from_id(struct dlb_hw *hw,
> + u32 id,
> + bool vf_request,
> + unsigned int vf_id)
> +{
> + struct dlb_list_entry *iter1 __attribute__((unused));
> + struct dlb_list_entry *iter2 __attribute__((unused));
> + struct dlb_function_resources *rsrcs;
> + struct dlb_ldb_port *port;
> + struct dlb_domain *domain;
> +
> + if (id >= DLB_MAX_NUM_LDB_PORTS)
> + return NULL;
> +
> + rsrcs = (vf_request) ? &hw->vf[vf_id] : &hw->pf;
> +
> + if (!vf_request)
> + return &hw->rsrcs.ldb_ports[id];
> +
> + DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
> + DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter2)
> + if (port->id.virt_id == id)
> + return port;
> + }
> +
> + DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter1)
> + if (port->id.virt_id == id)
> + return port;
> +
> + return NULL;
> +}
> +
> +static struct dlb_ldb_port *
> +dlb_get_domain_used_ldb_port(u32 id,
> + bool vf_request,
> + struct dlb_domain *domain)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_ldb_port *port;
> +
> + if (id >= DLB_MAX_NUM_LDB_PORTS)
> + return NULL;
> +
> + DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
> + if ((!vf_request && port->id.phys_id == id) ||
> + (vf_request && port->id.virt_id == id))
> + return port;
> +
> + DLB_DOM_LIST_FOR(domain->avail_ldb_ports, port, iter)
> + if ((!vf_request && port->id.phys_id == id) ||
> + (vf_request && port->id.virt_id == id))
> + return port;
> +
> + return NULL;
> +}
> +
> +static struct dlb_ldb_port *dlb_get_domain_ldb_port(u32 id,
> + bool vf_request,
> + struct dlb_domain *domain)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_ldb_port *port;
> +
> + if (id >= DLB_MAX_NUM_LDB_PORTS)
> + return NULL;
> +
> + DLB_DOM_LIST_FOR(domain->used_ldb_ports, port, iter)
> + if ((!vf_request && port->id.phys_id == id) ||
> + (vf_request && port->id.virt_id == id))
> + return port;
> +
> + DLB_DOM_LIST_FOR(domain->avail_ldb_ports, port, iter)
> + if ((!vf_request && port->id.phys_id == id) ||
> + (vf_request && port->id.virt_id == id))
> + return port;
> +
> + return NULL;
> +}
> +
> +static struct dlb_dir_pq_pair *dlb_get_dir_pq_from_id(struct dlb_hw *hw,
> + u32 id,
> + bool vf_request,
> + unsigned int vf_id)
> +{
> + struct dlb_list_entry *iter1 __attribute__((unused));
> + struct dlb_list_entry *iter2 __attribute__((unused));
> + struct dlb_function_resources *rsrcs;
> + struct dlb_dir_pq_pair *port;
> + struct dlb_domain *domain;
> +
> + if (id >= DLB_MAX_NUM_DIR_PORTS)
> + return NULL;
> +
> + rsrcs = (vf_request) ? &hw->vf[vf_id] : &hw->pf;
> +
> + if (!vf_request)
> + return &hw->rsrcs.dir_pq_pairs[id];
> +
> + DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
> + DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter2)
> + if (port->id.virt_id == id)
> + return port;
> + }
> +
> + DLB_FUNC_LIST_FOR(rsrcs->avail_dir_pq_pairs, port, iter1)
> + if (port->id.virt_id == id)
> + return port;
> +
> + return NULL;
> +}
> +
> +static struct dlb_dir_pq_pair *
> +dlb_get_domain_used_dir_pq(u32 id,
> + bool vf_request,
> + struct dlb_domain *domain)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_dir_pq_pair *port;
> +
> + if (id >= DLB_MAX_NUM_DIR_PORTS)
> + return NULL;
> +
> + DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
> + if ((!vf_request && port->id.phys_id == id) ||
> + (vf_request && port->id.virt_id == id))
> + return port;
> +
> + return NULL;
> +}
> +
> +static struct dlb_dir_pq_pair *dlb_get_domain_dir_pq(u32 id,
> + bool vf_request,
> + struct dlb_domain *domain)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_dir_pq_pair *port;
> +
> + if (id >= DLB_MAX_NUM_DIR_PORTS)
> + return NULL;
> +
> + DLB_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
> + if ((!vf_request && port->id.phys_id == id) ||
> + (vf_request && port->id.virt_id == id))
> + return port;
> +
> + DLB_DOM_LIST_FOR(domain->avail_dir_pq_pairs, port, iter)
> + if ((!vf_request && port->id.phys_id == id) ||
> + (vf_request && port->id.virt_id == id))
> + return port;
> +
> + return NULL;
> +}
> +
> +static struct dlb_ldb_queue *dlb_get_ldb_queue_from_id(struct dlb_hw *hw,
> + u32 id,
> + bool vf_request,
> + unsigned int vf_id)
> +{
> + struct dlb_list_entry *iter1 __attribute__((unused));
> + struct dlb_list_entry *iter2 __attribute__((unused));
> + struct dlb_function_resources *rsrcs;
> + struct dlb_ldb_queue *queue;
> + struct dlb_domain *domain;
> +
> + if (id >= DLB_MAX_NUM_LDB_QUEUES)
> + return NULL;
> +
> + rsrcs = (vf_request) ? &hw->vf[vf_id] : &hw->pf;
> +
> + if (!vf_request)
> + return &hw->rsrcs.ldb_queues[id];
> +
> + DLB_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {
> + DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2)
> + if (queue->id.virt_id == id)
> + return queue;
> + }
> +
> + DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1)
> + if (queue->id.virt_id == id)
> + return queue;
> +
> + return NULL;
> +}
> +
> +static struct dlb_ldb_queue *dlb_get_domain_ldb_queue(u32 id,
> + bool vf_request,
> + struct dlb_domain *domain)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_ldb_queue *queue;
> +
> + if (id >= DLB_MAX_NUM_LDB_QUEUES)
> + return NULL;
> +
> + DLB_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
> + if ((!vf_request && queue->id.phys_id == id) ||
> + (vf_request && queue->id.virt_id == id))
> + return queue;
> +
> + return NULL;
> +}
> +
> +#define DLB_XFER_LL_RSRC(dst, src, num, type_t, name) ({ \
> + struct dlb_list_entry *it1 __attribute__((unused)); \
> + struct dlb_list_entry *it2 __attribute__((unused)); \
> + struct dlb_function_resources *_src = src; \
> + struct dlb_function_resources *_dst = dst; \
> + type_t *ptr, *tmp __attribute__((unused)); \
> + unsigned int i = 0; \
> + \
> + DLB_FUNC_LIST_FOR_SAFE(_src->avail_##name##s, ptr, tmp, it1, it2) { \
> + if (i++ == (num)) \
> + break; \
> + \
> + dlb_list_del(&_src->avail_##name##s, &ptr->func_list); \
> + dlb_list_add(&_dst->avail_##name##s, &ptr->func_list); \
> + _src->num_avail_##name##s--; \
> + _dst->num_avail_##name##s++; \
> + } \
> +})
> +
> +#define DLB_VF_ID_CLEAR(head, type_t) ({ \
> + struct dlb_list_entry *iter __attribute__((unused)); \
> + type_t *var; \
> + \
> + DLB_FUNC_LIST_FOR(head, var, iter) \
> + var->id.vf_owned = false; \
> +})
> +
> +int dlb_update_vf_sched_domains(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_function_resources *src, *dst;
> + struct dlb_domain *domain;
> + unsigned int orig;
> + int ret;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + orig = dst->num_avail_domains;
> +
> + /* Detach the destination VF's current resources before checking if
> + * enough are available, and set their IDs accordingly.
> + */
> + DLB_VF_ID_CLEAR(dst->avail_domains, struct dlb_domain);
> +
> + DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_domain, domain);
> +
> + /* Are there enough available resources to satisfy the request? */
> + if (num > src->num_avail_domains) {
> + num = orig;
> + ret = -EINVAL;
> + } else {
> + ret = 0;
> + }
> +
> + DLB_XFER_LL_RSRC(dst, src, num, struct dlb_domain, domain);
> +
> + /* Set the domains' VF backpointer */
> + DLB_FUNC_LIST_FOR(dst->avail_domains, domain, iter)
> + domain->parent_func = dst;
> +
> + return ret;
> +}
> +
> +int dlb_update_vf_ldb_queues(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> + unsigned int orig;
> + int ret;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + orig = dst->num_avail_ldb_queues;
> +
> + /* Detach the destination VF's current resources before checking if
> + * enough are available, and set their IDs accordingly.
> + */
> + DLB_VF_ID_CLEAR(dst->avail_ldb_queues, struct dlb_ldb_queue);
> +
> + DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_ldb_queue, ldb_queue);
> +
> + /* Are there enough available resources to satisfy the request? */
> + if (num > src->num_avail_ldb_queues) {
> + num = orig;
> + ret = -EINVAL;
> + } else {
> + ret = 0;
> + }
> +
> + DLB_XFER_LL_RSRC(dst, src, num, struct dlb_ldb_queue, ldb_queue);
> +
> + return ret;
> +}
> +
> +int dlb_update_vf_ldb_ports(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> + unsigned int orig;
> + int ret;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + orig = dst->num_avail_ldb_ports;
> +
> + /* Detach the destination VF's current resources before checking if
> + * enough are available, and set their IDs accordingly.
> + */
> + DLB_VF_ID_CLEAR(dst->avail_ldb_ports, struct dlb_ldb_port);
> +
> + DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_ldb_port, ldb_port);
> +
> + /* Are there enough available resources to satisfy the request? */
> + if (num > src->num_avail_ldb_ports) {
> + num = orig;
> + ret = -EINVAL;
> + } else {
> + ret = 0;
> + }
> +
> + DLB_XFER_LL_RSRC(dst, src, num, struct dlb_ldb_port, ldb_port);
> +
> + return ret;
> +}
> +
> +int dlb_update_vf_dir_ports(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> + unsigned int orig;
> + int ret;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + orig = dst->num_avail_dir_pq_pairs;
> +
> + /* Detach the destination VF's current resources before checking if
> + * enough are available, and set their IDs accordingly.
> + */
> + DLB_VF_ID_CLEAR(dst->avail_dir_pq_pairs, struct dlb_dir_pq_pair);
> +
> + DLB_XFER_LL_RSRC(src, dst, orig, struct dlb_dir_pq_pair, dir_pq_pair);
> +
> + /* Are there enough available resources to satisfy the request? */
> + if (num > src->num_avail_dir_pq_pairs) {
> + num = orig;
> + ret = -EINVAL;
> + } else {
> + ret = 0;
> + }
> +
> + DLB_XFER_LL_RSRC(dst, src, num, struct dlb_dir_pq_pair, dir_pq_pair);
> +
> + return ret;
> +}
> +
> +int dlb_update_vf_ldb_credit_pools(struct dlb_hw *hw,
> + u32 vf_id,
> + u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> + unsigned int orig;
> + int ret;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + orig = dst->num_avail_ldb_credit_pools;
> +
> + /* Detach the destination VF's current resources before checking if
> + * enough are available, and set their IDs accordingly.
> + */
> + DLB_VF_ID_CLEAR(dst->avail_ldb_credit_pools, struct dlb_credit_pool);
> +
> + DLB_XFER_LL_RSRC(src,
> + dst,
> + orig,
> + struct dlb_credit_pool,
> + ldb_credit_pool);
> +
> + /* Are there enough available resources to satisfy the request? */
> + if (num > src->num_avail_ldb_credit_pools) {
> + num = orig;
> + ret = -EINVAL;
> + } else {
> + ret = 0;
> + }
> +
> + DLB_XFER_LL_RSRC(dst,
> + src,
> + num,
> + struct dlb_credit_pool,
> + ldb_credit_pool);
> +
> + return ret;
> +}
> +
> +int dlb_update_vf_dir_credit_pools(struct dlb_hw *hw,
> + u32 vf_id,
> + u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> + unsigned int orig;
> + int ret;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + orig = dst->num_avail_dir_credit_pools;
> +
> + /* Detach the VF's current resources before checking if enough are
> + * available, and set their IDs accordingly.
> + */
> + DLB_VF_ID_CLEAR(dst->avail_dir_credit_pools, struct dlb_credit_pool);
> +
> + DLB_XFER_LL_RSRC(src,
> + dst,
> + orig,
> + struct dlb_credit_pool,
> + dir_credit_pool);
> +
> + /* Are there enough available resources to satisfy the request? */
> + if (num > src->num_avail_dir_credit_pools) {
> + num = orig;
> + ret = -EINVAL;
> + } else {
> + ret = 0;
> + }
> +
> + DLB_XFER_LL_RSRC(dst,
> + src,
> + num,
> + struct dlb_credit_pool,
> + dir_credit_pool);
> +
> + return ret;
> +}
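The dlb_update_vf_*() functions above all follow the same reassignment
protocol: detach the VF's current resources back to the PF, test whether
the new request can be satisfied, and fall back to the original
assignment on failure. Reduced to bare counters it looks like the sketch
below (illustration only -- the real code moves linked-list nodes via
DLB_XFER_LL_RSRC and re-labels their IDs, which a counter can't show):

#include <errno.h>

static int update_vf_count(unsigned int *pf_avail, unsigned int *vf_avail,
			   unsigned int num)
{
	unsigned int orig = *vf_avail;
	int ret = 0;

	/* Detach: return the VF's current allocation to the PF */
	*pf_avail += orig;
	*vf_avail = 0;

	/* Not enough available? Fall back to the original size */
	if (num > *pf_avail) {
		num = orig;
		ret = -EINVAL;
	}

	/* Re-attach the (possibly reduced) amount to the VF */
	*pf_avail -= num;
	*vf_avail = num;

	return ret;
}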
> +
> +static int dlb_transfer_bitmap_resources(struct dlb_bitmap *src,
> + struct dlb_bitmap *dst,
> + u32 num)
> +{
> + int orig, ret, base;
> +
> + /* Validate bitmaps before use */
> + if (dlb_bitmap_count(dst) < 0 || dlb_bitmap_count(src) < 0)
> + return -EINVAL;
> +
> + /* Reassign the dest's bitmap entries to the source's before checking
> + * if a contiguous chunk of size 'num' is available. The reassignment
> + * may be necessary to create a sufficiently large contiguous chunk.
> + */
> + orig = dlb_bitmap_count(dst);
> +
> + dlb_bitmap_or(src, src, dst);
> +
> + dlb_bitmap_zero(dst);
> +
> + /* Are there enough available resources to satisfy the request? */
> + base = dlb_bitmap_find_set_bit_range(src, num);
> +
> + if (base == -ENOENT) {
> + num = orig;
> + base = dlb_bitmap_find_set_bit_range(src, num);
> + ret = -EINVAL;
> + } else {
> + ret = 0;
> + }
> +
> + dlb_bitmap_set_range(dst, base, num);
> +
> + dlb_bitmap_clear_range(src, base, num);
> +
> + return ret;
> +}
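The bitmap variant is worth a note: unlike the list-based transfers, the
entries must be contiguous, which is why the function ORs the VF's
entries back into the PF's bitmap and zeroes the VF's before searching
for a chunk. A toy single-word model of the same sequence (hypothetical
helper, not the dlb_bitmap API):

#include <stdint.h>
#include <stdio.h>

static int find_set_range(uint64_t map, unsigned int num)
{
	uint64_t mask;
	unsigned int base;

	if (num == 0)
		return 0;
	mask = (num >= 64) ? ~(uint64_t)0 : (((uint64_t)1 << num) - 1);
	for (base = 0; base + num <= 64; base++)
		if ((map & (mask << base)) == (mask << base))
			return (int)base;
	return -1; /* stands in for -ENOENT */
}

int main(void)
{
	uint64_t src = 0xFF00;	/* PF-owned entries: bits 8-15 */
	uint64_t dst = 0x00F0;	/* VF-owned entries: bits 4-7 */
	unsigned int num = 6;
	int base;

	src |= dst;	/* return the VF's entries to the PF... */
	dst = 0;	/* ...to maximize contiguous free space */

	base = find_set_range(src, num);
	if (base >= 0) {
		uint64_t mask = (((uint64_t)1 << num) - 1) << base;

		dst |= mask;	/* carve out a contiguous chunk for the VF */
		src &= ~mask;
	}
	printf("src=%#llx dst=%#llx base=%d\n",
	       (unsigned long long)src, (unsigned long long)dst, base);
	return 0;
}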
> +
> +int dlb_update_vf_ldb_credits(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + return dlb_transfer_bitmap_resources(src->avail_qed_freelist_entries,
> + dst->avail_qed_freelist_entries,
> + num);
> +}
> +
> +int dlb_update_vf_dir_credits(struct dlb_hw *hw, u32 vf_id, u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + return dlb_transfer_bitmap_resources(src->avail_dqed_freelist_entries,
> + dst->avail_dqed_freelist_entries,
> + num);
> +}
> +
> +int dlb_update_vf_hist_list_entries(struct dlb_hw *hw,
> + u32 vf_id,
> + u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + return dlb_transfer_bitmap_resources(src->avail_hist_list_entries,
> + dst->avail_hist_list_entries,
> + num);
> +}
> +
> +int dlb_update_vf_atomic_inflights(struct dlb_hw *hw,
> + u32 vf_id,
> + u32 num)
> +{
> + struct dlb_function_resources *src, *dst;
> +
> + if (vf_id >= DLB_MAX_NUM_VFS)
> + return -EINVAL;
> +
> + src = &hw->pf;
> + dst = &hw->vf[vf_id];
> +
> + /* If the VF is locked, its resource assignment can't be changed */
> + if (dlb_vf_is_locked(hw, vf_id))
> + return -EPERM;
> +
> + return dlb_transfer_bitmap_resources(src->avail_aqed_freelist_entries,
> + dst->avail_aqed_freelist_entries,
> + num);
> +}
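For reference, all four of these (ldb/dir credits, history list entries,
atomic inflights) funnel into dlb_transfer_bitmap_resources() with only
the bitmap pair differing. A hypothetical caller in the PF PMD's
VF-configuration path would chain them like so:

	/* e.g. grant VF 0 sixty-four LDB credits and 16 atomic inflights */
	ret = dlb_update_vf_ldb_credits(hw, 0, 64);
	if (!ret)
		ret = dlb_update_vf_atomic_inflights(hw, 0, 16);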
> +
> +static int dlb_attach_ldb_queues(struct dlb_hw *hw,
> + struct dlb_function_resources *rsrcs,
> + struct dlb_domain *domain,
> + u32 num_queues,
> + struct dlb_cmd_response *resp)
> +{
> + unsigned int i, j;
> +
> + if (rsrcs->num_avail_ldb_queues < num_queues) {
> + resp->status = DLB_ST_LDB_QUEUES_UNAVAILABLE;
> + return -1;
> + }
> +
> + for (i = 0; i < num_queues; i++) {
> + struct dlb_ldb_queue *queue;
> +
> + queue = DLB_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,
> + typeof(*queue));
> + if (!queue) {
> + DLB_HW_ERR(hw,
> + "[%s()] Internal error: domain validation failed\n",
> + __func__);
> + goto cleanup;
> + }
> +
> + dlb_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);
> +
> + queue->domain_id = domain->id;
> + queue->owned = true;
> +
> + dlb_list_add(&domain->avail_ldb_queues, &queue->domain_list);
> + }
> +
> + rsrcs->num_avail_ldb_queues -= num_queues;
> +
> + return 0;
> +
> +cleanup:
> +
> + /* Return the assigned queues */
> + for (j = 0; j < i; j++) {
> + struct dlb_ldb_queue *queue;
> +
> + queue = DLB_FUNC_LIST_HEAD(domain->avail_ldb_queues,
> + typeof(*queue));
> + /* Unrecoverable internal error */
> + if (!queue)
> + break;
> +
> + queue->owned = false;
> +
> + dlb_list_del(&domain->avail_ldb_queues, &queue->domain_list);
> +
> + dlb_list_add(&rsrcs->avail_ldb_queues, &queue->func_list);
> + }
> +
> + return -EFAULT;
> +}
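The cleanup: path above implements a rollback -- any queues attached
before the internal error are pushed back onto the function's avail
list. The same pattern repeats for the port attach functions below.
Stripped of the list plumbing, the shape is (toy array-backed sketch,
names hypothetical):

#include <stdbool.h>

#define NRES 8

static bool owned[NRES];

static int attach(unsigned int taken[], unsigned int num)
{
	unsigned int i, j, got = 0;

	for (i = 0; i < NRES && got < num; i++) {
		if (owned[i])
			continue;
		owned[i] = true;	/* move from free list to domain */
		taken[got++] = i;
	}

	if (got == num)
		return 0;

	/* Rollback, mirroring the cleanup: label above */
	for (j = 0; j < got; j++)
		owned[taken[j]] = false;

	return -1;
}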
> +
> +static struct dlb_ldb_port *
> +dlb_get_next_ldb_port(struct dlb_hw *hw,
> + struct dlb_function_resources *rsrcs,
> + u32 domain_id)
> +{
> + struct dlb_list_entry *iter __attribute__((unused));
> + struct dlb_ldb_port *port;
> +
> + /* To reduce the odds of consecutive load-balanced ports mapping to the
> + * same queue(s), the driver attempts to allocate ports whose neighbors
> + * are owned by a different domain.
> + */
> + DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
> + u32 next, prev;
> + u32 phys_id;
> +
> + phys_id = port->id.phys_id;
> + next = phys_id + 1;
> + prev = phys_id - 1;
> +
> + if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
> + next = 0;
> + if (phys_id == 0)
> + prev = DLB_MAX_NUM_LDB_PORTS - 1;
> +
> + if (!hw->rsrcs.ldb_ports[next].owned ||
> + hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)
> + continue;
> +
> + if (!hw->rsrcs.ldb_ports[prev].owned ||
> + hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)
> + continue;
> +
> + return port;
> + }
> +
> + /* Failing that, the driver looks for a port with one neighbor owned by
> + * a different domain and the other unallocated.
> + */
> + DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
> + u32 next, prev;
> + u32 phys_id;
> +
> + phys_id = port->id.phys_id;
> + next = phys_id + 1;
> + prev = phys_id - 1;
> +
> + if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
> + next = 0;
> + if (phys_id == 0)
> + prev = DLB_MAX_NUM_LDB_PORTS - 1;
> +
> + if (!hw->rsrcs.ldb_ports[prev].owned &&
> + hw->rsrcs.ldb_ports[next].owned &&
> + hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)
> + return port;
> +
> + if (!hw->rsrcs.ldb_ports[next].owned &&
> + hw->rsrcs.ldb_ports[prev].owned &&
> + hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)
> + return port;
> + }
> +
> + /* Failing that, the driver looks for a port with both neighbors
> + * unallocated.
> + */
> + DLB_FUNC_LIST_FOR(rsrcs->avail_ldb_ports, port, iter) {
> + u32 next, prev;
> + u32 phys_id;
> +
> + phys_id = port->id.phys_id;
> + next = phys_id + 1;
> + prev = phys_id - 1;
> +
> + if (phys_id == DLB_MAX_NUM_LDB_PORTS - 1)
> + next = 0;
> + if (phys_id == 0)
> + prev = DLB_MAX_NUM_LDB_PORTS - 1;
> +
> + if (!hw->rsrcs.ldb_ports[prev].owned &&
> + !hw->rsrcs.ldb_ports[next].owned)
> + return port;
> + }
> +
> + /* If all else fails, the driver returns the next available port. */
> + return DLB_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports, typeof(*port));
> +}
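The three passes above differ only in the neighbor predicate; the
wrap-around next/prev computation is repeated verbatim in each. If this
shared code ever does get touched, modulo arithmetic would fold the two
special cases into one expression, e.g. (sketch; the macro value below
is a placeholder):

#define NUM_PORTS 64	/* stands in for DLB_MAX_NUM_LDB_PORTS */

static void ldb_port_neighbors(unsigned int phys_id,
			       unsigned int *prev, unsigned int *next)
{
	*next = (phys_id + 1) % NUM_PORTS;
	*prev = (phys_id + NUM_PORTS - 1) % NUM_PORTS;
}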
> +
> +static int dlb_attach_ldb_ports(struct dlb_hw *hw,
> + struct dlb_function_resources *rsrcs,
> + struct dlb_domain *domain,
> + u32 num_ports,
> + struct dlb_cmd_response *resp)
> +{
> + unsigned int i, j;
> +
> + if (rsrcs->num_avail_ldb_ports < num_ports) {
> + resp->status = DLB_ST_LDB_PORTS_UNAVAILABLE;
> + return -1;
> + }
> +
> + for (i = 0; i < num_ports; i++) {
> + struct dlb_ldb_port *port;
> +
> + port = dlb_get_next_ldb_port(hw, rsrcs, domain->id.phys_id);
> +
> + if (!port) {
> + DLB_HW_ERR(hw,
> + "[%s()] Internal error: domain validation failed\n",
> + __func__);
> + goto cleanup;
> + }
> +
> + dlb_list_del(&rsrcs->avail_ldb_ports, &port->func_list);
> +
> + port->domain_id = domain->id;
> + port->owned = true;
> +
> + dlb_list_add(&domain->avail_ldb_ports, &port->domain_list);
> + }
> +
> + rsrcs->num_avail_ldb_ports -= num_ports;
> +
> + return 0;
> +
> +cleanup:
> +
> + /* Return the assigned ports */
> + for (j = 0; j < i; j++) {
> + struct dlb_ldb_port *port;
> +
> + port = DLB_FUNC_LIST_HEAD(domain->avail_ldb_ports,
> + typeof(*port));
> + /* Unrecoverable internal error */
> + if (!port)
> + break;
> +
> + port->owned = false;
> +
> + dlb_list_del(&domain->avail_ldb_ports, &port->domain_list);
> +
> + dlb_list_add(&rsrcs->avail_ldb_ports, &port->func_list);
> + }
> +
> + return -EFAULT;
> +}
> +
> +static int dlb_attach_dir_ports(struct dlb_hw *hw,
> + struct dlb_function_resources *rsrcs,
> + struct dlb_domain *domain,
> + u32 num_ports,
> + struct dlb_cmd_response *resp)
> +{
> + unsigned int i, j;
> +
> + if (rsrcs->num_avail_dir_pq_pairs < num_ports) {
> + resp->status = DLB_ST_DIR_PORTS_UNAVAILABLE;
> + return -1;
> + }
> +
> + for (i = 0; i < num_ports; i++) {
> + struct dlb_dir_pq_pair *port;
> +
> + port = DLB_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,
> + typeof(*port));
> + if (!port) {
> + DLB_HW_ERR(hw,
> + "[%s()] Internal error: domain validation failed\n",
> + __func__);
> + goto cleanup;
> + }
> +
> + dlb_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);
> +
> + port->domain_id = domain->id;
> + port->owned = true;
> +
> + dlb_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);
> + }
> +
> + rsrcs->num_avail_dir_pq_pairs -= num_ports;
> +
> + return 0;
> +
> +cleanup:
> +
> + /* Return the assigned ports */
> + for (j = 0; j < i; j++) {
> + struct dlb_dir_pq_pair *port;
> +
> + port = DLB_FUNC_LIST_HEAD(domain->avail_dir_pq_pairs,
> + typeof(*port));
> + /* Unrecoverable internal error */
> + if (!port)
> + break;
> +
> + port->owned = false;
> +
> + dlb_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);
> +
> + dlb_list_add(&rsrcs->avail_dir_pq_pairs, &port->func_list);
> + }
> +
> + return -EFAULT;
> +}
> +
> +static int dlb_attach_ldb_credits(struct dlb_function_resources *rsrcs,
> + struct dlb_domain *domain,
> + u32 num_credits,
> + struct dlb_cmd_response *resp)
> +{
> + struct dlb_bitmap *bitmap = rsrcs->avail_qed_freelist_entries;
> +
> + if (dlb_bitmap_count(bitmap) < (int)num_credits) {
> + resp->status = DLB_ST_LDB_CREDITS_UNAVAILABLE;
> + return -1;
> + }
> +
> + if (num_credits) {
> + int base;
> +
> + base = dlb_bitmap_find_set_bit_range(bitmap, num_credits);
> + if (base < 0)
> + goto error;
> +
> + domain->qed_freelist.base = base;
> + domain->qed_freelist.bound = base + num_credits;
> + domain->qed_freelist.offset = 0;
> +
> + dlb_bitmap_clear_range(bitmap, base, num_credits);
> + }
> +
> + return 0;
> +
> +error:
> + resp->status = DLB_ST_QED_FREELIST_ENTRIES_UNAVAILABLE;
> + return -1;
> +}
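One more observation on the credit attach: the freelist fields are only
initialized here, so their consumption model has to be inferred from
base/bound/offset (this is an inference, not something the patch shows):

struct freelist {
	unsigned int base;	/* first QED entry of the carved-out range */
	unsigned int bound;	/* one past the last entry */
	unsigned int offset;	/* entries consumed so far */
};

/* Hypothetical consumer of the range reserved above */
static int freelist_alloc(struct freelist *fl)
{
	if (fl->base + fl->offset >= fl->bound)
		return -1;	/* range exhausted */
	return (int)(fl->base + fl->offset++);
}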