From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rasesh Mody <rmody@marvell.com>
Cc: Rasesh Mody, Igor Russkikh
Date: Tue, 30 Jun 2020 01:32:12 -0700
Message-ID: <20200630083215.13108-2-rmody@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20200628055850.5275-1-rmody@marvell.com>
References: <20200628055850.5275-1-rmody@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 1/4] net/qede/base: re-arrange few structures for DDC
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

This patch rearranges some of the base driver structures that will also be used by the debug data collection (DDC) implementation. It adds a new file, ecore_hsi_func_common.h, with physical and virtual memory descriptors.
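As a quick illustration of the two descriptors this patch introduces in ecore_hsi_func_common.h, the sketch below wraps an ordinary buffer in a `virt_mem_desc`. The `dma_addr_t`/`u32` typedefs and the `virt_desc_init()` helper are stand-ins for illustration only, not the driver's OSAL definitions:

```c
#include <stdint.h>

/* Stand-ins for the driver's OSAL typedefs (assumption, not the real defs) */
typedef uint64_t dma_addr_t;
typedef uint32_t u32;

/* Physical memory descriptor, as added in ecore_hsi_func_common.h */
struct phys_mem_desc {
	dma_addr_t phys_addr;
	void *virt_addr;
	u32 size; /* In bytes */
};

/* Virtual memory descriptor, as added in ecore_hsi_func_common.h */
struct virt_mem_desc {
	void *ptr;
	u32 size; /* In bytes */
};

/* Hypothetical helper: describe an existing buffer with a virt_mem_desc */
static struct virt_mem_desc virt_desc_init(void *buf, u32 len)
{
	struct virt_mem_desc desc;

	desc.ptr = buf;
	desc.size = len;
	return desc;
}
```

This is the shape consumed by the new `dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE]` member added to `struct ecore_hwfn` below: each debug buffer type gets a pointer plus a byte count.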
Signed-off-by: Rasesh Mody <rmody@marvell.com>
Signed-off-by: Igor Russkikh
---
 drivers/net/qede/base/ecore.h                 |   2 +
 drivers/net/qede/base/ecore_cxt.c             | 140 ++----------------
 drivers/net/qede/base/ecore_cxt.h             | 135 ++++++++++++++++-
 drivers/net/qede/base/ecore_hsi_func_common.h |  17 +++
 drivers/net/qede/base/ecore_init_fw_funcs.h   |   7 -
 5 files changed, 163 insertions(+), 138 deletions(-)
 create mode 100644 drivers/net/qede/base/ecore_hsi_func_common.h

diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 498bb6f09..dc5fe4d80 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -24,6 +24,7 @@
 #include "ecore_hsi_debug_tools.h"
 #include "ecore_hsi_init_func.h"
 #include "ecore_hsi_init_tool.h"
+#include "ecore_hsi_func_common.h"
 #include "ecore_proto_if.h"
 #include "mcp_public.h"
 
@@ -671,6 +672,7 @@ struct ecore_hwfn {
 
 	struct dbg_tools_data dbg_info;
 	void *dbg_user_info;
+	struct virt_mem_desc dbg_arrays[MAX_BIN_DBG_BUFFER_TYPE];
 
 	struct z_stream_s *stream;
 
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index dda47ea67..23f37b1bf 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -20,12 +20,6 @@
 #include "ecore_sriov.h"
 #include "ecore_mcp.h"
 
-/* Max number of connection types in HW (DQ/CDU etc.) */
-#define MAX_CONN_TYPES		PROTOCOLID_COMMON
-#define NUM_TASK_TYPES		2
-#define NUM_TASK_PF_SEGMENTS	4
-#define NUM_TASK_VF_SEGMENTS	1
-
 /* Doorbell-Queue constants */
 #define DQ_RANGE_SHIFT		4
 #define DQ_RANGE_ALIGN		(1 << DQ_RANGE_SHIFT)
@@ -90,128 +84,6 @@ struct src_ent {
 /* Alignment is inherent to the type1_task_context structure */
 #define TYPE1_TASK_CXT_SIZE(p_hwfn) sizeof(union type1_task_context)
 
-/* PF per protocl configuration object */
-#define TASK_SEGMENTS		(NUM_TASK_PF_SEGMENTS + NUM_TASK_VF_SEGMENTS)
-#define TASK_SEGMENT_VF		(NUM_TASK_PF_SEGMENTS)
-
-struct ecore_tid_seg {
-	u32 count;
-	u8 type;
-	bool has_fl_mem;
-};
-
-struct ecore_conn_type_cfg {
-	u32 cid_count;
-	u32 cids_per_vf;
-	struct ecore_tid_seg tid_seg[TASK_SEGMENTS];
-};
-
-/* ILT Client configuration,
- * Per connection type (protocol) resources (cids, tis, vf cids etc.)
- * 1 - for connection context (CDUC) and for each task context we need two
- * values, for regular task context and for force load memory
- */
-#define ILT_CLI_PF_BLOCKS	(1 + NUM_TASK_PF_SEGMENTS * 2)
-#define ILT_CLI_VF_BLOCKS	(1 + NUM_TASK_VF_SEGMENTS * 2)
-#define CDUC_BLK		(0)
-#define SRQ_BLK			(0)
-#define CDUT_SEG_BLK(n)		(1 + (u8)(n))
-#define CDUT_FL_SEG_BLK(n, X)	(1 + (n) + NUM_TASK_##X##_SEGMENTS)
-
-struct ilt_cfg_pair {
-	u32 reg;
-	u32 val;
-};
-
-struct ecore_ilt_cli_blk {
-	u32 total_size;		/* 0 means not active */
-	u32 real_size_in_page;
-	u32 start_line;
-	u32 dynamic_line_offset;
-	u32 dynamic_line_cnt;
-};
-
-struct ecore_ilt_client_cfg {
-	bool active;
-
-	/* ILT boundaries */
-	struct ilt_cfg_pair first;
-	struct ilt_cfg_pair last;
-	struct ilt_cfg_pair p_size;
-
-	/* ILT client blocks for PF */
-	struct ecore_ilt_cli_blk pf_blks[ILT_CLI_PF_BLOCKS];
-	u32 pf_total_lines;
-
-	/* ILT client blocks for VFs */
-	struct ecore_ilt_cli_blk vf_blks[ILT_CLI_VF_BLOCKS];
-	u32 vf_total_lines;
-};
-
-#define MAP_WORD_SIZE		sizeof(unsigned long)
-#define BITS_PER_MAP_WORD	(MAP_WORD_SIZE * 8)
-
-struct ecore_cid_acquired_map {
-	u32 start_cid;
-	u32 max_count;
-	u32 *cid_map;
-};
-
-struct ecore_src_t2 {
-	struct phys_mem_desc *dma_mem;
-	u32 num_pages;
-	u64 first_free;
-	u64 last_free;
-};
-
-struct ecore_cxt_mngr {
-	/* Per protocl configuration */
-	struct ecore_conn_type_cfg conn_cfg[MAX_CONN_TYPES];
-
-	/* computed ILT structure */
-	struct ecore_ilt_client_cfg clients[ILT_CLI_MAX];
-
-	/* Task type sizes */
-	u32 task_type_size[NUM_TASK_TYPES];
-
-	/* total number of VFs for this hwfn -
-	 * ALL VFs are symmetric in terms of HW resources
-	 */
-	u32 vf_count;
-
-	/* Acquired CIDs */
-	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
-	struct ecore_cid_acquired_map *acquired_vf[MAX_CONN_TYPES];
-
-	/* ILT shadow table */
-	struct phys_mem_desc *ilt_shadow;
-	u32 pf_start_line;
-
-	/* Mutex for a dynamic ILT allocation */
-	osal_mutex_t mutex;
-
-	/* SRC T2 */
-	struct ecore_src_t2 src_t2;
-
-	/* The infrastructure originally was very generic and context/task
-	 * oriented - per connection-type we would set how many of those
-	 * are needed, and later when determining how much memory we're
-	 * needing for a given block we'd iterate over all the relevant
-	 * connection-types.
-	 * But since then we've had some additional resources, some of which
-	 * require memory which is indepent of the general context/task
-	 * scheme. We add those here explicitly per-feature.
-	 */
-
-	/* total number of SRQ's for this hwfn */
-	u32 srq_count;
-
-	/* Maximal number of L2 steering filters */
-	u32 arfs_count;
-
-	/* TODO - VF arfs filters ? */
-};
-
 static OSAL_INLINE bool tm_cid_proto(enum protocol_type type)
 {
 	return type == PROTOCOLID_TOE;
@@ -945,7 +817,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
 }
 
 #define for_each_ilt_valid_client(pos, clients)	\
-	for (pos = 0; pos < ILT_CLI_MAX; pos++)		\
+	for (pos = 0; pos < MAX_ILT_CLIENTS; pos++)	\
 		if (!clients[pos].active) {	\
 			continue;		\
 		} else				\
@@ -1238,7 +1110,7 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
 	clients[ILT_CLI_TSDM].p_size.reg = ILT_CFG_REG(TSDM, P_SIZE);
 
 	/* default ILT page size for all clients is 64K */
-	for (i = 0; i < ILT_CLI_MAX; i++)
+	for (i = 0; i < MAX_ILT_CLIENTS; i++)
 		p_mngr->clients[i].p_size.val = ILT_DEFAULT_HW_P_SIZE;
 
 	/* due to removal of ISCSI/FCoE files union type0_task_context
@@ -2306,3 +2178,11 @@ ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
 
 	return ECORE_SUCCESS;
 }
+
+static u16 ecore_blk_calculate_pages(struct ecore_ilt_cli_blk *p_blk)
+{
+	if (p_blk->real_size_in_page == 0)
+		return 0;
+
+	return DIV_ROUND_UP(p_blk->total_size, p_blk->real_size_in_page);
+}
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 55f08027d..7b91b953e 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -31,7 +31,7 @@ enum ilt_clients {
 	ILT_CLI_TSDM,
 	ILT_CLI_RGFS,
 	ILT_CLI_TGFS,
-	ILT_CLI_MAX
+	MAX_ILT_CLIENTS
 };
 
 u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
@@ -212,4 +212,137 @@ enum _ecore_status_t ecore_cxt_free_proto_ilt(struct ecore_hwfn *p_hwfn,
 #define ECORE_CTX_WORKING_MEM 0
 #define ECORE_CTX_FL_MEM 1
 
+/* Max number of connection types in HW (DQ/CDU etc.) */
+#define MAX_CONN_TYPES		PROTOCOLID_COMMON
+#define NUM_TASK_TYPES		2
+#define NUM_TASK_PF_SEGMENTS	4
+#define NUM_TASK_VF_SEGMENTS	1
+
+/* PF per protocl configuration object */
+#define TASK_SEGMENTS		(NUM_TASK_PF_SEGMENTS + NUM_TASK_VF_SEGMENTS)
+#define TASK_SEGMENT_VF		(NUM_TASK_PF_SEGMENTS)
+
+struct ecore_tid_seg {
+	u32 count;
+	u8 type;
+	bool has_fl_mem;
+};
+
+struct ecore_conn_type_cfg {
+	u32 cid_count;
+	u32 cids_per_vf;
+	struct ecore_tid_seg tid_seg[TASK_SEGMENTS];
+};
+
+/* ILT Client configuration,
+ * Per connection type (protocol) resources (cids, tis, vf cids etc.)
+ * 1 - for connection context (CDUC) and for each task context we need two
+ * values, for regular task context and for force load memory
+ */
+#define ILT_CLI_PF_BLOCKS	(1 + NUM_TASK_PF_SEGMENTS * 2)
+#define ILT_CLI_VF_BLOCKS	(1 + NUM_TASK_VF_SEGMENTS * 2)
+#define CDUC_BLK		(0)
+#define SRQ_BLK			(0)
+#define CDUT_SEG_BLK(n)		(1 + (u8)(n))
+#define CDUT_FL_SEG_BLK(n, X)	(1 + (n) + NUM_TASK_##X##_SEGMENTS)
+
+struct ilt_cfg_pair {
+	u32 reg;
+	u32 val;
+};
+
+struct ecore_ilt_cli_blk {
+	u32 total_size;		/* 0 means not active */
+	u32 real_size_in_page;
+	u32 start_line;
+	u32 dynamic_line_offset;
+	u32 dynamic_line_cnt;
+};
+
+struct ecore_ilt_client_cfg {
+	bool active;
+
+	/* ILT boundaries */
+	struct ilt_cfg_pair first;
+	struct ilt_cfg_pair last;
+	struct ilt_cfg_pair p_size;
+
+	/* ILT client blocks for PF */
+	struct ecore_ilt_cli_blk pf_blks[ILT_CLI_PF_BLOCKS];
+	u32 pf_total_lines;
+
+	/* ILT client blocks for VFs */
+	struct ecore_ilt_cli_blk vf_blks[ILT_CLI_VF_BLOCKS];
+	u32 vf_total_lines;
+};
+
+#define MAP_WORD_SIZE		sizeof(unsigned long)
+#define BITS_PER_MAP_WORD	(MAP_WORD_SIZE * 8)
+
+struct ecore_cid_acquired_map {
+	u32 start_cid;
+	u32 max_count;
+	u32 *cid_map;
+};
+
+struct ecore_src_t2 {
+	struct phys_mem_desc *dma_mem;
+	u32 num_pages;
+	u64 first_free;
+	u64 last_free;
+};
+
+struct ecore_cxt_mngr {
+	/* Per protocl configuration */
+	struct ecore_conn_type_cfg conn_cfg[MAX_CONN_TYPES];
+
+	/* computed ILT structure */
+	struct ecore_ilt_client_cfg clients[MAX_ILT_CLIENTS];
+
+	/* Task type sizes */
+	u32 task_type_size[NUM_TASK_TYPES];
+
+	/* total number of VFs for this hwfn -
+	 * ALL VFs are symmetric in terms of HW resources
+	 */
+	u32 vf_count;
+	u32 first_vf_in_pf;
+
+	/* Acquired CIDs */
+	struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
+	struct ecore_cid_acquired_map *acquired_vf[MAX_CONN_TYPES];
+
+	/* ILT shadow table */
+	struct phys_mem_desc *ilt_shadow;
+	u32 ilt_shadow_size;
+	u32 pf_start_line;
+
+	/* Mutex for a dynamic ILT allocation */
+	osal_mutex_t mutex;
+
+	/* SRC T2 */
+	struct ecore_src_t2 src_t2;
+
+	/* The infrastructure originally was very generic and context/task
+	 * oriented - per connection-type we would set how many of those
+	 * are needed, and later when determining how much memory we're
+	 * needing for a given block we'd iterate over all the relevant
+	 * connection-types.
+	 * But since then we've had some additional resources, some of which
+	 * require memory which is indepent of the general context/task
+	 * scheme. We add those here explicitly per-feature.
+	 */
+
+	/* total number of SRQ's for this hwfn */
+	u32 srq_count;
+
+	/* Maximal number of L2 steering filters */
+	u32 arfs_count;
+
+	/* TODO - VF arfs filters ? */
+
+	u8 task_type_id;
+	u16 task_ctx_size;
+	u16 conn_ctx_size;
+};
 #endif /* _ECORE_CID_ */
diff --git a/drivers/net/qede/base/ecore_hsi_func_common.h b/drivers/net/qede/base/ecore_hsi_func_common.h
new file mode 100644
index 000000000..2cd175163
--- /dev/null
+++ b/drivers/net/qede/base/ecore_hsi_func_common.h
@@ -0,0 +1,17 @@
+#ifndef _HSI_FUNC_COMMON_H
+#define _HSI_FUNC_COMMON_H
+
+/* Physical memory descriptor */
+struct phys_mem_desc {
+	dma_addr_t phys_addr;
+	void *virt_addr;
+	u32 size; /* In bytes */
+};
+
+/* Virtual memory descriptor */
+struct virt_mem_desc {
+	void *ptr;
+	u32 size; /* In bytes */
+};
+
+#endif
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 912451662..a393d088f 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -9,13 +9,6 @@
 #include "ecore_hsi_common.h"
 #include "ecore_hsi_eth.h"
 
-/* Physical memory descriptor */
-struct phys_mem_desc {
-	dma_addr_t phys_addr;
-	void *virt_addr;
-	u32 size; /* In bytes */
-};
-
 /* Returns the VOQ based on port and TC */
 #define VOQ(port, tc, max_phys_tcs_per_port) \
 	((tc) == PURE_LB_TC ? NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB + (port) : \
-- 
2.18.0
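The `ecore_blk_calculate_pages()` helper added at the end of ecore_cxt.c is a ceiling division of a block's total size by its per-page size. A standalone sketch, with `DIV_ROUND_UP` and the `u16`/`u32` typedefs filled in as stand-ins for the driver's own definitions:

```c
#include <stdint.h>

/* Stand-ins for the driver's fixed-width typedefs (assumption) */
typedef uint16_t u16;
typedef uint32_t u32;

/* Ceiling integer division, as commonly defined in the base driver */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

struct ecore_ilt_cli_blk {
	u32 total_size;		/* 0 means not active */
	u32 real_size_in_page;
	u32 start_line;
	u32 dynamic_line_offset;
	u32 dynamic_line_cnt;
};

/* Number of ILT pages needed to back a client block; an inactive
 * block (page size 0) takes no pages.
 */
static u16 ecore_blk_calculate_pages(struct ecore_ilt_cli_blk *p_blk)
{
	if (p_blk->real_size_in_page == 0)
		return 0;

	return (u16)DIV_ROUND_UP(p_blk->total_size, p_blk->real_size_in_page);
}
```

With the default 64K ILT page size, a block of 200000 bytes needs 4 pages (3 * 65536 = 196608 < 200000), which is the rounding the helper performs.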