From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com
Date: Fri, 26 May 2017 13:39:18 -0500
Message-Id: <20170526183941.80678-3-ajit.khaparde@broadcom.com>
In-Reply-To: <20170526183941.80678-1-ajit.khaparde@broadcom.com>
References: <20170526183941.80678-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 02/25] bnxt: code reorg to properly allocate resources for PF/VF

1) Move the function reset to bnxt_dev_init. Along the same lines, move interrupt setup, request, and enable to the init path. Memory allocation is also done in the init path.

2) After a function reset, configure the VFs. For now, distribute resources evenly between all functions (PF and VFs); in the future this should be controllable (see the sketch below).

3) struct bnxt_vf_info and struct bnxt_pf_info had a lot of duplication. Move the common fields into struct bnxt; only the PF-specific fields remain in struct bnxt_pf_info.

4) Program the firmware to allow certain commands sent by a VF, since disallowing them would prevent VF drivers from cleaning up properly.

5) Since the PF and VFs allocate resources from a common pool in the hardware, use func_qcaps and func_qcfg to query the capabilities and the available resources.

6) If a PF is being initialized and no VFs are allocated, explicitly call func_cfg to allocate the resources.

7) Once resources are requested from the firmware, update the local copy of the resource counts in struct bnxt only after sending func_qcfg, to make sure the allocation request went through in the firmware.

The changes in this patch are used by subsequent patches to properly initialize the PF/VF instances.

--
v1->v2:
- Regroup related patches per review comments.
- Split bigger patches into smaller ones where possible.
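Not part of the patch itself — a minimal sketch of the even split described in item (2), which is what populate_vf_func_cfg_req() in this patch applies to each resource type: the port-wide maximum reported by func_qcaps is divided across the PF plus all VFs. The helper name per_func_share is illustrative only.

/*
 * Sketch only: divide a port-wide resource maximum evenly between
 * the PF and num_vfs VFs. With max_tx_rings = 16 and num_vfs = 3,
 * each of the 4 functions gets 16 / 4 = 4 TX rings.
 */
static uint16_t per_func_share(uint16_t port_max, int num_vfs)
{
	/* The PF counts as one function, hence the +1. */
	return port_max / (num_vfs + 1);
}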
Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 80 ++++-- drivers/net/bnxt/bnxt_ethdev.c | 155 ++++++++--- drivers/net/bnxt/bnxt_filter.c | 30 +- drivers/net/bnxt/bnxt_hwrm.c | 614 +++++++++++++++++++++++++++++++++++++---- drivers/net/bnxt/bnxt_hwrm.h | 17 +- drivers/net/bnxt/bnxt_vnic.c | 45 +-- drivers/net/bnxt/bnxt_vnic.h | 11 +- 7 files changed, 764 insertions(+), 188 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 4418c7f..0c59c66 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -35,6 +35,7 @@ #define _BNXT_H_ #include +#include #include #include @@ -54,17 +55,19 @@ enum bnxt_hw_context { HW_CONTEXT_IS_LB = 3, }; -struct bnxt_vf_info { - uint16_t fw_fid; - uint8_t mac_addr[ETHER_ADDR_LEN]; - uint16_t max_rsscos_ctx; - uint16_t max_cp_rings; - uint16_t max_tx_rings; - uint16_t max_rx_rings; - uint16_t max_l2_ctx; - uint16_t max_vnics; - uint16_t vlan; - struct bnxt_pf_info *pf; +struct bnxt_vlan_table_entry { + uint16_t tpid; + uint16_t vid; +} __attribute__((packed)); + +struct bnxt_child_vf_info { + void *req_buf; + struct bnxt_vlan_table_entry *vlan_table; + STAILQ_HEAD(, bnxt_filter_info) filter; + uint32_t func_cfg_flags; + uint32_t l2_rx_mask; + uint16_t fid; + bool random_mac; }; struct bnxt_pf_info { @@ -73,22 +76,20 @@ struct bnxt_pf_info { #define BNXT_FIRST_VF_FID 128 #define BNXT_PF_RINGS_USED(bp) bnxt_get_num_queues(bp) #define BNXT_PF_RINGS_AVAIL(bp) (bp->pf.max_cp_rings - BNXT_PF_RINGS_USED(bp)) - uint32_t fw_fid; uint8_t port_id; - uint8_t mac_addr[ETHER_ADDR_LEN]; - uint16_t max_rsscos_ctx; - uint16_t max_cp_rings; - uint16_t max_tx_rings; - uint16_t max_rx_rings; - uint16_t max_l2_ctx; - uint16_t max_vnics; uint16_t first_vf_id; uint16_t active_vfs; uint16_t max_vfs; + uint32_t func_cfg_flags; void *vf_req_buf; phys_addr_t vf_req_buf_dma_addr; uint32_t vf_req_fwd[8]; - struct bnxt_vf_info *vf; + uint16_t total_vnics; + struct bnxt_child_vf_info *vf_info; +#define BNXT_EVB_MODE_NONE 0 +#define BNXT_EVB_MODE_VEB 1 +#define BNXT_EVB_MODE_VEPA 2 + uint8_t evb_mode; }; /* Max wait time is 10 * 100ms = 1s */ @@ -174,12 +175,49 @@ struct bnxt { struct bnxt_link_info link_info; struct bnxt_cos_queue_info cos_queue[BNXT_COS_QUEUE_COUNT]; + uint16_t fw_fid; + uint8_t dflt_mac_addr[ETHER_ADDR_LEN]; + uint16_t max_rsscos_ctx; + uint16_t max_cp_rings; + uint16_t max_tx_rings; + uint16_t max_rx_rings; + uint16_t max_l2_ctx; + uint16_t max_vnics; + uint16_t max_stat_ctx; + uint16_t vlan; struct bnxt_pf_info pf; - struct bnxt_vf_info vf; uint8_t port_partition_type; uint8_t dev_stopped; + uint32_t fw_ver; +}; + +/* + * Response sent back to the caller after callback + */ +enum rte_pmd_bnxt_mb_event_rsp { + RTE_PMD_BNXT_MB_EVENT_NOOP_ACK, /**< skip mbox request and ACK */ + RTE_PMD_BNXT_MB_EVENT_NOOP_NACK, /**< skip mbox request and NACK */ + RTE_PMD_BNXT_MB_EVENT_PROCEED, /**< proceed with mbox request */ + RTE_PMD_BNXT_MB_EVENT_MAX /**< max value of this enum */ +}; + +/* mailbox message types */ +#define BNXT_VF_RESET 0x01 /* VF requests reset */ +#define BNXT_VF_SET_MAC_ADDR 0x02 /* VF requests PF to set MAC addr */ +#define BNXT_VF_SET_VLAN 0x03 /* VF requests PF to set VLAN */ +#define BNXT_VF_SET_MTU 0x04 /* VF requests PF to set MTU */ +#define BNXT_VF_SET_MRU 0x05 /* VF requests PF to set MRU */ + +/* + * Data sent to the caller when the callback is executed. 
+ */ +struct rte_pmd_bnxt_mb_event_param { + uint16_t vf_id; /* Virtual Function number */ + int retval; /* return value */ + void *msg; /* pointer to message */ }; int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete); +int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg); #endif diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index bb87361..f127d3a 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -345,18 +345,12 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev, dev_info->max_hash_mac_addrs = 0; /* PF/VF specifics */ - if (BNXT_PF(bp)) { - dev_info->max_rx_queues = bp->pf.max_rx_rings; - dev_info->max_tx_queues = bp->pf.max_tx_rings; - dev_info->max_vfs = bp->pf.active_vfs; - dev_info->reta_size = bp->pf.max_rsscos_ctx; - max_vnics = bp->pf.max_vnics; - } else { - dev_info->max_rx_queues = bp->vf.max_rx_rings; - dev_info->max_tx_queues = bp->vf.max_tx_rings; - dev_info->reta_size = bp->vf.max_rsscos_ctx; - max_vnics = bp->vf.max_vnics; - } + if (BNXT_PF(bp)) + dev_info->max_vfs = bp->pdev->max_vfs; + dev_info->max_rx_queues = bp->max_rx_rings; + dev_info->max_tx_queues = bp->max_tx_rings; + dev_info->reta_size = bp->max_rsscos_ctx; + max_vnics = bp->max_vnics; /* Fast path specifics */ dev_info->min_rx_bufsize = 1; @@ -488,41 +482,18 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev) int rc; bp->dev_stopped = 0; - rc = bnxt_hwrm_func_reset(bp); - if (rc) { - RTE_LOG(ERR, PMD, "hwrm chip reset failure rc: %x\n", rc); - rc = -1; - goto error; - } - - rc = bnxt_setup_int(bp); - if (rc) - goto error; - - rc = bnxt_alloc_mem(bp); - if (rc) - goto error; - - rc = bnxt_request_int(bp); - if (rc) - goto error; rc = bnxt_init_nic(bp); if (rc) goto error; - bnxt_enable_int(bp); - bnxt_link_update_op(eth_dev, 0); return 0; error: bnxt_shutdown_nic(bp); - bnxt_disable_int(bp); - bnxt_free_int(bp); bnxt_free_tx_mbufs(bp); bnxt_free_rx_mbufs(bp); - bnxt_free_mem(bp); return rc; } @@ -554,8 +525,6 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev) eth_dev->data->dev_link.link_status = 0; } bnxt_set_hwrm_link_config(bp, false); - bnxt_disable_int(bp); - bnxt_free_int(bp); bnxt_shutdown_nic(bp); bp->dev_stopped = 1; } @@ -1082,6 +1051,12 @@ static int bnxt_init_board(struct rte_eth_dev *eth_dev) static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev); +#define ALLOW_FUNC(x) \ + { \ + typeof(x) arg = (x); \ + bp->pf.vf_req_fwd[((arg) >> 5)] &= \ + ~rte_cpu_to_le_32(1 << ((arg) & 0x1f)); \ + } static int bnxt_dev_init(struct rte_eth_dev *eth_dev) { @@ -1097,6 +1072,7 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) eth_dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE; bp = eth_dev->data->dev_private; + bp->dev_stopped = 1; if (bnxt_vf_pciid(pci_dev->id.device_id)) bp->flags |= BNXT_FLAG_VF; @@ -1130,6 +1106,11 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) RTE_LOG(ERR, PMD, "hwrm query capability failure rc: %x\n", rc); goto error_free; } + if (bp->max_tx_rings == 0) { + RTE_LOG(ERR, PMD, "No TX rings available!\n"); + rc = -EBUSY; + goto error_free; + } eth_dev->data->mac_addrs = rte_zmalloc("bnxt_mac_addr_tbl", ETHER_ADDR_LEN * MAX_NUM_MAC_ADDR, 0); if (eth_dev->data->mac_addrs == NULL) { @@ -1140,10 +1121,7 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) goto error_free; } /* Copy the permanent MAC from the qcap response address now. 
*/ - if (BNXT_PF(bp)) - memcpy(bp->mac_addr, bp->pf.mac_addr, sizeof(bp->mac_addr)); - else - memcpy(bp->mac_addr, bp->vf.mac_addr, sizeof(bp->mac_addr)); + memcpy(bp->mac_addr, bp->dflt_mac_addr, sizeof(bp->mac_addr)); memcpy(ð_dev->data->mac_addrs[0], bp->mac_addr, ETHER_ADDR_LEN); bp->grp_info = rte_zmalloc("bnxt_grp_info", sizeof(*bp->grp_info) * bp->max_ring_grps, 0); @@ -1155,8 +1133,29 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) goto error_free; } - rc = bnxt_hwrm_func_driver_register(bp, 0, - bp->pf.vf_req_fwd); + /* Forward all requests if firmware is new enough */ + if (((bp->fw_ver >= ((20 << 24) | (6 << 16) | (100 << 8))) && + (bp->fw_ver < ((20 << 24) | (7 << 16)))) || + ((bp->fw_ver >= ((20 << 24) | (8 << 16))))) { + memset(bp->pf.vf_req_fwd, 0xff, sizeof(bp->pf.vf_req_fwd)); + } else { + RTE_LOG(WARNING, PMD, + "Firmware too old for VF mailbox functionality\n"); + memset(bp->pf.vf_req_fwd, 0, sizeof(bp->pf.vf_req_fwd)); + } + + /* + * The following are used for driver cleanup. If we disallow these, + * VF drivers can't clean up cleanly. + */ + ALLOW_FUNC(HWRM_FUNC_DRV_UNRGTR); + ALLOW_FUNC(HWRM_VNIC_FREE); + ALLOW_FUNC(HWRM_RING_FREE); + ALLOW_FUNC(HWRM_RING_GRP_FREE); + ALLOW_FUNC(HWRM_VNIC_RSS_COS_LB_CTX_FREE); + ALLOW_FUNC(HWRM_CFA_L2_FILTER_FREE); + ALLOW_FUNC(HWRM_STAT_CTX_FREE); + rc = bnxt_hwrm_func_driver_register(bp); if (rc) { RTE_LOG(ERR, PMD, "Failed to register driver"); @@ -1169,10 +1168,55 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev) pci_dev->mem_resource[0].phys_addr, pci_dev->mem_resource[0].addr); - bp->dev_stopped = 0; + rc = bnxt_hwrm_func_reset(bp); + if (rc) { + RTE_LOG(ERR, PMD, "hwrm chip reset failure rc: %x\n", rc); + rc = -1; + goto error_free; + } + + if (BNXT_PF(bp)) { + //if (bp->pf.active_vfs) { + // TODO: Deallocate VF resources? 
+ //} + if (bp->pdev->max_vfs) { + rc = bnxt_hwrm_allocate_vfs(bp, bp->pdev->max_vfs); + if (rc) { + RTE_LOG(ERR, PMD, "Failed to allocate VFs\n"); + goto error_free; + } + } else { + rc = bnxt_hwrm_allocate_pf_only(bp); + if (rc) { + RTE_LOG(ERR, PMD, + "Failed to allocate PF resources\n"); + goto error_free; + } + } + } + + rc = bnxt_setup_int(bp); + if (rc) + goto error_free; + + rc = bnxt_alloc_mem(bp); + if (rc) + goto error_free_int; + + rc = bnxt_request_int(bp); + if (rc) + goto error_free_int; + + bnxt_enable_int(bp); return 0; +error_free_int: + bnxt_disable_int(bp); + bnxt_free_def_cp_ring(bp); + bnxt_hwrm_func_buf_unrgtr(bp); + bnxt_free_int(bp); + bnxt_free_mem(bp); error_free: bnxt_dev_uninit(eth_dev); error: @@ -1184,6 +1228,9 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev) { struct bnxt *bp = eth_dev->data->dev_private; int rc; + bnxt_disable_int(bp); + bnxt_free_int(bp); + bnxt_free_mem(bp); if (eth_dev->data->mac_addrs != NULL) { rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; @@ -1196,6 +1243,8 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev) { bnxt_free_hwrm_resources(bp); if (bp->dev_stopped == 0) bnxt_dev_close_op(eth_dev); + if (bp->pf.vf_info) + rte_free(bp->pf.vf_info); eth_dev->dev_ops = NULL; eth_dev->rx_pkt_burst = NULL; eth_dev->tx_pkt_burst = NULL; @@ -1203,6 +1252,24 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev) { return rc; } +int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg) +{ + struct rte_pmd_bnxt_mb_event_param cb_param; + + cb_param.retval = RTE_PMD_BNXT_MB_EVENT_PROCEED; + cb_param.vf_id = vf_id; + cb_param.msg = msg; + + _rte_eth_dev_callback_process(bp->eth_dev, RTE_ETH_EVENT_VF_MBOX, + &cb_param); + + /* Default to approve */ + if (cb_param.retval == RTE_PMD_BNXT_MB_EVENT_PROCEED) + cb_param.retval = RTE_PMD_BNXT_MB_EVENT_NOOP_ACK; + + return cb_param.retval == RTE_PMD_BNXT_MB_EVENT_NOOP_ACK ? 
true : false; +} + static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c index df1042c..137c7b7 100644 --- a/drivers/net/bnxt/bnxt_filter.c +++ b/drivers/net/bnxt/bnxt_filter.c @@ -73,15 +73,7 @@ void bnxt_init_filters(struct bnxt *bp) struct bnxt_filter_info *filter; int i, max_filters; - if (BNXT_PF(bp)) { - struct bnxt_pf_info *pf = &bp->pf; - - max_filters = pf->max_l2_ctx; - } else { - struct bnxt_vf_info *vf = &bp->vf; - - max_filters = vf->max_l2_ctx; - } + max_filters = bp->max_l2_ctx; STAILQ_INIT(&bp->free_filter_list); for (i = 0; i < max_filters; i++) { filter = &bp->filter_info[i]; @@ -122,15 +114,7 @@ void bnxt_free_filter_mem(struct bnxt *bp) return; /* Ensure that all filters are freed */ - if (BNXT_PF(bp)) { - struct bnxt_pf_info *pf = &bp->pf; - - max_filters = pf->max_l2_ctx; - } else { - struct bnxt_vf_info *vf = &bp->vf; - - max_filters = vf->max_l2_ctx; - } + max_filters = bp->max_l2_ctx; for (i = 0; i < max_filters; i++) { filter = &bp->filter_info[i]; if (filter->fw_l2_filter_id != ((uint64_t)-1)) { @@ -155,15 +139,7 @@ int bnxt_alloc_filter_mem(struct bnxt *bp) struct bnxt_filter_info *filter_mem; uint16_t max_filters; - if (BNXT_PF(bp)) { - struct bnxt_pf_info *pf = &bp->pf; - - max_filters = pf->max_l2_ctx; - } else { - struct bnxt_vf_info *vf = &bp->vf; - - max_filters = vf->max_l2_ctx; - } + max_filters = bp->max_l2_ctx; /* Allocate memory for VNIC pool and filter pool */ filter_mem = rte_zmalloc("bnxt_filter_info", max_filters * sizeof(struct bnxt_filter_info), diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 9100b7b..7aece5d 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -31,6 +31,8 @@ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ +#include + #include #include #include @@ -54,6 +56,38 @@ #define HWRM_CMD_TIMEOUT 2000 +struct bnxt_plcmodes_cfg { + uint32_t flags; + uint16_t jumbo_thresh; + uint16_t hds_offset; + uint16_t hds_threshold; +}; + +static int page_getenum(size_t size) +{ + if (size <= 1 << 4) + return 4; + if (size <= 1 << 12) + return 12; + if (size <= 1 << 13) + return 13; + if (size <= 1 << 16) + return 16; + if (size <= 1 << 21) + return 21; + if (size <= 1 << 22) + return 22; + if (size <= 1 << 30) + return 30; + RTE_LOG(ERR, PMD, "Page size %zu out of range\n", size); + return sizeof(void *) * 8 - 1; +} + +static int page_roundup(size_t size) +{ + return 1 << page_getenum(size); +} + /* * HWRM Functions (sent to HWRM) * These are named bnxt_hwrm_*() and return -1 if bnxt_hwrm_send_message() @@ -275,6 +309,8 @@ int bnxt_hwrm_func_qcaps(struct bnxt *bp) int rc = 0; struct hwrm_func_qcaps_input req = {.req_type = 0 }; struct hwrm_func_qcaps_output *resp = bp->hwrm_cmd_resp_addr; + uint16_t new_max_vfs; + int i; HWRM_PREP(req, FUNC_QCAPS, -1, resp); @@ -286,31 +322,52 @@ int bnxt_hwrm_func_qcaps(struct bnxt *bp) bp->max_ring_grps = rte_le_to_cpu_32(resp->max_hw_ring_grps); if (BNXT_PF(bp)) { - struct bnxt_pf_info *pf = &bp->pf; - - pf->fw_fid = rte_le_to_cpu_32(resp->fid); - pf->port_id = resp->port_id; - memcpy(pf->mac_addr, resp->mac_address, ETHER_ADDR_LEN); - pf->max_rsscos_ctx = rte_le_to_cpu_16(resp->max_rsscos_ctx); - pf->max_cp_rings = rte_le_to_cpu_16(resp->max_cmpl_rings); - pf->max_tx_rings = rte_le_to_cpu_16(resp->max_tx_rings); - pf->max_rx_rings = rte_le_to_cpu_16(resp->max_rx_rings); - pf->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs); - pf->max_vnics = rte_le_to_cpu_16(resp->max_vnics); - pf->first_vf_id = rte_le_to_cpu_16(resp->first_vf_id); - pf->max_vfs = rte_le_to_cpu_16(resp->max_vfs); + bp->pf.port_id = resp->port_id; + bp->pf.first_vf_id = rte_le_to_cpu_16(resp->first_vf_id); + new_max_vfs = bp->pdev->max_vfs; + if (new_max_vfs != bp->pf.max_vfs) { + if (bp->pf.vf_info) + rte_free(bp->pf.vf_info); + bp->pf.vf_info = rte_malloc("bnxt_vf_info", + sizeof(bp->pf.vf_info[0]) * new_max_vfs, 0); + bp->pf.max_vfs = new_max_vfs; + for (i = 0; i < new_max_vfs; i++) { + bp->pf.vf_info[i].fid = bp->pf.first_vf_id + i; + bp->pf.vf_info[i].vlan_table = + rte_zmalloc("VF VLAN table", + getpagesize(), + getpagesize()); + if (bp->pf.vf_info[i].vlan_table == NULL) + RTE_LOG(ERR, PMD, + "Fail to alloc VLAN table for VF %d\n", + i); + else + rte_mem_lock_page( + bp->pf.vf_info[i].vlan_table); + STAILQ_INIT(&bp->pf.vf_info[i].filter); + } + } + } + + bp->fw_fid = rte_le_to_cpu_32(resp->fid); + memcpy(bp->dflt_mac_addr, &resp->mac_address, ETHER_ADDR_LEN); + bp->max_rsscos_ctx = rte_le_to_cpu_16(resp->max_rsscos_ctx); + bp->max_cp_rings = rte_le_to_cpu_16(resp->max_cmpl_rings); + bp->max_tx_rings = rte_le_to_cpu_16(resp->max_tx_rings); + bp->max_rx_rings = rte_le_to_cpu_16(resp->max_rx_rings); + bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs); + /* TODO: For now, do not support VMDq/RFS on VFs. 
*/ + if (BNXT_PF(bp)) { + if (bp->pf.max_vfs) + bp->max_vnics = 1; + else + bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics); } else { - struct bnxt_vf_info *vf = &bp->vf; - - vf->fw_fid = rte_le_to_cpu_32(resp->fid); - memcpy(vf->mac_addr, &resp->mac_address, ETHER_ADDR_LEN); - vf->max_rsscos_ctx = rte_le_to_cpu_16(resp->max_rsscos_ctx); - vf->max_cp_rings = rte_le_to_cpu_16(resp->max_cmpl_rings); - vf->max_tx_rings = rte_le_to_cpu_16(resp->max_tx_rings); - vf->max_rx_rings = rte_le_to_cpu_16(resp->max_rx_rings); - vf->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs); - vf->max_vnics = rte_le_to_cpu_16(resp->max_vnics); + bp->max_vnics = 1; } + bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx); + if (BNXT_PF(bp)) + bp->pf.total_vnics = rte_le_to_cpu_16(resp->max_vnics); return rc; } @@ -332,8 +389,7 @@ int bnxt_hwrm_func_reset(struct bnxt *bp) return rc; } -int bnxt_hwrm_func_driver_register(struct bnxt *bp, uint32_t flags, - uint32_t *vf_req_fwd) +int bnxt_hwrm_func_driver_register(struct bnxt *bp) { int rc; struct hwrm_func_drv_rgtr_input req = {.req_type = 0 }; @@ -343,16 +399,22 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp, uint32_t flags, return 0; HWRM_PREP(req, FUNC_DRV_RGTR, -1, resp); - req.flags = flags; - req.enables = HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_VER | - HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_ASYNC_EVENT_FWD; + req.enables = rte_cpu_to_le_32(HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_VER | + HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_ASYNC_EVENT_FWD); req.ver_maj = RTE_VER_YEAR; req.ver_min = RTE_VER_MONTH; req.ver_upd = RTE_VER_MINOR; - memcpy(req.vf_req_fwd, vf_req_fwd, sizeof(req.vf_req_fwd)); + if (BNXT_PF(bp)) { + req.enables |= rte_cpu_to_le_32( + HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_VF_INPUT_FWD); + memcpy(req.vf_req_fwd, bp->pf.vf_req_fwd, + RTE_MIN(sizeof(req.vf_req_fwd), + sizeof(bp->pf.vf_req_fwd))); + } req.async_event_fwd[0] |= rte_cpu_to_le_32(0x1); /* TODO: Use MACRO */ + memset(req.async_event_fwd, 0xff, sizeof(req.async_event_fwd)); rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); @@ -391,6 +453,8 @@ int bnxt_hwrm_ver_get(struct bnxt *bp) resp->hwrm_intf_maj, resp->hwrm_intf_min, resp->hwrm_intf_upd, resp->hwrm_fw_maj, resp->hwrm_fw_min, resp->hwrm_fw_bld); + bp->fw_ver = (resp->hwrm_fw_maj << 24) | (resp->hwrm_fw_min << 16) | + (resp->hwrm_fw_bld << 8) | resp->hwrm_fw_rsvd; RTE_LOG(INFO, PMD, "Driver HWRM version: %d.%d.%d\n", HWRM_VERSION_MAJOR, HWRM_VERSION_MINOR, HWRM_VERSION_UPDATE); @@ -825,10 +889,12 @@ int bnxt_hwrm_vnic_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic) } vnic->fw_grp_ids[j] = bp->grp_info[i].fw_grp_id; } - - vnic->fw_rss_cos_lb_ctx = (uint16_t)HWRM_NA_SIGNATURE; - vnic->ctx_is_rss_cos_lb = HW_CONTEXT_NONE; - + vnic->dflt_ring_grp = bp->grp_info[vnic->start_grp_id].fw_grp_id; + vnic->rss_rule = (uint16_t)HWRM_NA_SIGNATURE; + vnic->cos_rule = (uint16_t)HWRM_NA_SIGNATURE; + vnic->lb_rule = (uint16_t)HWRM_NA_SIGNATURE; + vnic->mru = bp->eth_dev->data->mtu + ETHER_HDR_LEN + + ETHER_CRC_LEN + VLAN_TAG_SIZE; HWRM_PREP(req, VNIC_ALLOC, -1, resp); rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); @@ -844,27 +910,45 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic) int rc = 0; struct hwrm_vnic_cfg_input req = {.req_type = 0 }; struct hwrm_vnic_cfg_output *resp = bp->hwrm_cmd_resp_addr; + uint32_t ctx_enable_flag = HWRM_VNIC_CFG_INPUT_ENABLES_RSS_RULE; HWRM_PREP(req, VNIC_CFG, -1, resp); /* Only RSS support for now TBD: COS & LB */ req.enables = rte_cpu_to_le_32(HWRM_VNIC_CFG_INPUT_ENABLES_DFLT_RING_GRP | - 
HWRM_VNIC_CFG_INPUT_ENABLES_RSS_RULE | HWRM_VNIC_CFG_INPUT_ENABLES_MRU); + if (vnic->lb_rule != 0xffff) + ctx_enable_flag = HWRM_VNIC_CFG_INPUT_ENABLES_LB_RULE; + if (vnic->cos_rule != 0xffff) + ctx_enable_flag = HWRM_VNIC_CFG_INPUT_ENABLES_COS_RULE; + if (vnic->rss_rule != 0xffff) + ctx_enable_flag = HWRM_VNIC_CFG_INPUT_ENABLES_RSS_RULE; + req.enables |= rte_cpu_to_le_32(ctx_enable_flag); req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id); - req.dflt_ring_grp = - rte_cpu_to_le_16(bp->grp_info[vnic->start_grp_id].fw_grp_id); - req.rss_rule = rte_cpu_to_le_16(vnic->fw_rss_cos_lb_ctx); - req.cos_rule = rte_cpu_to_le_16(0xffff); - req.lb_rule = rte_cpu_to_le_16(0xffff); - req.mru = rte_cpu_to_le_16(bp->eth_dev->data->mtu + ETHER_HDR_LEN + - ETHER_CRC_LEN + VLAN_TAG_SIZE); + req.dflt_ring_grp = rte_cpu_to_le_16(vnic->dflt_ring_grp); + req.rss_rule = rte_cpu_to_le_16(vnic->rss_rule); + req.cos_rule = rte_cpu_to_le_16(vnic->cos_rule); + req.lb_rule = rte_cpu_to_le_16(vnic->lb_rule); + req.mru = rte_cpu_to_le_16(vnic->mru); if (vnic->func_default) - req.flags = 1; + req.flags |= + rte_cpu_to_le_32(HWRM_VNIC_CFG_INPUT_FLAGS_DEFAULT); if (vnic->vlan_strip) req.flags |= rte_cpu_to_le_32(HWRM_VNIC_CFG_INPUT_FLAGS_VLAN_STRIP_MODE); + if (vnic->bd_stall) + req.flags |= + rte_cpu_to_le_32(HWRM_VNIC_CFG_INPUT_FLAGS_BD_STALL_MODE); + if (vnic->roce_dual) + req.flags |= rte_cpu_to_le_32( + HWRM_VNIC_QCFG_OUTPUT_FLAGS_ROCE_DUAL_VNIC_MODE); + if (vnic->roce_only) + req.flags |= rte_cpu_to_le_32( + HWRM_VNIC_QCFG_OUTPUT_FLAGS_ROCE_ONLY_VNIC_MODE); + if (vnic->rss_dflt_cr) + req.flags |= rte_cpu_to_le_32( + HWRM_VNIC_QCFG_OUTPUT_FLAGS_RSS_DFLT_CR_MODE); rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); @@ -873,6 +957,45 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic) return rc; } +int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic, + int16_t fw_vf_id) +{ + int rc = 0; + struct hwrm_vnic_qcfg_input req = {.req_type = 0 }; + struct hwrm_vnic_qcfg_output *resp = bp->hwrm_cmd_resp_addr; + + HWRM_PREP(req, VNIC_QCFG, -1, resp); + + req.enables = + rte_cpu_to_le_32(HWRM_VNIC_QCFG_INPUT_ENABLES_VF_ID_VALID); + req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id); + req.vf_id = rte_cpu_to_le_16(fw_vf_id); + + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + + HWRM_CHECK_RESULT; + + vnic->dflt_ring_grp = rte_le_to_cpu_16(resp->dflt_ring_grp); + vnic->rss_rule = rte_le_to_cpu_16(resp->rss_rule); + vnic->cos_rule = rte_le_to_cpu_16(resp->cos_rule); + vnic->lb_rule = rte_le_to_cpu_16(resp->lb_rule); + vnic->mru = rte_le_to_cpu_16(resp->mru); + vnic->func_default = rte_le_to_cpu_32( + resp->flags) & HWRM_VNIC_QCFG_OUTPUT_FLAGS_DEFAULT; + vnic->vlan_strip = rte_le_to_cpu_32(resp->flags) & + HWRM_VNIC_QCFG_OUTPUT_FLAGS_VLAN_STRIP_MODE; + vnic->bd_stall = rte_le_to_cpu_32(resp->flags) & + HWRM_VNIC_QCFG_OUTPUT_FLAGS_BD_STALL_MODE; + vnic->roce_dual = rte_le_to_cpu_32(resp->flags) & + HWRM_VNIC_QCFG_OUTPUT_FLAGS_ROCE_DUAL_VNIC_MODE; + vnic->roce_only = rte_le_to_cpu_32(resp->flags) & + HWRM_VNIC_QCFG_OUTPUT_FLAGS_ROCE_ONLY_VNIC_MODE; + vnic->rss_dflt_cr = rte_le_to_cpu_32(resp->flags) & + HWRM_VNIC_QCFG_OUTPUT_FLAGS_RSS_DFLT_CR_MODE; + + return rc; +} + int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic) { int rc = 0; @@ -886,7 +1009,7 @@ int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic) HWRM_CHECK_RESULT; - vnic->fw_rss_cos_lb_ctx = rte_le_to_cpu_16(resp->rss_cos_lb_ctx_id); + vnic->rss_rule = rte_le_to_cpu_16(resp->rss_cos_lb_ctx_id); 
return rc; } @@ -900,13 +1023,13 @@ int bnxt_hwrm_vnic_ctx_free(struct bnxt *bp, struct bnxt_vnic_info *vnic) HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_FREE, -1, resp); - req.rss_cos_lb_ctx_id = rte_cpu_to_le_16(vnic->fw_rss_cos_lb_ctx); + req.rss_cos_lb_ctx_id = rte_cpu_to_le_16(vnic->rss_rule); rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); HWRM_CHECK_RESULT; - vnic->fw_rss_cos_lb_ctx = INVALID_HW_RING_ID; + vnic->rss_rule = INVALID_HW_RING_ID; return rc; } @@ -947,7 +1070,7 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp, rte_cpu_to_le_64(vnic->rss_table_dma_addr); req.hash_key_tbl_addr = rte_cpu_to_le_64(vnic->rss_hash_key_dma_addr); - req.rss_ctx_idx = rte_cpu_to_le_16(vnic->fw_rss_cos_lb_ctx); + req.rss_ctx_idx = rte_cpu_to_le_16(vnic->rss_rule); rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); @@ -956,6 +1079,28 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp, return rc; } +int bnxt_hwrm_func_vf_mac(struct bnxt *bp, uint16_t vf, const uint8_t *mac_addr) +{ + struct hwrm_func_cfg_input req = {0}; + struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr; + int rc; + + req.flags = rte_cpu_to_le_32(bp->pf.vf_info[vf].func_cfg_flags); + req.enables = rte_cpu_to_le_32( + HWRM_FUNC_CFG_INPUT_ENABLES_DFLT_MAC_ADDR); + memcpy(req.dflt_mac_addr, mac_addr, sizeof(req.dflt_mac_addr)); + req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid); + + HWRM_PREP(req, FUNC_CFG, -1, resp); + + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + HWRM_CHECK_RESULT; + + bp->pf.vf_info[vf].random_mac = false; + + return rc; +} + /* * HWRM utility functions */ @@ -1217,7 +1362,8 @@ void bnxt_free_all_hwrm_resources(struct bnxt *bp) return; vnic = &bp->vnic_info[0]; - bnxt_hwrm_cfa_l2_clear_rx_mask(bp, vnic); + if (BNXT_PF(bp)) + bnxt_hwrm_cfa_l2_clear_rx_mask(bp, vnic); /* VNIC resources */ for (i = 0; i < bp->nr_vnics; i++) { @@ -1512,12 +1658,8 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp) HWRM_CHECK_RESULT; - if (BNXT_VF(bp)) { - struct bnxt_vf_info *vf = &bp->vf; - - /* Hard Coded.. 0xfff VLAN ID mask */ - vf->vlan = rte_le_to_cpu_16(resp->vlan) & 0xfff; - } + /* Hard Coded.. 
0xfff VLAN ID mask */ + bp->vlan = rte_le_to_cpu_16(resp->vlan) & 0xfff; switch (resp->port_partition_type) { case HWRM_FUNC_QCFG_OUTPUT_PORT_PARTITION_TYPE_NPAR1_0: @@ -1532,3 +1674,369 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp) return rc; } + +static void copy_func_cfg_to_qcaps(struct hwrm_func_cfg_input *fcfg, + struct hwrm_func_qcaps_output *qcaps) +{ + qcaps->max_rsscos_ctx = fcfg->num_rsscos_ctxs; + memcpy(qcaps->mac_address, fcfg->dflt_mac_addr, + sizeof(qcaps->mac_address)); + qcaps->max_l2_ctxs = fcfg->num_l2_ctxs; + qcaps->max_rx_rings = fcfg->num_rx_rings; + qcaps->max_tx_rings = fcfg->num_tx_rings; + qcaps->max_cmpl_rings = fcfg->num_cmpl_rings; + qcaps->max_stat_ctx = fcfg->num_stat_ctxs; + qcaps->max_vfs = 0; + qcaps->first_vf_id = 0; + qcaps->max_vnics = fcfg->num_vnics; + qcaps->max_decap_records = 0; + qcaps->max_encap_records = 0; + qcaps->max_tx_wm_flows = 0; + qcaps->max_tx_em_flows = 0; + qcaps->max_rx_wm_flows = 0; + qcaps->max_rx_em_flows = 0; + qcaps->max_flow_id = 0; + qcaps->max_mcast_filters = fcfg->num_mcast_filters; + qcaps->max_sp_tx_rings = 0; + qcaps->max_hw_ring_grps = fcfg->num_hw_ring_grps; +} + +static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp, int tx_rings) +{ + struct hwrm_func_cfg_input req = {0}; + struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr; + int rc; + + req.enables = rte_cpu_to_le_32(HWRM_FUNC_CFG_INPUT_ENABLES_MTU | + HWRM_FUNC_CFG_INPUT_ENABLES_MRU | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_RSSCOS_CTXS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_STAT_CTXS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_CMPL_RINGS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_TX_RINGS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_RX_RINGS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_L2_CTXS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_VNICS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_HW_RING_GRPS); + req.flags = rte_cpu_to_le_32(bp->pf.func_cfg_flags); + req.mtu = rte_cpu_to_le_16(bp->eth_dev->data->mtu + ETHER_HDR_LEN + + ETHER_CRC_LEN + VLAN_TAG_SIZE); + req.mru = rte_cpu_to_le_16(bp->eth_dev->data->mtu + ETHER_HDR_LEN + + ETHER_CRC_LEN + VLAN_TAG_SIZE); + req.num_rsscos_ctxs = rte_cpu_to_le_16(bp->max_rsscos_ctx); + req.num_stat_ctxs = rte_cpu_to_le_16(bp->max_stat_ctx); + req.num_cmpl_rings = rte_cpu_to_le_16(bp->max_cp_rings); + req.num_tx_rings = rte_cpu_to_le_16(tx_rings); + req.num_rx_rings = rte_cpu_to_le_16(bp->max_rx_rings); + req.num_l2_ctxs = rte_cpu_to_le_16(bp->max_l2_ctx); + req.num_vnics = rte_cpu_to_le_16(bp->max_vnics); + req.num_hw_ring_grps = rte_cpu_to_le_16(bp->max_ring_grps); + req.fid = rte_cpu_to_le_16(0xffff); + + HWRM_PREP(req, FUNC_CFG, -1, resp); + + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + HWRM_CHECK_RESULT; + + return rc; +} + +static void populate_vf_func_cfg_req(struct bnxt *bp, + struct hwrm_func_cfg_input *req, + int num_vfs) +{ + req->enables = rte_cpu_to_le_32(HWRM_FUNC_CFG_INPUT_ENABLES_MTU | + HWRM_FUNC_CFG_INPUT_ENABLES_MRU | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_RSSCOS_CTXS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_STAT_CTXS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_CMPL_RINGS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_TX_RINGS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_RX_RINGS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_L2_CTXS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_VNICS | + HWRM_FUNC_CFG_INPUT_ENABLES_NUM_HW_RING_GRPS); + + req->mtu = rte_cpu_to_le_16(bp->eth_dev->data->mtu + ETHER_HDR_LEN + + ETHER_CRC_LEN + VLAN_TAG_SIZE); + req->mru = rte_cpu_to_le_16(bp->eth_dev->data->mtu + ETHER_HDR_LEN + + ETHER_CRC_LEN + VLAN_TAG_SIZE); + req->num_rsscos_ctxs = rte_cpu_to_le_16(bp->max_rsscos_ctx / + (num_vfs 
+ 1)); + req->num_stat_ctxs = rte_cpu_to_le_16(bp->max_stat_ctx / (num_vfs + 1)); + req->num_cmpl_rings = rte_cpu_to_le_16(bp->max_cp_rings / + (num_vfs + 1)); + req->num_tx_rings = rte_cpu_to_le_16(bp->max_tx_rings / (num_vfs + 1)); + req->num_rx_rings = rte_cpu_to_le_16(bp->max_rx_rings / (num_vfs + 1)); + req->num_l2_ctxs = rte_cpu_to_le_16(bp->max_l2_ctx / (num_vfs + 1)); + /* TODO: For now, do not support VMDq/RFS on VFs. */ + req->num_vnics = rte_cpu_to_le_16(1); + req->num_hw_ring_grps = rte_cpu_to_le_16(bp->max_ring_grps / + (num_vfs + 1)); +} + +static void reserve_resources_from_vf(struct bnxt *bp, + struct hwrm_func_cfg_input *cfg_req, + int vf) +{ + struct hwrm_func_qcaps_input req = {0}; + struct hwrm_func_qcaps_output *resp = bp->hwrm_cmd_resp_addr; + int rc; + + /* Get the actual allocated values now */ + HWRM_PREP(req, FUNC_QCAPS, -1, resp); + req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + + if (rc) { + RTE_LOG(ERR, PMD, "hwrm_func_qcaps failed rc:%d\n", rc); + copy_func_cfg_to_qcaps(cfg_req, resp); + } else if (resp->error_code) { + rc = rte_le_to_cpu_16(resp->error_code); + RTE_LOG(ERR, PMD, "hwrm_func_qcaps error %d\n", rc); + copy_func_cfg_to_qcaps(cfg_req, resp); + } + + bp->max_rsscos_ctx -= rte_le_to_cpu_16(resp->max_rsscos_ctx); + bp->max_stat_ctx -= rte_le_to_cpu_16(resp->max_stat_ctx); + bp->max_cp_rings -= rte_le_to_cpu_16(resp->max_cmpl_rings); + bp->max_tx_rings -= rte_le_to_cpu_16(resp->max_tx_rings); + bp->max_rx_rings -= rte_le_to_cpu_16(resp->max_rx_rings); + bp->max_l2_ctx -= rte_le_to_cpu_16(resp->max_l2_ctxs); + /* + * TODO: While not supporting VMDq with VFs, max_vnics is always + * forced to 1 in this case + */ + //bp->max_vnics -= rte_le_to_cpu_16(resp->max_vnics); + bp->max_ring_grps -= rte_le_to_cpu_16(resp->max_hw_ring_grps); +} + +static int update_pf_resource_max(struct bnxt *bp) +{ + struct hwrm_func_qcfg_input req = {0}; + struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr; + int rc; + + /* And copy the allocated numbers into the pf struct */ + HWRM_PREP(req, FUNC_QCFG, -1, resp); + req.fid = rte_cpu_to_le_16(0xffff); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + HWRM_CHECK_RESULT; + + /* Only TX ring value reflects actual allocation? TODO */ + bp->max_tx_rings = rte_le_to_cpu_16(resp->alloc_tx_rings); + bp->pf.evb_mode = resp->evb_mode; + + return rc; +} + +int bnxt_hwrm_allocate_pf_only(struct bnxt *bp) +{ + int rc; + + if (!BNXT_PF(bp)) { + RTE_LOG(ERR, PMD, "Attempt to allocate VFs on a VF!\n"); + return -1; + } + + rc = bnxt_hwrm_func_qcaps(bp); + if (rc) + return rc; + + bp->pf.func_cfg_flags &= + ~(HWRM_FUNC_CFG_INPUT_FLAGS_STD_TX_RING_MODE_ENABLE | + HWRM_FUNC_CFG_INPUT_FLAGS_STD_TX_RING_MODE_DISABLE); + bp->pf.func_cfg_flags |= + HWRM_FUNC_CFG_INPUT_FLAGS_STD_TX_RING_MODE_DISABLE; + rc = bnxt_hwrm_pf_func_cfg(bp, bp->max_tx_rings); + return rc; +} + +int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs) +{ + struct hwrm_func_cfg_input req = {0}; + struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr; + int i; + size_t sz; + int rc = 0; + size_t req_buf_sz; + + if (!BNXT_PF(bp)) { + RTE_LOG(ERR, PMD, "Attempt to allocate VFs on a VF!\n"); + return -1; + } + + rc = bnxt_hwrm_func_qcaps(bp); + + if (rc) + return rc; + + bp->pf.active_vfs = num_vfs; + + /* + * First, configure the PF to only use one TX ring. This ensures that + * there are enough rings for all VFs.
+ * + * If we don't do this, when we call func_alloc() later, we will lock + * extra rings to the PF that won't be available during func_cfg() of + * the VFs. + * + * This has been fixed with firmware versions above 20.6.54 + */ + bp->pf.func_cfg_flags &= + ~(HWRM_FUNC_CFG_INPUT_FLAGS_STD_TX_RING_MODE_ENABLE | + HWRM_FUNC_CFG_INPUT_FLAGS_STD_TX_RING_MODE_DISABLE); + bp->pf.func_cfg_flags |= + HWRM_FUNC_CFG_INPUT_FLAGS_STD_TX_RING_MODE_ENABLE; + rc = bnxt_hwrm_pf_func_cfg(bp, 1); + if (rc) + return rc; + + /* + * Now, create and register a buffer to hold forwarded VF requests + */ + req_buf_sz = num_vfs * HWRM_MAX_REQ_LEN; + bp->pf.vf_req_buf = rte_malloc("bnxt_vf_fwd", req_buf_sz, + page_roundup(num_vfs * HWRM_MAX_REQ_LEN)); + if (bp->pf.vf_req_buf == NULL) { + rc = -ENOMEM; + goto error_free; + } + for (sz = 0; sz < req_buf_sz; sz += getpagesize()) + rte_mem_lock_page(((char *)bp->pf.vf_req_buf) + sz); + for (i = 0; i < num_vfs; i++) + bp->pf.vf_info[i].req_buf = ((char *)bp->pf.vf_req_buf) + + (i * HWRM_MAX_REQ_LEN); + + rc = bnxt_hwrm_func_buf_rgtr(bp); + if (rc) + goto error_free; + + populate_vf_func_cfg_req(bp, &req, num_vfs); + + bp->pf.active_vfs = 0; + for (i = 0; i < num_vfs; i++) { + HWRM_PREP(req, FUNC_CFG, -1, resp); + req.flags = rte_cpu_to_le_32(bp->pf.vf_info[i].func_cfg_flags); + req.fid = rte_cpu_to_le_16(bp->pf.vf_info[i].fid); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + + /* Clear enable flag for next pass */ + req.enables &= ~rte_cpu_to_le_32( + HWRM_FUNC_CFG_INPUT_ENABLES_DFLT_MAC_ADDR); + + if (rc || resp->error_code) { + RTE_LOG(ERR, PMD, + "Failed to initialize VF %d\n", i); + RTE_LOG(ERR, PMD, + "Not all VFs available. (%d, %d)\n", + rc, resp->error_code); + break; + } + + reserve_resources_from_vf(bp, &req, i); + bp->pf.active_vfs++; + } + + /* + * Now configure the PF to use "the rest" of the resources. + * We're using STD_TX_RING_MODE here though which will limit the TX + * rings. This will allow QoS to function properly. Not setting this + * will cause PF rings to break bandwidth settings.
+ */ + rc = bnxt_hwrm_pf_func_cfg(bp, bp->max_tx_rings); + if (rc) + goto error_free; + + rc = update_pf_resource_max(bp); + if (rc) + goto error_free; + + return rc; + +error_free: + bnxt_hwrm_func_buf_unrgtr(bp); + return rc; +} + + +int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp) +{ + int rc = 0; + struct hwrm_func_buf_rgtr_input req = {.req_type = 0 }; + struct hwrm_func_buf_rgtr_output *resp = bp->hwrm_cmd_resp_addr; + + HWRM_PREP(req, FUNC_BUF_RGTR, -1, resp); + + req.req_buf_num_pages = rte_cpu_to_le_16(1); + req.req_buf_page_size = rte_cpu_to_le_16( + page_getenum(bp->pf.active_vfs * HWRM_MAX_REQ_LEN)); + req.req_buf_len = rte_cpu_to_le_16(HWRM_MAX_REQ_LEN); + req.req_buf_page_addr[0] = + rte_cpu_to_le_64(rte_mem_virt2phy(bp->pf.vf_req_buf)); + if (req.req_buf_page_addr[0] == 0) { + RTE_LOG(ERR, PMD, + "unable to map buffer address to physical memory\n"); + return -ENOMEM; + } + + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + + HWRM_CHECK_RESULT; + + return rc; +} + +int bnxt_hwrm_func_buf_unrgtr(struct bnxt *bp) +{ + int rc = 0; + struct hwrm_func_buf_unrgtr_input req = {.req_type = 0 }; + struct hwrm_func_buf_unrgtr_output *resp = bp->hwrm_cmd_resp_addr; + + HWRM_PREP(req, FUNC_BUF_UNRGTR, -1, resp); + + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + + HWRM_CHECK_RESULT; + + return rc; +} + +int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp) +{ + struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr; + struct hwrm_func_cfg_input req = {0}; + int rc; + + HWRM_PREP(req, FUNC_CFG, -1, resp); + req.fid = rte_cpu_to_le_16(0xffff); + req.flags = rte_cpu_to_le_32(bp->pf.func_cfg_flags); + req.enables = rte_cpu_to_le_32( + HWRM_FUNC_CFG_INPUT_ENABLES_ASYNC_EVENT_CR); + req.async_event_cr = rte_cpu_to_le_16( + bp->def_cp_ring->cp_ring_struct->fw_ring_id); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + HWRM_CHECK_RESULT; + + return rc; +} + +int bnxt_hwrm_reject_fwd_resp(struct bnxt *bp, uint16_t target_id, + void *encaped, size_t ec_size) +{ + int rc = 0; + struct hwrm_reject_fwd_resp_input req = {.req_type = 0}; + struct hwrm_reject_fwd_resp_output *resp = bp->hwrm_cmd_resp_addr; + + if (ec_size > sizeof(req.encap_request)) + return -1; + + HWRM_PREP(req, REJECT_FWD_RESP, -1, resp); + + req.encap_resp_target_id = rte_cpu_to_le_16(target_id); + memcpy(req.encap_request, encaped, ec_size); + + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req)); + + HWRM_CHECK_RESULT; + + return rc; +} diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index 6519ef2..0046ba4 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -51,14 +51,17 @@ int bnxt_hwrm_clear_filter(struct bnxt *bp, int bnxt_hwrm_set_filter(struct bnxt *bp, struct bnxt_vnic_info *vnic, struct bnxt_filter_info *filter); - int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, void *fwd_cmd); +int bnxt_hwrm_reject_fwd_resp(struct bnxt *bp, uint16_t target_id, + void *encaped, size_t ec_size); -int bnxt_hwrm_func_driver_register(struct bnxt *bp, uint32_t flags, - uint32_t *vf_req_fwd); +int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp); +int bnxt_hwrm_func_buf_unrgtr(struct bnxt *bp); +int bnxt_hwrm_func_driver_register(struct bnxt *bp); int bnxt_hwrm_func_qcaps(struct bnxt *bp); int bnxt_hwrm_func_reset(struct bnxt *bp); int bnxt_hwrm_func_driver_unregister(struct bnxt *bp, uint32_t flags); +int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp); int bnxt_hwrm_queue_qportcfg(struct bnxt *bp); @@ -81,6 +84,8 @@ int bnxt_hwrm_ver_get(struct bnxt *bp); int bnxt_hwrm_vnic_alloc(struct bnxt *bp, 
struct bnxt_vnic_info *vnic); int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic); +int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic, + int16_t fw_vf_id); int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic); int bnxt_hwrm_vnic_ctx_free(struct bnxt *bp, struct bnxt_vnic_info *vnic); int bnxt_hwrm_vnic_free(struct bnxt *bp, struct bnxt_vnic_info *vnic); @@ -101,5 +106,11 @@ int bnxt_alloc_hwrm_resources(struct bnxt *bp); int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link); int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up); int bnxt_hwrm_func_qcfg(struct bnxt *bp); +int bnxt_hwrm_allocate_pf_only(struct bnxt *bp); +int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs); +int bnxt_hwrm_func_vf_mac(struct bnxt *bp, uint16_t vf, + const uint8_t *mac_addr); +int bnxt_hwrm_func_qcfg_vf_default_mac(struct bnxt *bp, uint16_t vf, + struct ether_addr *mac); #endif diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index 33fdde2..4e378a9 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -69,21 +69,14 @@ void bnxt_init_vnics(struct bnxt *bp) uint16_t max_vnics; int i, j; - if (BNXT_PF(bp)) { - struct bnxt_pf_info *pf = &bp->pf; - - max_vnics = pf->max_vnics; - } else { - struct bnxt_vf_info *vf = &bp->vf; - - max_vnics = vf->max_vnics; - } + max_vnics = bp->max_vnics; STAILQ_INIT(&bp->free_vnic_list); for (i = 0; i < max_vnics; i++) { vnic = &bp->vnic_info[i]; vnic->fw_vnic_id = (uint16_t)HWRM_NA_SIGNATURE; - vnic->fw_rss_cos_lb_ctx = (uint16_t)HWRM_NA_SIGNATURE; - vnic->ctx_is_rss_cos_lb = HW_CONTEXT_NONE; + vnic->rss_rule = (uint16_t)HWRM_NA_SIGNATURE; + vnic->cos_rule = (uint16_t)HWRM_NA_SIGNATURE; + vnic->lb_rule = (uint16_t)HWRM_NA_SIGNATURE; for (j = 0; j < MAX_QUEUES_PER_VNIC; j++) vnic->fw_grp_ids[j] = (uint16_t)HWRM_NA_SIGNATURE; @@ -181,15 +174,7 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp) uint16_t max_vnics; int i; - if (BNXT_PF(bp)) { - struct bnxt_pf_info *pf = &bp->pf; - - max_vnics = pf->max_vnics; - } else { - struct bnxt_vf_info *vf = &bp->vf; - - max_vnics = vf->max_vnics; - } + max_vnics = bp->max_vnics; snprintf(mz_name, RTE_MEMZONE_NAMESIZE, "bnxt_%04x:%02x:%02x:%02x_vnicattr", pdev->addr.domain, pdev->addr.bus, pdev->addr.devid, pdev->addr.function); @@ -232,15 +217,7 @@ void bnxt_free_vnic_mem(struct bnxt *bp) if (bp->vnic_info == NULL) return; - if (BNXT_PF(bp)) { - struct bnxt_pf_info *pf = &bp->pf; - - max_vnics = pf->max_vnics; - } else { - struct bnxt_vf_info *vf = &bp->vf; - - max_vnics = vf->max_vnics; - } + max_vnics = bp->max_vnics; for (i = 0; i < max_vnics; i++) { vnic = &bp->vnic_info[i]; if (vnic->fw_vnic_id != (uint16_t)HWRM_NA_SIGNATURE) { @@ -258,15 +235,7 @@ int bnxt_alloc_vnic_mem(struct bnxt *bp) struct bnxt_vnic_info *vnic_mem; uint16_t max_vnics; - if (BNXT_PF(bp)) { - struct bnxt_pf_info *pf = &bp->pf; - - max_vnics = pf->max_vnics; - } else { - struct bnxt_vf_info *vf = &bp->vf; - - max_vnics = vf->max_vnics; - } + max_vnics = bp->max_vnics; /* Allocate memory for VNIC pool and filter pool */ vnic_mem = rte_zmalloc("bnxt_vnic_info", max_vnics * sizeof(struct bnxt_vnic_info), 0); diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h index 9671ba4..ca1c9cf 100644 --- a/drivers/net/bnxt/bnxt_vnic.h +++ b/drivers/net/bnxt/bnxt_vnic.h @@ -42,8 +42,7 @@ struct bnxt_vnic_info { uint8_t ff_pool_idx; uint16_t fw_vnic_id; /* returned by Chimp during alloc */ - uint16_t fw_rss_cos_lb_ctx; - uint16_t 
ctx_is_rss_cos_lb; + uint16_t rss_rule; #define MAX_NUM_TRAFFIC_CLASSES 8 #define MAX_NUM_RSS_QUEUES_PER_VNIC 16 #define MAX_QUEUES_PER_VNIC (MAX_NUM_RSS_QUEUES_PER_VNIC + \ @@ -51,6 +50,8 @@ struct bnxt_vnic_info { uint16_t start_grp_id; uint16_t end_grp_id; uint16_t fw_grp_ids[MAX_QUEUES_PER_VNIC]; + uint16_t dflt_ring_grp; + uint16_t mru; uint16_t hash_type; phys_addr_t rss_table_dma_addr; uint16_t *rss_table; @@ -60,8 +61,14 @@ struct bnxt_vnic_info { #define BNXT_VNIC_INFO_PROMISC (1 << 0) #define BNXT_VNIC_INFO_ALLMULTI (1 << 1) + uint16_t cos_rule; + uint16_t lb_rule; bool vlan_strip; bool func_default; + bool bd_stall; + bool roce_dual; + bool roce_only; + bool rss_dflt_cr; STAILQ_HEAD(, bnxt_filter_info) filter; }; -- 2.10.1 (Apple Git-78)
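Not part of the patch — a hedged sketch of how an application might consume the RTE_ETH_EVENT_VF_MBOX event that bnxt_rcv_msg_from_vf() raises above. It assumes the 17.05-era ethdev callback API, where the parameter a PMD passes to _rte_eth_dev_callback_process() overwrites the registered cb_arg, and that struct rte_pmd_bnxt_mb_event_param (defined in bnxt.h by this patch) is visible to the caller; vf_mbox_cb and port_id are illustrative names only.

#include <rte_common.h>
#include <rte_ethdev.h>

/* Sketch only: inspect and approve or reject forwarded VF mailbox requests. */
static void vf_mbox_cb(uint8_t port_id, enum rte_eth_event_type event, void *cb_arg)
{
	struct rte_pmd_bnxt_mb_event_param *p = cb_arg;

	RTE_SET_USED(port_id);
	RTE_SET_USED(event);
	/*
	 * Leaving RTE_PMD_BNXT_MB_EVENT_PROCEED lets the PMD handle the
	 * request (the driver then defaults it to NOOP_ACK); setting
	 * RTE_PMD_BNXT_MB_EVENT_NOOP_NACK rejects the VF request.
	 */
	p->retval = RTE_PMD_BNXT_MB_EVENT_PROCEED;
}

/* Registration, e.g. after rte_eth_dev_configure():
 * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_VF_MBOX,
 *                               vf_mbox_cb, NULL);
 */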