From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiaojun Liu <xiaojun.liu@silicom.co.il>
To: xiao.w.wang@intel.com, qi.z.zhang@intel.com, ngai-mint.kwan@intel.com, jacob.e.keller@intel.com
Cc: dev@dpdk.org, Xiaojun Liu
Date: Thu, 20 Feb 2020 21:59:34 +0800
Message-Id: <1582207174-31037-6-git-send-email-xiaojun.liu@silicom.co.il>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1582207174-31037-1-git-send-email-xiaojun.liu@silicom.co.il>
References: <1576057875-7677-2-git-send-email-xiaojun.liu@silicom.co.il> <1582207174-31037-1-git-send-email-xiaojun.liu@silicom.co.il>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 5/5] net/fm10k: add switch management support
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Split device initialization into two parts. The first part only
registers the port with the switch management code; the second part, an
init hook, is called back after all PFs have been registered and the
switch has been initialized, and finishes device initialization.

Also add switch interrupt support.

Add fm10k_mirror_rule_set/fm10k_mirror_rule_reset to support mirror
operations, and fm10k_dev_filter_ctrl to support flow operations.

Add a DPDK port to PF mapping, so a DPDK port can map to a specific PF,
and one DPDK port can map to two PFs for a total of 100G throughput.

To enable switch management, add CONFIG_RTE_FM10K_MANAGEMENT=y to
config/common_linux when building.
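As a usage note for reviewers (not part of the patch): the new mirror
callbacks are reached through the generic ethdev mirror API. A minimal
sketch of how an application could drive them follows; the port id,
rule id, destination pool and VLAN id are placeholder values, and the
comments describe how this driver interprets the fields.

	#include <string.h>
	#include <rte_ethdev.h>

	static int
	mirror_example(uint16_t port_id)
	{
		struct rte_eth_mirror_conf conf;
		int ret;

		memset(&conf, 0, sizeof(conf));
		/* This driver ignores rule_type, but the generic
		 * ethdev layer rejects a zero value.
		 */
		conf.rule_type = ETH_MIRROR_UPLINK_PORT;
		conf.dst_pool = 2;		/* mirror destination port */
		conf.vlan.vlan_id[0] = 100;	/* VLAN attached to mirrored frames */

		/* rule id is passed to fm10k_mirror_rule_set() as sw_id */
		ret = rte_eth_mirror_rule_set(port_id, &conf, 0, 1);
		if (ret < 0)
			return ret;

		/* remove the mirror again via fm10k_mirror_rule_reset() */
		return rte_eth_mirror_rule_reset(port_id, 0);
	}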
Signed-off-by: Xiaojun Liu
---
 drivers/net/fm10k/fm10k_ethdev.c | 587 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 546 insertions(+), 41 deletions(-)

diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 581c690..16b41fd 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -13,6 +13,10 @@
 #include "fm10k.h"
 #include "base/fm10k_api.h"
+#ifdef ENABLE_FM10K_MANAGEMENT
+#include "switch/fm10k_regs.h"
+#include "switch/fm10k_switch.h"
+#endif
 
 /* Default delay to acquire mailbox lock */
 #define FM10K_MBXLOCK_DELAY_US 20
@@ -39,6 +43,10 @@
 #define GLORT_PF_MASK 0xFFC0
 #define GLORT_FD_MASK GLORT_PF_MASK
 #define GLORT_FD_INDEX GLORT_FD_Q_BASE
+#ifdef ENABLE_FM10K_MANAGEMENT
+/* When the switch is ready, the status will be changed */
+static int fm10k_switch_ready;
+#endif
 
 int fm10k_logtype_init;
 int fm10k_logtype_driver;
@@ -461,9 +469,6 @@ struct fm10k_xstats_name_off {
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
-
 	/* multipe queue mode checking */
 	ret = fm10k_check_mq_mode(dev);
 	if (ret != 0) {
@@ -512,6 +517,15 @@ struct fm10k_xstats_name_off {
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 	uint32_t mrqc, *key, i, reta, j;
 	uint64_t hf;
+#ifdef ENABLE_FM10K_MANAGEMENT
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+	int mapped_num;
+	struct fm10k_hw *mapped_hws[2];
+
+	mapped_num = fm10k_switch_dpdk_mapped_hw_get(hw, mapped_hws);
+	if (mapped_num == 2)
+		nb_rx_queues /= 2;
+#endif
 
 #define RSS_KEY_SIZE 40
 	static uint8_t rss_intel_key[RSS_KEY_SIZE] = {
@@ -641,27 +655,48 @@ struct fm10k_xstats_name_off {
 static int
 fm10k_dev_tx_init(struct rte_eth_dev *dev)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t data;
+#endif
 	int i, ret;
+	uint16_t hw_queue_id;
 	struct fm10k_tx_queue *txq;
 	uint64_t base_addr;
 	uint32_t size;
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* Disable TXINT to avoid possible interrupt */
 	for (i = 0; i < hw->mac.max_queues; i++)
 		FM10K_WRITE_REG(hw, FM10K_TXINT(i),
 				3 << FM10K_TXINT_TIMER_SHIFT);
+#else
+	fm10k_switch_dpdk_tx_queue_num_set(unmap_hw,
+			dev->data->nb_tx_queues);
+#endif
 
 	/* Setup TX queue */
 	for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+		hw_queue_id = i;
+#ifdef ENABLE_FM10K_MANAGEMENT
+		fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+				i, dev->data->nb_tx_queues,
+				&hw, &hw_queue_id);
+#endif
 		txq = dev->data->tx_queues[i];
 		base_addr = txq->hw_ring_phys_addr;
 		size = txq->nb_desc * sizeof(struct fm10k_tx_desc);
 
 		/* disable queue to avoid issues while updating state */
-		ret = tx_queue_disable(hw, i);
+		ret = tx_queue_disable(hw, hw_queue_id);
 		if (ret) {
-			PMD_INIT_LOG(ERR, "failed to disable queue %d", i);
+			PMD_INIT_LOG(ERR,
+					"failed to disable queue %d",
+					hw_queue_id);
 			return -1;
 		}
 		/* Enable use of FTAG bit in TX descriptor, PFVTCTL
@@ -669,7 +704,7 @@ struct fm10k_xstats_name_off {
 		 */
 		if (fm10k_check_ftag(dev->device->devargs)) {
 			if (hw->mac.type == fm10k_mac_pf) {
-				FM10K_WRITE_REG(hw, FM10K_PFVTCTL(i),
+				FM10K_WRITE_REG(hw, FM10K_PFVTCTL(hw_queue_id),
 						FM10K_PFVTCTL_FTAG_DESC_ENABLE);
 				PMD_INIT_LOG(DEBUG, "FTAG mode is enabled");
 			} else {
@@ -679,15 +714,25 @@ struct fm10k_xstats_name_off {
 		}
 
 		/* set location and size for descriptor ring */
-		FM10K_WRITE_REG(hw, FM10K_TDBAL(i),
+		FM10K_WRITE_REG(hw, FM10K_TDBAL(hw_queue_id),
 				base_addr & UINT64_LOWER_32BITS_MASK);
-		FM10K_WRITE_REG(hw, FM10K_TDBAH(i),
+		FM10K_WRITE_REG(hw, FM10K_TDBAH(hw_queue_id),
 				base_addr >> (CHAR_BIT * sizeof(uint32_t)));
-		FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size);
+		FM10K_WRITE_REG(hw, FM10K_TDLEN(hw_queue_id), size);
 
 		/* assign default SGLORT for each TX queue by PF */
+#ifndef ENABLE_FM10K_MANAGEMENT
 		if (hw->mac.type == fm10k_mac_pf)
-			FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(i), hw->mac.dglort_map);
+			FM10K_WRITE_REG(hw,
+					FM10K_TX_SGLORT(hw_queue_id),
+					hw->mac.dglort_map);
+#else
+		if (hw->mac.type == fm10k_mac_pf) {
+			data = FM10K_SW_MAKE_REG_FIELD
+				(TX_SGLORT_SGLORT, hw->mac.dglort_map);
+			FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(hw_queue_id), data);
+		}
+#endif
 	}
 
 	/* set up vector or scalar TX function as appropriate */
@@ -699,19 +744,27 @@ struct fm10k_xstats_name_off {
 static int
 fm10k_dev_rx_init(struct rte_eth_dev *dev)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct fm10k_macvlan_filter_info *macvlan;
 	struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+	uint32_t logic_port = hw->mac.dglort_map;
+	uint16_t queue_stride = 0;
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#endif
 	int i, ret;
+	uint16_t hw_queue_id;
 	struct fm10k_rx_queue *rxq;
 	uint64_t base_addr;
 	uint32_t size;
 	uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
-	uint32_t logic_port = hw->mac.dglort_map;
 	uint16_t buf_size;
-	uint16_t queue_stride = 0;
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* enable RXINT for interrupt mode */
 	i = 0;
 	if (rte_intr_dp_is_en(intr_handle)) {
@@ -731,26 +784,36 @@ struct fm10k_xstats_name_off {
 	for (; i < hw->mac.max_queues; i++)
 		FM10K_WRITE_REG(hw, FM10K_RXINT(i),
 				3 << FM10K_RXINT_TIMER_SHIFT);
+#else
+	fm10k_switch_dpdk_rx_queue_num_set(unmap_hw, dev->data->nb_rx_queues);
+#endif
 
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+		hw_queue_id = i;
+#ifdef ENABLE_FM10K_MANAGEMENT
+		fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+				i, dev->data->nb_rx_queues, &hw, &hw_queue_id);
+#endif
 		rxq = dev->data->rx_queues[i];
 		base_addr = rxq->hw_ring_phys_addr;
 		size = rxq->nb_desc * sizeof(union fm10k_rx_desc);
 
 		/* disable queue to avoid issues while updating state */
-		ret = rx_queue_disable(hw, i);
+		ret = rx_queue_disable(hw, hw_queue_id);
 		if (ret) {
-			PMD_INIT_LOG(ERR, "failed to disable queue %d", i);
+			PMD_INIT_LOG(ERR,
					"failed to disable queue %d",
+					hw_queue_id);
 			return -1;
 		}
 
 		/* Setup the Base and Length of the Rx Descriptor Ring */
-		FM10K_WRITE_REG(hw, FM10K_RDBAL(i),
+		FM10K_WRITE_REG(hw, FM10K_RDBAL(hw_queue_id),
 				base_addr & UINT64_LOWER_32BITS_MASK);
-		FM10K_WRITE_REG(hw, FM10K_RDBAH(i),
+		FM10K_WRITE_REG(hw, FM10K_RDBAH(hw_queue_id),
 				base_addr >> (CHAR_BIT * sizeof(uint32_t)));
-		FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size);
+		FM10K_WRITE_REG(hw, FM10K_RDLEN(hw_queue_id), size);
 
 		/* Configure the Rx buffer size for one buff without split */
 		buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
@@ -764,7 +827,7 @@ struct fm10k_xstats_name_off {
 		 */
 		buf_size -= FM10K_RX_DATABUF_ALIGN;
 
-		FM10K_WRITE_REG(hw, FM10K_SRRCTL(i),
+		FM10K_WRITE_REG(hw, FM10K_SRRCTL(hw_queue_id),
 				(buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT) |
 				FM10K_SRRCTL_LOOPBACK_SUPPRESS);
 
@@ -774,9 +837,9 @@ struct fm10k_xstats_name_off {
 		    rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
 			uint32_t reg;
 			dev->data->scattered_rx = 1;
-			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
+			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(hw_queue_id));
 			reg |= FM10K_SRRCTL_BUFFER_CHAINING_EN;
-			FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), reg);
+			FM10K_WRITE_REG(hw, FM10K_SRRCTL(hw_queue_id), reg);
 		}
 
 		/* Enable drop on empty, it's RO for VF */
@@ -796,6 +859,7 @@ struct fm10k_xstats_name_off {
 	/* update RX_SGLORT for loopback suppress*/
 	if (hw->mac.type != fm10k_mac_pf)
 		return 0;
+#ifndef ENABLE_FM10K_MANAGEMENT
 	macvlan = FM10K_DEV_PRIVATE_TO_MACVLAN(dev->data->dev_private);
 	if (macvlan->nb_queue_pools)
 		queue_stride = dev->data->nb_rx_queues / macvlan->nb_queue_pools;
@@ -804,6 +868,7 @@ struct fm10k_xstats_name_off {
 			logic_port++;
 		FM10K_WRITE_REG(hw, FM10K_RX_SGLORT(i), logic_port);
 	}
+#endif
 
 	return 0;
 }
@@ -811,13 +876,31 @@ struct fm10k_xstats_name_off {
 static int
 fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+#endif
 	int err;
 	uint32_t reg;
 	struct fm10k_rx_queue *rxq;
+	uint16_t hw_queue_id = rx_queue_id;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	ret = fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+			rx_queue_id, dev->data->nb_rx_queues,
+			&hw, &hw_queue_id);
+	if (ret < 0)
+		return -EIO;
+	else if (ret != 1) /* reference port's queue don't need start */
+		return 0;
+#endif
+
 	rxq = dev->data->rx_queues[rx_queue_id];
 	err = rx_queue_reset(rxq);
 	if (err == -ENOMEM) {
@@ -836,23 +919,23 @@ struct fm10k_xstats_name_off {
 	 * this comment and the following two register writes when the
 	 * emulation platform is no longer being used.
 	 */
-	FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0);
-	FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1);
+	FM10K_WRITE_REG(hw, FM10K_RDH(hw_queue_id), 0);
+	FM10K_WRITE_REG(hw, FM10K_RDT(hw_queue_id), rxq->nb_desc - 1);
 
 	/* Set PF ownership flag for PF devices */
-	reg = FM10K_READ_REG(hw, FM10K_RXQCTL(rx_queue_id));
+	reg = FM10K_READ_REG(hw, FM10K_RXQCTL(hw_queue_id));
 	if (hw->mac.type == fm10k_mac_pf)
 		reg |= FM10K_RXQCTL_PF;
 	reg |= FM10K_RXQCTL_ENABLE;
 	/* enable RX queue */
-	FM10K_WRITE_REG(hw, FM10K_RXQCTL(rx_queue_id), reg);
+	FM10K_WRITE_REG(hw, FM10K_RXQCTL(hw_queue_id), reg);
 	FM10K_WRITE_FLUSH(hw);
 
 	/* Setup the HW Rx Head and Tail Descriptor Pointers
 	 * Note: this must be done AFTER the queue is enabled
 	 */
-	FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0);
-	FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1);
+	FM10K_WRITE_REG(hw, FM10K_RDH(hw_queue_id), 0);
+	FM10K_WRITE_REG(hw, FM10K_RDT(hw_queue_id), rxq->nb_desc - 1);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
 	return 0;
@@ -878,22 +961,39 @@ struct fm10k_xstats_name_off {
 static int
 fm10k_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+#endif
 	/** @todo - this should be defined in the shared code */
 #define FM10K_TXDCTL_WRITE_BACK_MIN_DELAY 0x00010000
 	uint32_t txdctl = FM10K_TXDCTL_WRITE_BACK_MIN_DELAY;
 	struct fm10k_tx_queue *q = dev->data->tx_queues[tx_queue_id];
+	uint16_t hw_queue_id = tx_queue_id;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	ret = fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+			tx_queue_id, dev->data->nb_tx_queues,
+			&hw, &hw_queue_id);
+	if (ret < 0)
+		return -EIO;
+	else if (ret != 1)
+		return 0;
+#endif
+
 	q->ops->reset(q);
 
 	/* reset head and tail pointers */
-	FM10K_WRITE_REG(hw, FM10K_TDH(tx_queue_id), 0);
-	FM10K_WRITE_REG(hw, FM10K_TDT(tx_queue_id), 0);
+	FM10K_WRITE_REG(hw, FM10K_TDH(hw_queue_id), 0);
+	FM10K_WRITE_REG(hw, FM10K_TDT(hw_queue_id), 0);
 
 	/* enable TX queue */
-	FM10K_WRITE_REG(hw, FM10K_TXDCTL(tx_queue_id),
+	FM10K_WRITE_REG(hw, FM10K_TXDCTL(hw_queue_id),
 			FM10K_TXDCTL_ENABLE | txdctl);
 	FM10K_WRITE_FLUSH(hw);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
@@ -1084,9 +1184,22 @@ static inline int fm10k_glort_valid(struct fm10k_hw *hw)
 {
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	int i, diag;
+#ifdef ENABLE_FM10K_MANAGEMENT
+	struct fm10k_hw *mapped_hws[2];
+	int j, mapped_num;
+	uint32_t data;
+#endif
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	mapped_num = fm10k_switch_dpdk_mapped_hw_get(hw, mapped_hws);
+	if (mapped_num < 0 || mapped_num > 2)
+		return -EIO;
+#endif
+
+
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* stop, init, then start the hw */
 	diag = fm10k_stop_hw(hw);
 	if (diag != FM10K_SUCCESS) {
@@ -1105,6 +1218,62 @@ static inline int fm10k_glort_valid(struct fm10k_hw *hw)
 		PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag);
 		return -EIO;
 	}
+#else
+	for (j = 0; j < mapped_num; j++) {
+		struct rte_pci_device *pdev =
+			RTE_ETH_DEV_TO_PCI((struct rte_eth_dev *)
+			(fm10k_switch_dpdk_port_rte_dev_get(mapped_hws[j])));
+		struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+
+		/* stop, init, then start the hw */
+		diag = fm10k_stop_hw(mapped_hws[j]);
+		if (diag != FM10K_SUCCESS) {
+			PMD_INIT_LOG(ERR, "Hardware stop failed: %d", diag);
+			return -EIO;
+		}
+
+		diag = fm10k_init_hw(mapped_hws[j]);
+		if (diag != FM10K_SUCCESS) {
+			PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag);
+			return -EIO;
+		}
+
+		diag = fm10k_start_hw(mapped_hws[j]);
+		if (diag != FM10K_SUCCESS) {
+			PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag);
+			return -EIO;
+		}
+
+		/* Disable TXINT to avoid possible interrupt */
+		for (i = 0; i < hw->mac.max_queues; i++)
+			FM10K_WRITE_REG(mapped_hws[j], FM10K_TXINT(i),
+					3 << FM10K_TXINT_TIMER_SHIFT);
+
+		/* enable RXINT for interrupt mode */
+		i = 0;
+		if (rte_intr_dp_is_en(intr_handle)) {
+			for (; i < dev->data->nb_rx_queues; i++) {
+				FM10K_WRITE_REG(mapped_hws[j],
+						FM10K_RXINT(i), Q2V(pdev, i));
+				if (mapped_hws[j]->mac.type == fm10k_mac_pf)
+					FM10K_WRITE_REG(mapped_hws[j],
+						FM10K_ITR(Q2V(pdev, i)),
+						FM10K_ITR_AUTOMASK |
+						FM10K_ITR_MASK_CLEAR);
+				else
+					FM10K_WRITE_REG(mapped_hws[j],
+						FM10K_VFITR(Q2V(pdev, i)),
+						FM10K_ITR_AUTOMASK |
+						FM10K_ITR_MASK_CLEAR);
+			}
+		}
+
+		/* Disable other RXINT to avoid possible interrupt */
+		for (; i < hw->mac.max_queues; i++)
+			FM10K_WRITE_REG(mapped_hws[j], FM10K_RXINT(i),
+					3 << FM10K_RXINT_TIMER_SHIFT);
+	}
+#endif
 
 	diag = fm10k_dev_tx_init(dev);
 	if (diag) {
@@ -1156,12 +1325,29 @@ static inline int fm10k_glort_valid(struct fm10k_hw *hw)
 		}
 	}
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* Update default vlan when not in VMDQ mode */
 	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
 		fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
+#endif
 
 	fm10k_link_update(dev, 0);
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	/* Admit all VLANs */
+	for (j = 0; j <= 64; j++) {
+		for (i = 0; i < FM10K_SW_VLAN_TABLE_ENTRIES; i++)
+			FM10K_WRITE_REG(hw,
+					FM10K_SW_VLAN_TABLE_ENTRY(j, i),
+					0xffffffff);
+	}
+
+	/* Disable PEP loopback */
+	data = FM10K_READ_REG(hw, FM10K_CTRL_EXT);
+	data &= ~FM10K_SW_CTRL_EXT_SWITCH_LOOPBACK;
+	FM10K_WRITE_REG(hw, FM10K_CTRL_EXT, data);
+#endif
+
 	return 0;
 }
 
@@ -1322,17 +1508,41 @@ static int fm10k_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	uint64_t ipackets, opackets, ibytes, obytes, imissed;
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct fm10k_hw *mapped_hws[2];
+	int mapped_num;
+	uint16_t hw_queue_id;
+#endif
 	struct fm10k_hw_stats *hw_stats =
 		FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private);
 	int i;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	fm10k_update_hw_stats(hw, hw_stats);
+#else
+	mapped_num = fm10k_switch_dpdk_mapped_hw_get(unmap_hw, mapped_hws);
+	if (mapped_num < 0 || mapped_num > 2)
+		return -EIO;
+
+	for (i = 0; i < mapped_num; i++) {
+		struct rte_eth_dev *mydev =
+			fm10k_switch_dpdk_port_rte_dev_get(mapped_hws[i]);
+		hw_stats = FM10K_DEV_PRIVATE_TO_STATS(mydev->data->dev_private);
+		fm10k_update_hw_stats(mapped_hws[i], hw_stats);
+	}
+#endif
 
 	ipackets = opackets = ibytes = obytes = imissed = 0;
+
+#ifndef ENABLE_FM10K_MANAGEMENT
 	for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
 		(i < hw->mac.max_queues); ++i) {
 		stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count;
@@ -1346,6 +1556,36 @@ static int fm10k_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		obytes += stats->q_obytes[i];
 		imissed += stats->q_errors[i];
 	}
+#else
+	if (mapped_num) {
+		for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
+			(i < unmap_hw->mac.max_queues); ++i) {
+			hw_queue_id = i;
+			fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+					i, unmap_hw->mac.max_queues,
+					&hw, &hw_queue_id);
+			if (mapped_hws[1]) {
+				struct rte_eth_dev *mydev;
+				mydev = fm10k_switch_dpdk_port_rte_dev_get(hw);
+				hw_stats =
+					FM10K_DEV_PRIVATE_TO_STATS
+					(mydev->data->dev_private);
+			}
+			stats->q_ipackets[i] =
+				hw_stats->q[hw_queue_id].rx_packets.count;
+			stats->q_opackets[i] =
+				hw_stats->q[hw_queue_id].tx_packets.count;
+			stats->q_ibytes[i] =
+				hw_stats->q[hw_queue_id].rx_bytes.count;
+			stats->q_obytes[i] =
+				hw_stats->q[hw_queue_id].tx_bytes.count;
+			ipackets += stats->q_ipackets[i];
+			opackets += stats->q_opackets[i];
+			ibytes += stats->q_ibytes[i];
+			obytes += stats->q_obytes[i];
+		}
+	}
+#endif
 
 	stats->ipackets = ipackets;
 	stats->opackets = opackets;
 	stats->ibytes = ibytes;
@@ -1808,8 +2048,7 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 			   DEV_RX_OFFLOAD_UDP_CKSUM |
 			   DEV_RX_OFFLOAD_TCP_CKSUM |
 			   DEV_RX_OFFLOAD_JUMBO_FRAME |
-			   DEV_RX_OFFLOAD_HEADER_SPLIT |
-			   DEV_RX_OFFLOAD_RSS_HASH);
+			   DEV_RX_OFFLOAD_HEADER_SPLIT);
 }
 
 static int
@@ -1817,15 +2056,29 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint16_t nb_desc, unsigned int socket_id,
 	const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#endif
 	struct fm10k_dev_info *dev_info =
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	struct fm10k_rx_queue *q;
 	const struct rte_memzone *mz;
 	uint64_t offloads;
+	uint16_t hw_queue_id = queue_id;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	if (fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+			queue_id, dev->data->nb_rx_queues,
+			&hw, &hw_queue_id) < 0)
+		return -EIO;
+#endif
+
 	offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
 
 	/* make sure the mempool element size can account for alignment. */
@@ -1871,7 +2124,7 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 	q->port_id = dev->data->port_id;
 	q->queue_id = queue_id;
 	q->tail_ptr = (volatile uint32_t *)
-		&((uint32_t *)hw->hw_addr)[FM10K_RDT(queue_id)];
+		&((uint32_t *)hw->hw_addr)[FM10K_RDT(hw_queue_id)];
 	q->offloads = offloads;
 	if (handle_rxconf(q, conf))
 		return -EINVAL;
@@ -2006,13 +2259,27 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint16_t nb_desc, unsigned int socket_id,
 	const struct rte_eth_txconf *conf)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#endif
 	struct fm10k_tx_queue *q;
 	const struct rte_memzone *mz;
 	uint64_t offloads;
+	uint16_t hw_queue_id = queue_id;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	if (fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+			queue_id, dev->data->nb_tx_queues,
+			&hw, &hw_queue_id) < 0)
+		return -EIO;
+#endif
+
 	offloads = conf->offloads | dev->data->dev_conf.txmode.offloads;
 
 	/* make sure a valid number of descriptors have been requested */
@@ -2054,7 +2321,7 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	q->offloads = offloads;
 	q->ops = &def_txq_ops;
 	q->tail_ptr = (volatile uint32_t *)
-		&((uint32_t *)hw->hw_addr)[FM10K_TDT(queue_id)];
+		&((uint32_t *)hw->hw_addr)[FM10K_TDT(hw_queue_id)];
 	if (handle_txconf(q, conf))
 		return -EINVAL;
 
@@ -2284,6 +2551,77 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	return 0;
 }
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+static int
+fm10k_mirror_rule_set(struct rte_eth_dev *dev,
+			struct rte_eth_mirror_conf *mirror_conf,
+			uint8_t sw_id, uint8_t on)
+{
+	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_LOG(DEBUG,
+		"Mirror set, switch %d to port %d attach vlan %d on %d",
+		sw_id, mirror_conf->dst_pool,
+		mirror_conf->vlan.vlan_id[0], on);
+
+	if (on) {
+		if (fm10k_switch_mirror_set(hw,
+				mirror_conf->dst_pool,
+				mirror_conf->vlan.vlan_id[0]) < 0) {
+			PMD_INIT_LOG(ERR, "Input wrong port!!!");
+			return -1;
+		}
+	} else {
+		if (fm10k_switch_mirror_reset(hw) < 0) {
+			PMD_INIT_LOG(ERR, "Input wrong port!!!");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+
+static int
+fm10k_mirror_rule_reset(struct rte_eth_dev *dev, uint8_t sw_id)
+{
+	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	PMD_INIT_LOG(DEBUG, "Mirror reset, switch %d", sw_id);
+
+	fm10k_switch_mirror_reset(hw);
+
+	return 0;
+}
+
+static int
+fm10k_dev_filter_ctrl(struct rte_eth_dev *dev,
+			enum rte_filter_type filter_type,
+			enum rte_filter_op filter_op,
+			void *arg)
+{
+	int ret = 0;
+
+	if (dev == NULL)
+		return -EINVAL;
+
+	switch (filter_type) {
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = fm10k_flow_ops_get();
+		break;
+	default:
+		PMD_DRV_LOG(WARNING, "Filter type (%d) not supported",
+				filter_type);
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+#endif
+
 static void
 fm10k_dev_enable_intr_pf(struct rte_eth_dev *dev)
 {
@@ -2592,6 +2930,9 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	int status_mbx;
 	s32 err;
+#ifdef ENABLE_FM10K_MANAGEMENT
+	uint32_t writeback = 0;
+#endif
 
 	if (hw->mac.type != fm10k_mac_pf)
 		return;
@@ -2605,11 +2946,20 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	}
 
 	/* Handle switch up/down */
-	if (cause & FM10K_EICR_SWITCHNOTREADY)
-		PMD_INIT_LOG(ERR, "INT: Switch is not ready");
+	if (cause & FM10K_EICR_SWITCHNOTREADY) {
+		PMD_INIT_LOG(INFO, "INT: Switch is not ready");
+#ifdef ENABLE_FM10K_MANAGEMENT
+		fm10k_switch_ready = 0;
+		writeback |= FM10K_EICR_SWITCHNOTREADY;
+#endif
+	}
 
 	if (cause & FM10K_EICR_SWITCHREADY) {
 		PMD_INIT_LOG(INFO, "INT: Switch is ready");
+#ifdef ENABLE_FM10K_MANAGEMENT
+		fm10k_switch_ready = 1;
+		writeback |= FM10K_EICR_SWITCHREADY;
+#endif
 		if (dev_info->sm_down == 1) {
 			fm10k_mbx_lock(hw);
@@ -2660,6 +3010,7 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	}
 
 	/* Handle mailbox message */
+#ifndef ENABLE_FM10K_MANAGEMENT
 	fm10k_mbx_lock(hw);
 	err = hw->mbx.ops.process(hw, &hw->mbx);
 	fm10k_mbx_unlock(hw);
@@ -2667,10 +3018,33 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	if (err == FM10K_ERR_RESET_REQUESTED) {
 		PMD_INIT_LOG(INFO, "INT: Switch is down");
 		dev_info->sm_down = 1;
-		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+		_rte_eth_dev_callback_process
+			(dev,
+			RTE_ETH_EVENT_INTR_LSC,
 				NULL);
 	}
+#else
+	if (cause & FM10K_EICR_MAILBOX) {
+		fm10k_mbx_lock(hw);
+		err = hw->mbx.ops.process(hw, &hw->mbx);
+		fm10k_mbx_unlock(hw);
+		writeback |= FM10K_EICR_MAILBOX;
+		if (err == FM10K_ERR_RESET_REQUESTED) {
+			PMD_INIT_LOG(INFO, "INT: Switch is down");
+			dev_info->sm_down = 1;
+			_rte_eth_dev_callback_process
+				(dev,
+				RTE_ETH_EVENT_INTR_LSC,
+				NULL);
+		}
+	}
+
+	/* Handle switch interrupt */
+	if (cause & FM10K_SW_EICR_SWITCH_INT)
+		fm10k_switch_intr(hw);
+#endif
+
 	/* Handle SRAM error */
 	if (cause & FM10K_EICR_SRAMERROR) {
 		PMD_INIT_LOG(ERR, "INT: SRAM error on PEP");
@@ -2682,15 +3056,27 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 		/* Todo: print out error message after shared code updates */
 	}
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* Clear these 3 events if having any */
 	cause &= FM10K_EICR_SWITCHNOTREADY | FM10K_EICR_MAILBOX |
 		FM10K_EICR_SWITCHREADY;
 	if (cause)
 		FM10K_WRITE_REG(hw, FM10K_EICR, cause);
+#else
+	if (writeback)
+		FM10K_WRITE_REG(hw, FM10K_EICR, writeback);
+#endif
 
 	/* Re-enable interrupt from device side */
-	FM10K_WRITE_REG(hw, FM10K_ITR(0), FM10K_ITR_AUTOMASK |
+#ifndef ENABLE_FM10K_MANAGEMENT
+	FM10K_WRITE_REG(hw, FM10K_ITR(0),
+		FM10K_ITR_AUTOMASK |
 					FM10K_ITR_MASK_CLEAR);
+#else
+	FM10K_WRITE_REG(hw, FM10K_ITR(0),
+		FM10K_SW_ITR_AUTO_MASK |
+		FM10K_SW_MAKE_REG_FIELD(ITR_MASK, FM10K_SW_ITR_MASK_W_ENABLE));
+#endif
 	/* Re-enable interrupt from host side */
 	rte_intr_ack(dev->intr_handle);
 }
@@ -2897,6 +3283,11 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	.reta_query = fm10k_reta_query,
 	.rss_hash_update = fm10k_rss_hash_update,
 	.rss_hash_conf_get = fm10k_rss_hash_conf_get,
+#ifdef ENABLE_FM10K_MANAGEMENT
+	.mirror_rule_set = fm10k_mirror_rule_set,
+	.mirror_rule_reset = fm10k_mirror_rule_reset,
+	.filter_ctrl = fm10k_dev_filter_ctrl,
+#endif
 };
 
 static int ftag_check_handler(__rte_unused const char *key,
@@ -3075,13 +3466,87 @@ static void __attribute__((cold))
 	info->sm_down = false;
 }
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+static int eth_fm10k_dev_init_hook(struct fm10k_hw *hw)
+{
+	int i, switch_ready;
+	struct rte_eth_dev *dev =
+		(struct rte_eth_dev *)fm10k_switch_dpdk_port_rte_dev_get(hw);
+
+	/* Make sure Switch Manager is ready before going forward. */
+	if (hw->mac.type == fm10k_mac_pf) {
+		switch_ready = 0;
+
+		for (i = 0; i < MAX_QUERY_SWITCH_STATE_TIMES; i++) {
+			fm10k_mbx_lock(hw);
+			switch_ready = fm10k_switch_ready;
+			fm10k_mbx_unlock(hw);
+			if (switch_ready)
+				break;
+			/* Delay some time to acquire async LPORT_MAP info. */
+			rte_delay_us(WAIT_SWITCH_MSG_US);
+		}
+
+		if (switch_ready == 0) {
+			PMD_INIT_LOG(ERR, "switch is not ready");
+			return -1;
+		}
+	}
+
+	/*
+	 * Below function will trigger operations on mailbox, acquire lock to
+	 * avoid race condition from interrupt handler. Operations on mailbox
+	 * FIFO will trigger interrupt to PF/SM, in which interrupt handler
+	 * will handle and generate an interrupt to our side. Then, FIFO in
+	 * mailbox will be touched.
+	 */
+	if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE) {
+		PMD_INIT_LOG(ERR, "dglort_map is not ready");
+		return -1;
+	}
+
+	fm10k_mbx_lock(hw);
+	/* Enable port first */
+	hw->mac.ops.update_lport_state(hw, hw->mac.dglort_map,
+			MAX_LPORT_NUM, 1);
+	/* Set unicast mode by default. App can change to other mode in other
+	 * API func.
+	 */
+	hw->mac.ops.update_xcast_mode(hw, hw->mac.dglort_map,
+			FM10K_XCAST_MODE_NONE);
+	fm10k_mbx_unlock(hw);
+
+	/* Make sure default VID is ready before going forward. */
+	if (hw->mac.type == fm10k_mac_pf) {
+		for (i = 0; i < MAX_QUERY_SWITCH_STATE_TIMES; i++) {
+			if (hw->mac.default_vid)
+				break;
+			/* Delay some time to acquire async port VLAN info. */
+			rte_delay_us(WAIT_SWITCH_MSG_US);
+		}
+
+		if (hw->mac.default_vid == 0)
+			hw->mac.default_vid = 1;
+	}
+
+	/* Add default mac address */
+	fm10k_MAC_filter_set(dev, hw->mac.addr, true,
+		MAIN_VSI_POOL_NUMBER);
+
+	return 0;
+}
+#endif
+
 static int
 eth_fm10k_dev_init(struct rte_eth_dev *dev)
 {
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pdev->intr_handle;
-	int diag, i;
+	int diag;
+#ifndef ENABLE_FM10K_MANAGEMENT
+	int i;
+#endif
 	struct fm10k_macvlan_filter_info *macvlan;
 
 	PMD_INIT_FUNC_TRACE();
@@ -3118,7 +3583,9 @@ static void __attribute__((cold))
 			" Try to blacklist unused devices.");
 		return -EIO;
 	}
-
+#ifdef ENABLE_FM10K_MANAGEMENT
+	hw->sw_addr = (void *)pdev->mem_resource[4].addr;
+#endif
 	/* Store fm10k_adapter pointer */
 	hw->back = dev->data->dev_private;
 
@@ -3129,6 +3596,25 @@ static void __attribute__((cold))
 		return -EIO;
 	}
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	if (hw->mac.type == fm10k_mac_pf) {
+		if (hw->hw_addr == NULL || hw->sw_addr == NULL) {
+			PMD_INIT_LOG(ERR, "Bad mem resource."
+				" Try to blacklist unused devices.");
+			return -EIO;
+		}
+	} else {
+		if (hw->hw_addr == NULL) {
+			PMD_INIT_LOG(ERR, "Bad mem resource."
+				" Try to blacklist unused devices.");
+			return -EIO;
+		}
+	}
+
+	/* Store fm10k_adapter pointer */
+	hw->back = dev->data->dev_private;
+#endif
+
 	/* Initialize parameters */
 	fm10k_params_init(dev);
 
@@ -3209,6 +3695,7 @@ static void __attribute__((cold))
 	hw->mac.ops.update_int_moderator(hw);
 
 	/* Make sure Switch Manager is ready before going forward. */
+#ifndef ENABLE_FM10K_MANAGEMENT
 	if (hw->mac.type == fm10k_mac_pf) {
 		bool switch_ready = false;
 
@@ -3216,13 +3703,13 @@ static void __attribute__((cold))
 			fm10k_mbx_lock(hw);
 			hw->mac.ops.get_host_state(hw, &switch_ready);
 			fm10k_mbx_unlock(hw);
-			if (switch_ready == true)
+			if (switch_ready)
 				break;
 			/* Delay some time to acquire async LPORT_MAP info. */
 			rte_delay_us(WAIT_SWITCH_MSG_US);
 		}
 
-		if (switch_ready == false) {
+		if (switch_ready == 0) {
 			PMD_INIT_LOG(ERR, "switch is not ready");
 			return -1;
 		}
@@ -3268,11 +3755,25 @@ static void __attribute__((cold))
 		MAIN_VSI_POOL_NUMBER);
 
 	return 0;
+#else
+	if (hw->mac.type == fm10k_mac_pf) {
+		bool master = FM10K_READ_REG(hw,
+			FM10K_CTRL) & FM10K_CTRL_BAR4_ALLOWED;
+		return fm10k_switch_dpdk_port_start(hw,
+			dev, 1, master, eth_fm10k_dev_init_hook);
+	} else { /* It may not work for VF */
+		return fm10k_switch_dpdk_port_start(hw,
+			dev, 0, false, eth_fm10k_dev_init_hook);
+	}
+#endif
 }
 
 static int
 eth_fm10k_dev_uninit(struct rte_eth_dev *dev)
 {
+#ifdef ENABLE_FM10K_MANAGEMENT
+	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#endif
 	PMD_INIT_FUNC_TRACE();
 
 	/* only uninitialize in the primary process */
@@ -3282,6 +3783,10 @@ static void __attribute__((cold))
 	/* safe to close dev here */
 	fm10k_dev_close(dev);
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	fm10k_switch_dpdk_port_stop(hw);
+#endif
+
 	return 0;
 }
 
--
1.8.3.1