From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiaojun Liu <xiaojun.liu@silicom.co.il>
To: xiao.w.wang@intel.com, qi.z.zhang@intel.com, ngai-mint.kwan@intel.com,
	jakub.fornal@intel.co, jacob.e.keller@intel.com
Cc: dev@dpdk.org, Xiaojun Liu
Date: Wed, 11 Dec 2019 09:52:17 +0000
Message-ID: <1576057875-7677-8-git-send-email-xiaojun.liu@silicom.co.il>
In-Reply-To: <1576057875-7677-1-git-send-email-xiaojun.liu@silicom.co.il>
References: <1576057875-7677-1-git-send-email-xiaojun.liu@silicom.co.il>
Subject: [dpdk-dev] [PATCH v2 7/7] net/fm10k: add dpdk port mapping

Modify fm10k/fm10k_ethdev.c. Add a DPDK port to PF mapping, so that a DPDK
port can be bound to a specific PF and one DPDK port can be mapped to two
PFs to reach a total of 100G throughput. To avoid having to configure both
the kernel driver and a userspace SDK outside DPDK, switch management is
added to the FM10K DPDK PMD. To enable switch management, add
CONFIG_RTE_FM10K_MANAGEMENT=y to config/common_linux when building.

Signed-off-by: Xiaojun Liu <xiaojun.liu@silicom.co.il>
---
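[Editorial note, placed below the "---" so it is not part of the commit
itself: a minimal, standalone sketch of the queue-mapping pattern this
patch applies in fm10k_dev_tx_init()/fm10k_dev_rx_init(). The real lookup
is fm10k_switch_dpdk_hw_queue_map(), presumably provided by the switch
management code this series adds; the "split the port's queues in half
across two PFs" policy and all mock_* names below are assumptions made for
illustration only, not the driver's actual behaviour.]

#include <stdint.h>
#include <stdio.h>

struct mock_pf {
	int pf_id;          /* stands in for a struct fm10k_hw */
};

/* Assumed policy: first half of the port's queues on pf 0, rest on pf 1. */
static int
mock_hw_queue_map(struct mock_pf pfs[2], int num_pfs,
		  uint16_t queue_id, uint16_t nb_queues,
		  struct mock_pf **pf, uint16_t *hw_queue_id)
{
	if (num_pfs < 2 || queue_id < nb_queues / 2) {
		*pf = &pfs[0];
		*hw_queue_id = queue_id;
	} else {
		*pf = &pfs[1];
		*hw_queue_id = queue_id - nb_queues / 2;
	}
	return 1;
}

int
main(void)
{
	struct mock_pf pfs[2] = { { .pf_id = 0 }, { .pf_id = 1 } };
	unsigned int nb_queues = 8, i;

	for (i = 0; i < nb_queues; i++) {
		struct mock_pf *pf;
		uint16_t hw_queue_id;

		mock_hw_queue_map(pfs, 2, i, nb_queues, &pf, &hw_queue_id);
		/* at this point the PMD would program FM10K_TDBAL(hw_queue_id),
		 * FM10K_RDBAL(hw_queue_id), etc. through this PF's hw handle
		 */
		printf("port queue %u -> pf %d, hw queue %u\n",
		       i, pf->pf_id, (unsigned int)hw_queue_id);
	}
	return 0;
}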
 drivers/net/fm10k/fm10k_ethdev.c | 322 +++++++++++++++++++++++++++++++++++----
 1 file changed, 294 insertions(+), 28 deletions(-)

diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 1c01684..8af97a7 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -517,6 +517,15 @@ struct fm10k_xstats_name_off {
 	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
 	uint32_t mrqc, *key, i, reta, j;
 	uint64_t hf;
+#ifdef ENABLE_FM10K_MANAGEMENT
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+	int mapped_num;
+	struct fm10k_hw *mapped_hws[2];
+
+	mapped_num = fm10k_switch_dpdk_mapped_hw_get(hw, mapped_hws);
+	if (mapped_num == 2)
+		nb_rx_queues /= 2;
+#endif
 
 #define RSS_KEY_SIZE 40
 	static uint8_t rss_intel_key[RSS_KEY_SIZE] = {
@@ -646,27 +655,48 @@ struct fm10k_xstats_name_off {
 static int
 fm10k_dev_tx_init(struct rte_eth_dev *dev)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t data;
+#endif
 	int i, ret;
+	uint16_t hw_queue_id;
 	struct fm10k_tx_queue *txq;
 	uint64_t base_addr;
 	uint32_t size;
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* Disable TXINT to avoid possible interrupt */
 	for (i = 0; i < hw->mac.max_queues; i++)
 		FM10K_WRITE_REG(hw, FM10K_TXINT(i),
 				3 << FM10K_TXINT_TIMER_SHIFT);
+#else
+	fm10k_switch_dpdk_tx_queue_num_set(unmap_hw,
+			dev->data->nb_tx_queues);
+#endif
 
 	/* Setup TX queue */
 	for (i = 0; i < dev->data->nb_tx_queues; ++i) {
+		hw_queue_id = i;
+#ifdef ENABLE_FM10K_MANAGEMENT
+		fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+				i, dev->data->nb_tx_queues,
+				&hw, &hw_queue_id);
+#endif
 		txq = dev->data->tx_queues[i];
 		base_addr = txq->hw_ring_phys_addr;
 		size = txq->nb_desc * sizeof(struct fm10k_tx_desc);
 
 		/* disable queue to avoid issues while updating state */
-		ret = tx_queue_disable(hw, i);
+		ret = tx_queue_disable(hw, hw_queue_id);
 		if (ret) {
-			PMD_INIT_LOG(ERR, "failed to disable queue %d", i);
+			PMD_INIT_LOG(ERR,
+				"failed to disable queue %d",
+				hw_queue_id);
 			return -1;
 		}
 		/* Enable use of FTAG bit in TX descriptor, PFVTCTL
@@ -674,7 +704,7 @@ struct fm10k_xstats_name_off {
 		 */
 		if (fm10k_check_ftag(dev->device->devargs)) {
 			if (hw->mac.type == fm10k_mac_pf) {
-				FM10K_WRITE_REG(hw, FM10K_PFVTCTL(i),
+				FM10K_WRITE_REG(hw, FM10K_PFVTCTL(hw_queue_id),
 						FM10K_PFVTCTL_FTAG_DESC_ENABLE);
 				PMD_INIT_LOG(DEBUG, "FTAG mode is enabled");
 			} else {
@@ -684,15 +714,25 @@ struct fm10k_xstats_name_off {
 		}
 
 		/* set location and size for descriptor ring */
-		FM10K_WRITE_REG(hw, FM10K_TDBAL(i),
+		FM10K_WRITE_REG(hw, FM10K_TDBAL(hw_queue_id),
 				base_addr & UINT64_LOWER_32BITS_MASK);
-		FM10K_WRITE_REG(hw, FM10K_TDBAH(i),
+		FM10K_WRITE_REG(hw, FM10K_TDBAH(hw_queue_id),
 				base_addr >> (CHAR_BIT * sizeof(uint32_t)));
-		FM10K_WRITE_REG(hw, FM10K_TDLEN(i), size);
+		FM10K_WRITE_REG(hw, FM10K_TDLEN(hw_queue_id), size);
 
 		/* assign default SGLORT for each TX queue by PF */
+#ifndef ENABLE_FM10K_MANAGEMENT
 		if (hw->mac.type == fm10k_mac_pf)
-			FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(i), hw->mac.dglort_map);
+			FM10K_WRITE_REG(hw,
+					FM10K_TX_SGLORT(hw_queue_id),
+					hw->mac.dglort_map);
+#else
+		if (hw->mac.type == fm10k_mac_pf) {
+			data = FM10K_SW_MAKE_REG_FIELD
+				(TX_SGLORT_SGLORT, hw->mac.dglort_map);
+			FM10K_WRITE_REG(hw, FM10K_TX_SGLORT(hw_queue_id), data);
+		}
+#endif
 	}
 
 	/* set up vector or scalar TX function as appropriate */
@@ -704,19 +744,27 @@ struct fm10k_xstats_name_off {
 static int
 fm10k_dev_rx_init(struct rte_eth_dev *dev)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct fm10k_macvlan_filter_info *macvlan;
 	struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+	uint32_t logic_port = hw->mac.dglort_map;
+	uint16_t queue_stride = 0;
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#endif
 	int i, ret;
+	uint16_t hw_queue_id;
 	struct fm10k_rx_queue *rxq;
 	uint64_t base_addr;
 	uint32_t size;
 	uint32_t rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
-	uint32_t logic_port = hw->mac.dglort_map;
 	uint16_t buf_size;
-	uint16_t queue_stride = 0;
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* enable RXINT for interrupt mode */
 	i = 0;
 	if (rte_intr_dp_is_en(intr_handle)) {
@@ -736,26 +784,36 @@ struct fm10k_xstats_name_off {
 	for (; i < hw->mac.max_queues; i++)
 		FM10K_WRITE_REG(hw, FM10K_RXINT(i),
 				3 << FM10K_RXINT_TIMER_SHIFT);
+#else
+	fm10k_switch_dpdk_rx_queue_num_set(unmap_hw, dev->data->nb_rx_queues);
+#endif
 
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; ++i) {
+		hw_queue_id = i;
+#ifdef ENABLE_FM10K_MANAGEMENT
+		fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+				i, dev->data->nb_rx_queues, &hw, &hw_queue_id);
+#endif
 		rxq = dev->data->rx_queues[i];
 		base_addr = rxq->hw_ring_phys_addr;
 		size = rxq->nb_desc * sizeof(union fm10k_rx_desc);
 
 		/* disable queue to avoid issues while updating state */
-		ret = rx_queue_disable(hw, i);
+		ret = rx_queue_disable(hw, hw_queue_id);
 		if (ret) {
-			PMD_INIT_LOG(ERR, "failed to disable queue %d", i);
+			PMD_INIT_LOG(ERR,
+				"failed to disable queue %d",
+				hw_queue_id);
 			return -1;
 		}
 
 		/* Setup the Base and Length of the Rx Descriptor Ring */
-		FM10K_WRITE_REG(hw, FM10K_RDBAL(i),
+		FM10K_WRITE_REG(hw, FM10K_RDBAL(hw_queue_id),
 				base_addr & UINT64_LOWER_32BITS_MASK);
-		FM10K_WRITE_REG(hw, FM10K_RDBAH(i),
+		FM10K_WRITE_REG(hw, FM10K_RDBAH(hw_queue_id),
 				base_addr >> (CHAR_BIT * sizeof(uint32_t)));
-		FM10K_WRITE_REG(hw, FM10K_RDLEN(i), size);
+		FM10K_WRITE_REG(hw, FM10K_RDLEN(hw_queue_id), size);
 
 		/* Configure the Rx buffer size for one buff without split */
 		buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
@@ -769,7 +827,7 @@ struct fm10k_xstats_name_off {
 		 */
 		buf_size -= FM10K_RX_DATABUF_ALIGN;
 
-		FM10K_WRITE_REG(hw, FM10K_SRRCTL(i),
+		FM10K_WRITE_REG(hw, FM10K_SRRCTL(hw_queue_id),
 				(buf_size >> FM10K_SRRCTL_BSIZEPKT_SHIFT) |
 				FM10K_SRRCTL_LOOPBACK_SUPPRESS);
 
@@ -779,9 +837,9 @@ struct fm10k_xstats_name_off {
 		    rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
 			uint32_t reg;
 			dev->data->scattered_rx = 1;
-			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
+			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(hw_queue_id));
 			reg |= FM10K_SRRCTL_BUFFER_CHAINING_EN;
-			FM10K_WRITE_REG(hw, FM10K_SRRCTL(i), reg);
+			FM10K_WRITE_REG(hw, FM10K_SRRCTL(hw_queue_id), reg);
 		}
 
 		/* Enable drop on empty, it's RO for VF */
@@ -801,6 +859,7 @@ struct fm10k_xstats_name_off {
 	/* update RX_SGLORT for loopback suppress*/
 	if (hw->mac.type != fm10k_mac_pf)
 		return 0;
+#ifndef ENABLE_FM10K_MANAGEMENT
 	macvlan = FM10K_DEV_PRIVATE_TO_MACVLAN(dev->data->dev_private);
 	if (macvlan->nb_queue_pools)
 		queue_stride = dev->data->nb_rx_queues / macvlan->nb_queue_pools;
@@ -809,6 +868,7 @@ struct fm10k_xstats_name_off {
 			logic_port++;
 		FM10K_WRITE_REG(hw, FM10K_RX_SGLORT(i), logic_port);
 	}
+#endif
 
 	return 0;
 }
@@ -816,13 +876,31 @@ struct fm10k_xstats_name_off {
 static int
 fm10k_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+#endif
 	int err;
 	uint32_t reg;
 	struct fm10k_rx_queue *rxq;
+	uint16_t hw_queue_id = rx_queue_id;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	ret = fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+			rx_queue_id, dev->data->nb_rx_queues,
+			&hw, &hw_queue_id);
+	if (ret < 0)
+		return -EIO;
+	else if (ret != 1) /* reference port's queue don't need start */
+		return 0;
+#endif
+
 	rxq = dev->data->rx_queues[rx_queue_id];
 	err = rx_queue_reset(rxq);
 	if (err == -ENOMEM) {
@@ -841,23 +919,23 @@ struct fm10k_xstats_name_off {
 	 * this comment and the following two register writes when the
 	 * emulation platform is no longer being used.
 	 */
-	FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0);
-	FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1);
+	FM10K_WRITE_REG(hw, FM10K_RDH(hw_queue_id), 0);
+	FM10K_WRITE_REG(hw, FM10K_RDT(hw_queue_id), rxq->nb_desc - 1);
 
 	/* Set PF ownership flag for PF devices */
-	reg = FM10K_READ_REG(hw, FM10K_RXQCTL(rx_queue_id));
+	reg = FM10K_READ_REG(hw, FM10K_RXQCTL(hw_queue_id));
 	if (hw->mac.type == fm10k_mac_pf)
 		reg |= FM10K_RXQCTL_PF;
 	reg |= FM10K_RXQCTL_ENABLE; /* enable RX queue */
-	FM10K_WRITE_REG(hw, FM10K_RXQCTL(rx_queue_id), reg);
+	FM10K_WRITE_REG(hw, FM10K_RXQCTL(hw_queue_id), reg);
 	FM10K_WRITE_FLUSH(hw);
 
 	/* Setup the HW Rx Head and Tail Descriptor Pointers
 	 * Note: this must be done AFTER the queue is enabled
 	 */
-	FM10K_WRITE_REG(hw, FM10K_RDH(rx_queue_id), 0);
-	FM10K_WRITE_REG(hw, FM10K_RDT(rx_queue_id), rxq->nb_desc - 1);
+	FM10K_WRITE_REG(hw, FM10K_RDH(hw_queue_id), 0);
+	FM10K_WRITE_REG(hw, FM10K_RDT(hw_queue_id), rxq->nb_desc - 1);
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
 
 	return 0;
@@ -883,22 +961,39 @@ struct fm10k_xstats_name_off {
 static int
 fm10k_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	int ret;
+#endif
 	/** @todo - this should be defined in the shared code */
 #define FM10K_TXDCTL_WRITE_BACK_MIN_DELAY	0x00010000
 	uint32_t txdctl = FM10K_TXDCTL_WRITE_BACK_MIN_DELAY;
 	struct fm10k_tx_queue *q = dev->data->tx_queues[tx_queue_id];
+	uint16_t hw_queue_id = tx_queue_id;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	ret = fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+			tx_queue_id, dev->data->nb_tx_queues, &hw, &hw_queue_id);
+	if (ret < 0)
+		return -EIO;
+	else if (ret != 1)
+		return 0;
+#endif
+
 	q->ops->reset(q);
 
 	/* reset head and tail pointers */
-	FM10K_WRITE_REG(hw, FM10K_TDH(tx_queue_id), 0);
-	FM10K_WRITE_REG(hw, FM10K_TDT(tx_queue_id), 0);
+	FM10K_WRITE_REG(hw, FM10K_TDH(hw_queue_id), 0);
+	FM10K_WRITE_REG(hw, FM10K_TDT(hw_queue_id), 0);
 
 	/* enable TX queue */
-	FM10K_WRITE_REG(hw, FM10K_TXDCTL(tx_queue_id),
+	FM10K_WRITE_REG(hw, FM10K_TXDCTL(hw_queue_id),
 			FM10K_TXDCTL_ENABLE | txdctl);
 	FM10K_WRITE_FLUSH(hw);
 	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -1089,9 +1184,22 @@ static inline int fm10k_glort_valid(struct fm10k_hw *hw)
 {
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	int i, diag;
+#ifdef ENABLE_FM10K_MANAGEMENT
+	struct fm10k_hw *mapped_hws[2];
+	int j, mapped_num;
+	uint32_t data;
+#endif
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	mapped_num = fm10k_switch_dpdk_mapped_hw_get(hw, mapped_hws);
+	if (mapped_num < 0 || mapped_num > 2)
+		return -EIO;
+#endif
+
+
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* stop, init, then start the hw */
 	diag = fm10k_stop_hw(hw);
 	if (diag != FM10K_SUCCESS) {
@@ -1110,6 +1218,62 @@ static inline int fm10k_glort_valid(struct fm10k_hw *hw)
 		PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag);
 		return -EIO;
 	}
+#else
+	for (j = 0; j < mapped_num; j++) {
+		struct rte_pci_device *pdev =
+			RTE_ETH_DEV_TO_PCI((struct rte_eth_dev *)
+			(fm10k_switch_dpdk_port_rte_dev_get(mapped_hws[j])));
+		struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+
+		/* stop, init, then start the hw */
+		diag = fm10k_stop_hw(mapped_hws[j]);
+		if (diag != FM10K_SUCCESS) {
+			PMD_INIT_LOG(ERR, "Hardware stop failed: %d", diag);
+			return -EIO;
+		}
+
+		diag = fm10k_init_hw(mapped_hws[j]);
+		if (diag != FM10K_SUCCESS) {
+			PMD_INIT_LOG(ERR, "Hardware init failed: %d", diag);
+			return -EIO;
+		}
+
+		diag = fm10k_start_hw(mapped_hws[j]);
+		if (diag != FM10K_SUCCESS) {
+			PMD_INIT_LOG(ERR, "Hardware start failed: %d", diag);
+			return -EIO;
+		}
+
+		/* Disable TXINT to avoid possible interrupt */
+		for (i = 0; i < hw->mac.max_queues; i++)
+			FM10K_WRITE_REG(mapped_hws[j], FM10K_TXINT(i),
+					3 << FM10K_TXINT_TIMER_SHIFT);
+
+		/* enable RXINT for interrupt mode */
+		i = 0;
+		if (rte_intr_dp_is_en(intr_handle)) {
+			for (; i < dev->data->nb_rx_queues; i++) {
+				FM10K_WRITE_REG(mapped_hws[j],
+					FM10K_RXINT(i), Q2V(pdev, i));
+				if (mapped_hws[j]->mac.type == fm10k_mac_pf)
+					FM10K_WRITE_REG(mapped_hws[j],
+						FM10K_ITR(Q2V(pdev, i)),
+						FM10K_ITR_AUTOMASK |
+						FM10K_ITR_MASK_CLEAR);
+				else
+					FM10K_WRITE_REG(mapped_hws[j],
+						FM10K_VFITR(Q2V(pdev, i)),
+						FM10K_ITR_AUTOMASK |
+						FM10K_ITR_MASK_CLEAR);
+			}
+		}
+
+		/* Disable other RXINT to avoid possible interrupt */
+		for (; i < hw->mac.max_queues; i++)
+			FM10K_WRITE_REG(mapped_hws[j], FM10K_RXINT(i),
+					3 << FM10K_RXINT_TIMER_SHIFT);
+	}
+#endif
 
 	diag = fm10k_dev_tx_init(dev);
 	if (diag) {
@@ -1161,12 +1325,32 @@ static inline int fm10k_glort_valid(struct fm10k_hw *hw)
 		}
 	}
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	/* Update default vlan when not in VMDQ mode */
 	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
 		fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
+#endif
 
 	fm10k_link_update(dev, 0);
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	/* Admit all VLANs */
+	for (j = 0; j <= 64; j++) {
+		for (i = 0; i < FM10K_SW_VLAN_TABLE_ENTRIES; i++)
+			FM10K_WRITE_REG(hw,
+				FM10K_SW_VLAN_TABLE_ENTRY(j, i),
+				0xffffffff);
+	}
+
+	/* Disable PEP 1loopback */
+	/* XXX Does this need to be done by the master
+	 * PEP while the switch is in reset?
+	 */
+	data = FM10K_READ_REG(hw, FM10K_CTRL_EXT);
+	data &= ~FM10K_SW_CTRL_EXT_SWITCH_LOOPBACK;
+	FM10K_WRITE_REG(hw, FM10K_CTRL_EXT, data);
+#endif
+
 	return 0;
 }
 
@@ -1327,17 +1511,41 @@ static int fm10k_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 fm10k_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
 	uint64_t ipackets, opackets, ibytes, obytes, imissed;
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct fm10k_hw *mapped_hws[2];
+	int mapped_num;
+	uint16_t hw_queue_id;
+#endif
 	struct fm10k_hw_stats *hw_stats =
 		FM10K_DEV_PRIVATE_TO_STATS(dev->data->dev_private);
 	int i;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifndef ENABLE_FM10K_MANAGEMENT
 	fm10k_update_hw_stats(hw, hw_stats);
+#else
+	mapped_num = fm10k_switch_dpdk_mapped_hw_get(unmap_hw, mapped_hws);
+	if (mapped_num < 0 || mapped_num > 2)
+		return -EIO;
+
+	for (i = 0; i < mapped_num; i++) {
+		struct rte_eth_dev *mydev =
+			fm10k_switch_dpdk_port_rte_dev_get(mapped_hws[i]);
+		hw_stats = FM10K_DEV_PRIVATE_TO_STATS(mydev->data->dev_private);
+		fm10k_update_hw_stats(mapped_hws[i], hw_stats);
+	}
+#endif
 
 	ipackets = opackets = ibytes = obytes = imissed = 0;
+
+#ifndef ENABLE_FM10K_MANAGEMENT
 	for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
 		(i < hw->mac.max_queues); ++i) {
 		stats->q_ipackets[i] = hw_stats->q[i].rx_packets.count;
@@ -1351,6 +1559,36 @@ static int fm10k_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		obytes += stats->q_obytes[i];
 		imissed += stats->q_errors[i];
 	}
+#else
+	if (mapped_num) {
+		for (i = 0; (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) &&
+			(i < unmap_hw->mac.max_queues); ++i) {
+			hw_queue_id = i;
+			fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+				i, unmap_hw->mac.max_queues,
+				&hw, &hw_queue_id);
+			if (mapped_hws[1]) {
+				struct rte_eth_dev *mydev;
+				mydev = fm10k_switch_dpdk_port_rte_dev_get(hw);
+				hw_stats =
+					FM10K_DEV_PRIVATE_TO_STATS
+					(mydev->data->dev_private);
+			}
+			stats->q_ipackets[i] =
+				hw_stats->q[hw_queue_id].rx_packets.count;
+			stats->q_opackets[i] =
+				hw_stats->q[hw_queue_id].tx_packets.count;
+			stats->q_ibytes[i] =
+				hw_stats->q[hw_queue_id].rx_bytes.count;
+			stats->q_obytes[i] =
+				hw_stats->q[hw_queue_id].tx_bytes.count;
+			ipackets += stats->q_ipackets[i];
+			opackets += stats->q_opackets[i];
+			ibytes += stats->q_ibytes[i];
+			obytes += stats->q_obytes[i];
+		}
+	}
+#endif
 	stats->ipackets = ipackets;
 	stats->opackets = opackets;
 	stats->ibytes = ibytes;
@@ -1821,15 +2059,29 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint16_t nb_desc, unsigned int socket_id,
 	const struct rte_eth_rxconf *conf, struct rte_mempool *mp)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#endif
 	struct fm10k_dev_info *dev_info =
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	struct fm10k_rx_queue *q;
 	const struct rte_memzone *mz;
 	uint64_t offloads;
+	uint16_t hw_queue_id = queue_id;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	if (fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+			queue_id, dev->data->nb_rx_queues,
+			&hw, &hw_queue_id) < 0)
+		return -EIO;
+#endif
+
 	offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
 
 	/* make sure the mempool element size can account for alignment. */
@@ -1875,7 +2127,7 @@ static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 	q->port_id = dev->data->port_id;
 	q->queue_id = queue_id;
 	q->tail_ptr = (volatile uint32_t *)
-		&((uint32_t *)hw->hw_addr)[FM10K_RDT(queue_id)];
+		&((uint32_t *)hw->hw_addr)[FM10K_RDT(hw_queue_id)];
 	q->offloads = offloads;
 	if (handle_rxconf(q, conf))
 		return -EINVAL;
@@ -2010,13 +2262,27 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint16_t nb_desc, unsigned int socket_id,
 	const struct rte_eth_txconf *conf)
 {
+#ifndef ENABLE_FM10K_MANAGEMENT
 	struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#else
+	struct fm10k_hw *hw;
+	struct fm10k_hw *unmap_hw =
+		FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+#endif
 	struct fm10k_tx_queue *q;
 	const struct rte_memzone *mz;
 	uint64_t offloads;
+	uint16_t hw_queue_id = queue_id;
 
 	PMD_INIT_FUNC_TRACE();
 
+#ifdef ENABLE_FM10K_MANAGEMENT
+	if (fm10k_switch_dpdk_hw_queue_map(unmap_hw,
+			queue_id, dev->data->nb_tx_queues,
+			&hw, &hw_queue_id) < 0)
+		return -EIO;
+#endif
+
 	offloads = conf->offloads | dev->data->dev_conf.txmode.offloads;
 
 	/* make sure a valid number of descriptors have been requested */
@@ -2058,7 +2324,7 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	q->offloads = offloads;
 	q->ops = &def_txq_ops;
 	q->tail_ptr = (volatile uint32_t *)
-		&((uint32_t *)hw->hw_addr)[FM10K_TDT(queue_id)];
+		&((uint32_t *)hw->hw_addr)[FM10K_TDT(hw_queue_id)];
 	if (handle_txconf(q, conf))
 		return -EINVAL;
 
-- 
1.8.3.1
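[Editorial note, appended after the patch for review and not part of it: a
small self-contained sketch of the return-value handling that
fm10k_dev_rx_queue_start() and fm10k_dev_tx_queue_start() appear to rely on
when calling fm10k_switch_dpdk_hw_queue_map(), as far as it can be inferred
from this diff: a negative return is an error, 1 means the queue belongs to
this port's own PF and must be programmed here, and any other non-negative
value means the queue is reached through the mapped reference port and
needs no start. mock_queue_map() and its odd/even choice are stand-ins, not
the driver's real behaviour; the real helper comes from the switch
management code.]

#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for fm10k_switch_dpdk_hw_queue_map(). */
static int
mock_queue_map(int queue_id, int *hw_queue_id)
{
	*hw_queue_id = queue_id;
	return (queue_id % 2) ? 0 : 1; /* pretend odd queues are remote */
}

static int
mock_rx_queue_start(int queue_id)
{
	int hw_queue_id;
	int ret = mock_queue_map(queue_id, &hw_queue_id);

	if (ret < 0)
		return -EIO;	/* mapping failed */
	else if (ret != 1)
		return 0;	/* reference port's queue: nothing to do */

	/* the PMD would now program RDH/RDT and RXQCTL for hw_queue_id */
	printf("programming hw queue %d\n", hw_queue_id);
	return 0;
}

int
main(void)
{
	int q;

	for (q = 0; q < 4; q++)
		mock_rx_queue_start(q);
	return 0;
}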