From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ravi Kumar
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com
Date: Thu, 5 Apr 2018 02:39:42 -0400
Message-Id: <1522910389-35530-10-git-send-email-Ravi1.kumar@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1522910389-35530-1-git-send-email-Ravi1.kumar@amd.com>
References: <1520584954-130575-1-git-send-email-Ravi1.kumar@amd.com> <1522910389-35530-1-git-send-email-Ravi1.kumar@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v4 10/17] net/axgbe: add transmit and receive data path apis
List-Id: DPDK patches and discussions
X-List-Received-Date: Thu, 05 Apr 2018 06:40:27 -0000

Support a scalar implementation for the Rx data path.
Support scalar and vector (SSE) implementations for the Tx data path.

Signed-off-by: Ravi Kumar
---
 drivers/net/axgbe/Makefile             |   1 +
 drivers/net/axgbe/axgbe_ethdev.c       |  22 +-
 drivers/net/axgbe/axgbe_rxtx.c         | 429 +++++++++++++++++++++++++++++++++
 drivers/net/axgbe/axgbe_rxtx.h         |  19 ++
 drivers/net/axgbe/axgbe_rxtx_vec_sse.c |  93 +++++++
 5 files
changed, 563 insertions(+), 1 deletion(-) create mode 100644 drivers/net/axgbe/axgbe_rxtx_vec_sse.c diff --git a/drivers/net/axgbe/Makefile b/drivers/net/axgbe/Makefile index 9fd7b5e..aff7917 100644 --- a/drivers/net/axgbe/Makefile +++ b/drivers/net/axgbe/Makefile @@ -24,5 +24,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_mdio.c SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_phy_impl.c SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_i2c.c SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_rxtx.c +SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_rxtx_vec_sse.c include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c index f8cfbd8..a293058 100644 --- a/drivers/net/axgbe/axgbe_ethdev.c +++ b/drivers/net/axgbe/axgbe_ethdev.c @@ -102,9 +102,22 @@ axgbe_dev_interrupt_handler(void *param) { struct rte_eth_dev *dev = (struct rte_eth_dev *)param; struct axgbe_port *pdata = dev->data->dev_private; + unsigned int dma_isr, dma_ch_isr; pdata->phy_if.an_isr(pdata); - + /*DMA related interrupts*/ + dma_isr = AXGMAC_IOREAD(pdata, DMA_ISR); + if (dma_isr) { + if (dma_isr & 1) { + dma_ch_isr = + AXGMAC_DMA_IOREAD((struct axgbe_rx_queue *) + pdata->rx_queues[0], + DMA_CH_SR); + AXGMAC_DMA_IOWRITE((struct axgbe_rx_queue *) + pdata->rx_queues[0], + DMA_CH_SR, dma_ch_isr); + } + } /* Enable interrupts since disabled after generation*/ rte_intr_enable(&pdata->pci_dev->intr_handle); } @@ -166,6 +179,8 @@ axgbe_dev_start(struct rte_eth_dev *dev) /* phy start*/ pdata->phy_if.phy_start(pdata); + axgbe_dev_enable_tx(dev); + axgbe_dev_enable_rx(dev); axgbe_clear_bit(AXGBE_STOPPED, &pdata->dev_state); axgbe_clear_bit(AXGBE_DOWN, &pdata->dev_state); @@ -185,6 +200,8 @@ axgbe_dev_stop(struct rte_eth_dev *dev) return; axgbe_set_bit(AXGBE_STOPPED, &pdata->dev_state); + axgbe_dev_disable_tx(dev); + axgbe_dev_disable_rx(dev); pdata->phy_if.phy_stop(pdata); pdata->hw_if.exit(pdata); @@ -423,6 +440,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev) int 
ret; eth_dev->dev_ops = &axgbe_eth_dev_ops; + eth_dev->rx_pkt_burst = &axgbe_recv_pkts; /* * For secondary processes, we don't initialise any further as primary @@ -573,6 +591,8 @@ eth_axgbe_dev_uninit(struct rte_eth_dev *eth_dev) rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; eth_dev->dev_ops = NULL; + eth_dev->rx_pkt_burst = NULL; + eth_dev->tx_pkt_burst = NULL; axgbe_dev_clear_queues(eth_dev); /* disable uio intr before callback unregister */ diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c index 1dff7c8..cdc428c 100644 --- a/drivers/net/axgbe/axgbe_rxtx.c +++ b/drivers/net/axgbe/axgbe_rxtx.c @@ -113,6 +113,197 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, return 0; } +static void axgbe_prepare_rx_stop(struct axgbe_port *pdata, + unsigned int queue) +{ + unsigned int rx_status; + unsigned long rx_timeout; + + /* The Rx engine cannot be stopped if it is actively processing + * packets. Wait for the Rx queue to empty the Rx fifo. Don't + * wait forever though... 
+ */ + rx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT * + rte_get_timer_hz()); + + while (time_before(rte_get_timer_cycles(), rx_timeout)) { + rx_status = AXGMAC_MTL_IOREAD(pdata, queue, MTL_Q_RQDR); + if ((AXGMAC_GET_BITS(rx_status, MTL_Q_RQDR, PRXQ) == 0) && + (AXGMAC_GET_BITS(rx_status, MTL_Q_RQDR, RXQSTS) == 0)) + break; + + rte_delay_us(900); + } + + if (!time_before(rte_get_timer_cycles(), rx_timeout)) + PMD_DRV_LOG(ERR, + "timed out waiting for Rx queue %u to empty\n", + queue); +} + +void axgbe_dev_disable_rx(struct rte_eth_dev *dev) +{ + struct axgbe_rx_queue *rxq; + struct axgbe_port *pdata = dev->data->dev_private; + unsigned int i; + + /* Disable MAC Rx */ + AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, DCRCC, 0); + AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, CST, 0); + AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, ACS, 0); + AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, RE, 0); + + /* Prepare for Rx DMA channel stop */ + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + axgbe_prepare_rx_stop(pdata, i); + } + /* Disable each Rx queue */ + AXGMAC_IOWRITE(pdata, MAC_RQC0R, 0); + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + /* Disable Rx DMA channel */ + AXGMAC_DMA_IOWRITE_BITS(rxq, DMA_CH_RCR, SR, 0); + } +} + +void axgbe_dev_enable_rx(struct rte_eth_dev *dev) +{ + struct axgbe_rx_queue *rxq; + struct axgbe_port *pdata = dev->data->dev_private; + unsigned int i; + unsigned int reg_val = 0; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev->data->rx_queues[i]; + /* Enable Rx DMA channel */ + AXGMAC_DMA_IOWRITE_BITS(rxq, DMA_CH_RCR, SR, 1); + } + + reg_val = 0; + for (i = 0; i < pdata->rx_q_count; i++) + reg_val |= (0x02 << (i << 1)); + AXGMAC_IOWRITE(pdata, MAC_RQC0R, reg_val); + + /* Enable MAC Rx */ + AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, DCRCC, 1); + /* Frame is forwarded after stripping CRC to application*/ + if (pdata->crc_strip_enable) { + AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, CST, 1); + 
AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, ACS, 1); + } + AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, RE, 1); +} + +/* Rx function one to one refresh */ +uint16_t +axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + PMD_INIT_FUNC_TRACE(); + uint16_t nb_rx = 0; + struct axgbe_rx_queue *rxq = rx_queue; + volatile union axgbe_rx_desc *desc; + uint64_t old_dirty = rxq->dirty; + struct rte_mbuf *mbuf, *tmbuf; + unsigned int err; + uint32_t error_status; + uint16_t idx, pidx, pkt_len; + + idx = AXGBE_GET_DESC_IDX(rxq, rxq->cur); + while (nb_rx < nb_pkts) { + if (unlikely(idx == rxq->nb_desc)) + idx = 0; + + desc = &rxq->desc[idx]; + + if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, OWN)) + break; + tmbuf = rte_mbuf_raw_alloc(rxq->mb_pool); + if (unlikely(!tmbuf)) { + PMD_DRV_LOG(ERR, "RX mbuf alloc failed port_id = %u" + " queue_id = %u\n", + (unsigned int)rxq->port_id, + (unsigned int)rxq->queue_id); + rte_eth_devices[ + rxq->port_id].data->rx_mbuf_alloc_failed++; + break; + } + pidx = idx + 1; + if (unlikely(pidx == rxq->nb_desc)) + pidx = 0; + + rte_prefetch0(rxq->sw_ring[pidx]); + if ((pidx & 0x3) == 0) { + rte_prefetch0(&rxq->desc[pidx]); + rte_prefetch0(&rxq->sw_ring[pidx]); + } + + mbuf = rxq->sw_ring[idx]; + /* Check for any errors and free mbuf*/ + err = AXGMAC_GET_BITS_LE(desc->write.desc3, + RX_NORMAL_DESC3, ES); + error_status = 0; + if (unlikely(err)) { + error_status = desc->write.desc3 & AXGBE_ERR_STATUS; + if ((error_status != AXGBE_L3_CSUM_ERR) && + (error_status != AXGBE_L4_CSUM_ERR)) { + rxq->errors++; + rte_pktmbuf_free(mbuf); + goto err_set; + } + } + if (rxq->pdata->rx_csum_enable) { + mbuf->ol_flags = 0; + mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD; + mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD; + if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) { + mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD; + mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD; + mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD; + mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN; + } else if ( 
+ unlikely(error_status == AXGBE_L4_CSUM_ERR)) { + mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD; + mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD; + } + } + rte_prefetch1(rte_pktmbuf_mtod(mbuf, void *)); + /* Get the RSS hash */ + if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, RSV)) + mbuf->hash.rss = rte_le_to_cpu_32(desc->write.desc1); + pkt_len = AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, + PL) - rxq->crc_len; + /* Mbuf populate */ + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + mbuf->pkt_len = pkt_len; + mbuf->data_len = pkt_len; + rxq->bytes += pkt_len; + rx_pkts[nb_rx++] = mbuf; +err_set: + rxq->cur++; + rxq->sw_ring[idx++] = tmbuf; + desc->read.baddr = + rte_cpu_to_le_64(rte_mbuf_data_iova_default(tmbuf)); + memset((void *)(&desc->read.desc2), 0, 8); + AXGMAC_SET_BITS_LE(desc->read.desc3, RX_NORMAL_DESC3, OWN, 1); + rxq->dirty++; + } + rxq->pkts += nb_rx; + if (rxq->dirty != old_dirty) { + rte_wmb(); + idx = AXGBE_GET_DESC_IDX(rxq, rxq->dirty - 1); + AXGMAC_DMA_IOWRITE(rxq, DMA_CH_RDTR_LO, + low32_value(rxq->ring_phys_addr + + (idx * sizeof(union axgbe_rx_desc)))); + } + + return nb_rx; +} + /* Tx Apis */ static void axgbe_tx_queue_release(struct axgbe_tx_queue *tx_queue) { @@ -174,6 +365,10 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, txq->free_thresh = (txq->nb_desc >> 1); txq->free_batch_cnt = txq->free_thresh; + /* In vector_tx path threshold should be multiple of queue_size*/ + if (txq->nb_desc % txq->free_thresh != 0) + txq->vector_disable = 1; + if ((tx_conf->txq_flags & (uint32_t)ETH_TXQ_FLAGS_NOOFFLOADS) != ETH_TXQ_FLAGS_NOOFFLOADS) { txq->vector_disable = 1; @@ -211,9 +406,243 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, if (!pdata->tx_queues) pdata->tx_queues = dev->data->tx_queues; + if (txq->vector_disable) + dev->tx_pkt_burst = &axgbe_xmit_pkts; + else + dev->tx_pkt_burst = &axgbe_xmit_pkts_vec; + return 0; 
} +static void axgbe_txq_prepare_tx_stop(struct axgbe_port *pdata, + unsigned int queue) +{ + unsigned int tx_status; + unsigned long tx_timeout; + + /* The Tx engine cannot be stopped if it is actively processing + * packets. Wait for the Tx queue to empty the Tx fifo. Don't + * wait forever though... + */ + tx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT * + rte_get_timer_hz()); + while (time_before(rte_get_timer_cycles(), tx_timeout)) { + tx_status = AXGMAC_MTL_IOREAD(pdata, queue, MTL_Q_TQDR); + if ((AXGMAC_GET_BITS(tx_status, MTL_Q_TQDR, TRCSTS) != 1) && + (AXGMAC_GET_BITS(tx_status, MTL_Q_TQDR, TXQSTS) == 0)) + break; + + rte_delay_us(900); + } + + if (!time_before(rte_get_timer_cycles(), tx_timeout)) + PMD_DRV_LOG(ERR, + "timed out waiting for Tx queue %u to empty\n", + queue); +} + +static void axgbe_prepare_tx_stop(struct axgbe_port *pdata, + unsigned int queue) +{ + unsigned int tx_dsr, tx_pos, tx_qidx; + unsigned int tx_status; + unsigned long tx_timeout; + + if (AXGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER) > 0x20) + return axgbe_txq_prepare_tx_stop(pdata, queue); + + /* Calculate the status register to read and the position within */ + if (queue < DMA_DSRX_FIRST_QUEUE) { + tx_dsr = DMA_DSR0; + tx_pos = (queue * DMA_DSR_Q_WIDTH) + DMA_DSR0_TPS_START; + } else { + tx_qidx = queue - DMA_DSRX_FIRST_QUEUE; + + tx_dsr = DMA_DSR1 + ((tx_qidx / DMA_DSRX_QPR) * DMA_DSRX_INC); + tx_pos = ((tx_qidx % DMA_DSRX_QPR) * DMA_DSR_Q_WIDTH) + + DMA_DSRX_TPS_START; + } + + /* The Tx engine cannot be stopped if it is actively processing + * descriptors. Wait for the Tx engine to enter the stopped or + * suspended state. Don't wait forever though... 
+ */ + tx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT * + rte_get_timer_hz()); + while (time_before(rte_get_timer_cycles(), tx_timeout)) { + tx_status = AXGMAC_IOREAD(pdata, tx_dsr); + tx_status = GET_BITS(tx_status, tx_pos, DMA_DSR_TPS_WIDTH); + if ((tx_status == DMA_TPS_STOPPED) || + (tx_status == DMA_TPS_SUSPENDED)) + break; + + rte_delay_us(900); + } + + if (!time_before(rte_get_timer_cycles(), tx_timeout)) + PMD_DRV_LOG(ERR, + "timed out waiting for Tx DMA channel %u to stop\n", + queue); +} + +void axgbe_dev_disable_tx(struct rte_eth_dev *dev) +{ + struct axgbe_tx_queue *txq; + struct axgbe_port *pdata = dev->data->dev_private; + unsigned int i; + + /* Prepare for stopping DMA channel */ + for (i = 0; i < pdata->tx_q_count; i++) { + txq = dev->data->tx_queues[i]; + axgbe_prepare_tx_stop(pdata, i); + } + /* Disable MAC Tx */ + AXGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 0); + /* Disable each Tx queue*/ + for (i = 0; i < pdata->tx_q_count; i++) + AXGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN, + 0); + /* Disable each Tx DMA channel */ + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = dev->data->tx_queues[i]; + AXGMAC_DMA_IOWRITE_BITS(txq, DMA_CH_TCR, ST, 0); + } +} + +void axgbe_dev_enable_tx(struct rte_eth_dev *dev) +{ + struct axgbe_tx_queue *txq; + struct axgbe_port *pdata = dev->data->dev_private; + unsigned int i; + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = dev->data->tx_queues[i]; + /* Enable Tx DMA channel */ + AXGMAC_DMA_IOWRITE_BITS(txq, DMA_CH_TCR, ST, 1); + } + /* Enable Tx queue*/ + for (i = 0; i < pdata->tx_q_count; i++) + AXGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN, + MTL_Q_ENABLED); + /* Enable MAC Tx */ + AXGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 1); +} + +/* Free Tx conformed mbufs */ +static void axgbe_xmit_cleanup(struct axgbe_tx_queue *txq) +{ + volatile struct axgbe_tx_desc *desc; + uint16_t idx; + + idx = AXGBE_GET_DESC_IDX(txq, txq->dirty); + while (txq->cur != txq->dirty) { + if 
(unlikely(idx == txq->nb_desc)) + idx = 0; + desc = &txq->desc[idx]; + /* Check for ownership */ + if (AXGMAC_GET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN)) + return; + memset((void *)&desc->desc2, 0, 8); + /* Free mbuf */ + rte_pktmbuf_free(txq->sw_ring[idx]); + txq->sw_ring[idx++] = NULL; + txq->dirty++; + } +} + +/* Tx Descriptor formation + * Considering each mbuf requires one desc + * mbuf is linear + */ +static int axgbe_xmit_hw(struct axgbe_tx_queue *txq, + struct rte_mbuf *mbuf) +{ + volatile struct axgbe_tx_desc *desc; + uint16_t idx; + uint64_t mask; + + idx = AXGBE_GET_DESC_IDX(txq, txq->cur); + desc = &txq->desc[idx]; + + /* Update buffer address and length */ + desc->baddr = rte_mbuf_data_iova(mbuf); + AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, HL_B1L, + mbuf->pkt_len); + /* Total msg length to transmit */ + AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FL, + mbuf->pkt_len); + /* Mark it as First and Last Descriptor */ + AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FD, 1); + AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, LD, 1); + /* Mark it as a NORMAL descriptor */ + AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0); + /* configure h/w Offload */ + mask = mbuf->ol_flags & PKT_TX_L4_MASK; + if ((mask == PKT_TX_TCP_CKSUM) || (mask == PKT_TX_UDP_CKSUM)) + AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x3); + else if (mbuf->ol_flags & PKT_TX_IP_CKSUM) + AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1); + rte_wmb(); + + /* Set OWN bit */ + AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN, 1); + rte_wmb(); + + /* Save mbuf */ + txq->sw_ring[idx] = mbuf; + /* Update current index*/ + txq->cur++; + /* Update stats */ + txq->bytes += mbuf->pkt_len; + + return 0; +} + +/* Eal supported tx wrapper*/ +uint16_t +axgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + PMD_INIT_FUNC_TRACE(); + + if (unlikely(nb_pkts == 0)) + return nb_pkts; + + struct axgbe_tx_queue *txq; + uint16_t 
nb_desc_free; + uint16_t nb_pkt_sent = 0; + uint16_t idx; + uint32_t tail_addr; + struct rte_mbuf *mbuf; + + txq = (struct axgbe_tx_queue *)tx_queue; + nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty); + + if (unlikely(nb_desc_free <= txq->free_thresh)) { + axgbe_xmit_cleanup(txq); + nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty); + if (unlikely(nb_desc_free == 0)) + return 0; + } + nb_pkts = RTE_MIN(nb_desc_free, nb_pkts); + while (nb_pkts--) { + mbuf = *tx_pkts++; + if (axgbe_xmit_hw(txq, mbuf)) + goto out; + nb_pkt_sent++; + } +out: + /* Sync read and write */ + rte_mb(); + idx = AXGBE_GET_DESC_IDX(txq, txq->cur); + tail_addr = low32_value(txq->ring_phys_addr + + idx * sizeof(struct axgbe_tx_desc)); + /* Update tail reg with next immediate address to kick Tx DMA channel*/ + AXGMAC_DMA_IOWRITE(txq, DMA_CH_TDTR_LO, tail_addr); + txq->pkts += nb_pkt_sent; + return nb_pkt_sent; +} + void axgbe_dev_clear_queues(struct rte_eth_dev *dev) { PMD_INIT_FUNC_TRACE(); diff --git a/drivers/net/axgbe/axgbe_rxtx.h b/drivers/net/axgbe/axgbe_rxtx.h index 1b88d7a..f221cc3 100644 --- a/drivers/net/axgbe/axgbe_rxtx.h +++ b/drivers/net/axgbe/axgbe_rxtx.h @@ -156,12 +156,31 @@ void axgbe_dev_tx_queue_release(void *txq); int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); +void axgbe_dev_enable_tx(struct rte_eth_dev *dev); +void axgbe_dev_disable_tx(struct rte_eth_dev *dev); +int axgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id); +int axgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); + +uint16_t axgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); +uint16_t axgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + void axgbe_dev_rx_queue_release(void *rxq); int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id, uint16_t nb_rx_desc, unsigned int 
socket_id, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool); +void axgbe_dev_enable_rx(struct rte_eth_dev *dev); +void axgbe_dev_disable_rx(struct rte_eth_dev *dev); +int axgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id); +int axgbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id); +uint16_t axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); +uint16_t axgbe_recv_pkts_threshold_refresh(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); void axgbe_dev_clear_queues(struct rte_eth_dev *dev); #endif /* _AXGBE_RXTX_H_ */ diff --git a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c new file mode 100644 index 0000000..9be7037 --- /dev/null +++ b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Advanced Micro Devices, Inc. All rights reserved. + * Copyright(c) 2018 Synopsys, Inc. All rights reserved. 
+ */ + +#include "axgbe_ethdev.h" +#include "axgbe_rxtx.h" +#include "axgbe_phy.h" + +#include +#include +#include + +/* Useful to avoid shifting for every descriptor prepration*/ +#define TX_DESC_CTRL_FLAGS 0xb000000000000000 +#define TX_FREE_BULK 8 +#define TX_FREE_BULK_CHECK (TX_FREE_BULK - 1) + +static inline void +axgbe_vec_tx(volatile struct axgbe_tx_desc *desc, + struct rte_mbuf *mbuf) +{ + __m128i descriptor = _mm_set_epi64x((uint64_t)mbuf->pkt_len << 32 | + TX_DESC_CTRL_FLAGS | mbuf->data_len, + mbuf->buf_iova + + mbuf->data_off); + _mm_store_si128((__m128i *)desc, descriptor); +} + +static void +axgbe_xmit_cleanup_vec(struct axgbe_tx_queue *txq) +{ + volatile struct axgbe_tx_desc *desc; + int idx, i; + + idx = AXGBE_GET_DESC_IDX(txq, txq->dirty + txq->free_batch_cnt + - 1); + desc = &txq->desc[idx]; + if (desc->desc3 & AXGBE_DESC_OWN) + return; + /* memset avoided for desc ctrl fields since in vec_tx path + * all 128 bits are populated + */ + for (i = 0; i < txq->free_batch_cnt; i++, idx--) + rte_pktmbuf_free_seg(txq->sw_ring[idx]); + + + txq->dirty += txq->free_batch_cnt; + txq->nb_desc_free += txq->free_batch_cnt; +} + +uint16_t +axgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + PMD_INIT_FUNC_TRACE(); + + struct axgbe_tx_queue *txq; + uint16_t idx, nb_commit, loop, i; + uint32_t tail_addr; + + txq = (struct axgbe_tx_queue *)tx_queue; + if (txq->nb_desc_free < txq->free_thresh) { + axgbe_xmit_cleanup_vec(txq); + if (unlikely(txq->nb_desc_free == 0)) + return 0; + } + nb_pkts = RTE_MIN(txq->nb_desc_free, nb_pkts); + nb_commit = nb_pkts; + idx = AXGBE_GET_DESC_IDX(txq, txq->cur); + loop = txq->nb_desc - idx; + if (nb_commit >= loop) { + for (i = 0; i < loop; ++i, ++idx, ++tx_pkts) { + axgbe_vec_tx(&txq->desc[idx], *tx_pkts); + txq->sw_ring[idx] = *tx_pkts; + } + nb_commit -= loop; + idx = 0; + } + for (i = 0; i < nb_commit; ++i, ++idx, ++tx_pkts) { + axgbe_vec_tx(&txq->desc[idx], *tx_pkts); + txq->sw_ring[idx] = 
*tx_pkts; + } + txq->cur += nb_pkts; + tail_addr = (uint32_t)(txq->ring_phys_addr + + idx * sizeof(struct axgbe_tx_desc)); + /* Update tail reg with next immediate address to kick Tx DMA channel*/ + rte_write32(tail_addr, (void *)txq->dma_tail_reg); + txq->pkts += nb_pkts; + txq->nb_desc_free -= nb_pkts; + + return nb_pkts; +} -- 2.7.4