From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ravi Kumar <Ravi1.Kumar@amd.com>
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com
Date: Fri, 6 Apr 2018 08:36:43 -0400
Message-Id: <1523018211-65765-10-git-send-email-Ravi1.kumar@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1523018211-65765-1-git-send-email-Ravi1.kumar@amd.com>
References: <1522910389-35530-1-git-send-email-Ravi1.kumar@amd.com>
 <1523018211-65765-1-git-send-email-Ravi1.kumar@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v5 10/18] net/axgbe: add transmit and receive data path apis
List-Id: DPDK patches and discussions
X-List-Received-Date: Fri, 06 Apr 2018 12:37:31 -0000

Add a scalar implementation for the Rx data path.
Add scalar and vector (SSE) implementations for the Tx data path.
Signed-off-by: Ravi Kumar <Ravi1.kumar@amd.com>
---
 drivers/net/axgbe/Makefile             |   3 +
 drivers/net/axgbe/axgbe_ethdev.c       |  22 +-
 drivers/net/axgbe/axgbe_rxtx.c         | 433 +++++++++++++++++++++++++++++++++
 drivers/net/axgbe/axgbe_rxtx.h         |  19 ++
 drivers/net/axgbe/axgbe_rxtx_vec_sse.c |  93 +++++++
 5 files changed, 569 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/axgbe/axgbe_rxtx_vec_sse.c

diff --git a/drivers/net/axgbe/Makefile b/drivers/net/axgbe/Makefile
index e1e5f53..72215ae 100644
--- a/drivers/net/axgbe/Makefile
+++ b/drivers/net/axgbe/Makefile
@@ -28,5 +28,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_mdio.c
 SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_phy_impl.c
 SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_i2c.c
 SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_rxtx.c
+ifeq ($(CONFIG_RTE_ARCH_X86),y)
+SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_rxtx_vec_sse.c
+endif
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index f8cfbd8..a293058 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -102,9 +102,22 @@ axgbe_dev_interrupt_handler(void *param)
 {
 	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
 	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int dma_isr, dma_ch_isr;
 
 	pdata->phy_if.an_isr(pdata);
-
+	/* DMA related interrupts */
+	dma_isr = AXGMAC_IOREAD(pdata, DMA_ISR);
+	if (dma_isr) {
+		if (dma_isr & 1) {
+			dma_ch_isr =
+				AXGMAC_DMA_IOREAD((struct axgbe_rx_queue *)
+						  pdata->rx_queues[0],
+						  DMA_CH_SR);
+			AXGMAC_DMA_IOWRITE((struct axgbe_rx_queue *)
+					   pdata->rx_queues[0],
+					   DMA_CH_SR, dma_ch_isr);
+		}
+	}
 	/* Enable interrupts since disabled after generation */
 	rte_intr_enable(&pdata->pci_dev->intr_handle);
 }
@@ -166,6 +179,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* phy start */
 	pdata->phy_if.phy_start(pdata);
+	axgbe_dev_enable_tx(dev);
+	axgbe_dev_enable_rx(dev);
 
 	axgbe_clear_bit(AXGBE_STOPPED, &pdata->dev_state);
 	axgbe_clear_bit(AXGBE_DOWN, &pdata->dev_state);
@@ -185,6 +200,8 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
 		return;
 
 	axgbe_set_bit(AXGBE_STOPPED, &pdata->dev_state);
+	axgbe_dev_disable_tx(dev);
+	axgbe_dev_disable_rx(dev);
 	pdata->phy_if.phy_stop(pdata);
 	pdata->hw_if.exit(pdata);
@@ -423,6 +440,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
 	int ret;
 
 	eth_dev->dev_ops = &axgbe_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &axgbe_recv_pkts;
 
 	/*
 	 * For secondary processes, we don't initialise any further as primary
@@ -573,6 +591,8 @@ eth_axgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->mac_addrs);
 	eth_dev->data->mac_addrs = NULL;
 	eth_dev->dev_ops = NULL;
+	eth_dev->rx_pkt_burst = NULL;
+	eth_dev->tx_pkt_burst = NULL;
 	axgbe_dev_clear_queues(eth_dev);
 
 	/* disable uio intr before callback unregister */
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 1dff7c8..e96e2be 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -113,6 +113,197 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }
 
+static void axgbe_prepare_rx_stop(struct axgbe_port *pdata,
+				  unsigned int queue)
+{
+	unsigned int rx_status;
+	unsigned long rx_timeout;
+
+	/* The Rx engine cannot be stopped if it is actively processing
+	 * packets. Wait for the Rx queue to empty the Rx fifo. Don't
+	 * wait forever though...
+	 */
+	rx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT *
+					       rte_get_timer_hz());
+
+	while (time_before(rte_get_timer_cycles(), rx_timeout)) {
+		rx_status = AXGMAC_MTL_IOREAD(pdata, queue, MTL_Q_RQDR);
+		if ((AXGMAC_GET_BITS(rx_status, MTL_Q_RQDR, PRXQ) == 0) &&
+		    (AXGMAC_GET_BITS(rx_status, MTL_Q_RQDR, RXQSTS) == 0))
+			break;
+
+		rte_delay_us(900);
+	}
+
+	if (!time_before(rte_get_timer_cycles(), rx_timeout))
+		PMD_DRV_LOG(ERR,
+			    "timed out waiting for Rx queue %u to empty\n",
+			    queue);
+}
+
+void axgbe_dev_disable_rx(struct rte_eth_dev *dev)
+{
+	struct axgbe_rx_queue *rxq;
+	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int i;
+
+	/* Disable MAC Rx */
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, DCRCC, 0);
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, CST, 0);
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, ACS, 0);
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, RE, 0);
+
+	/* Prepare for Rx DMA channel stop */
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		axgbe_prepare_rx_stop(pdata, i);
+	}
+	/* Disable each Rx queue */
+	AXGMAC_IOWRITE(pdata, MAC_RQC0R, 0);
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		/* Disable Rx DMA channel */
+		AXGMAC_DMA_IOWRITE_BITS(rxq, DMA_CH_RCR, SR, 0);
+	}
+}
+
+void axgbe_dev_enable_rx(struct rte_eth_dev *dev)
+{
+	struct axgbe_rx_queue *rxq;
+	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int i;
+	unsigned int reg_val = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		/* Enable Rx DMA channel */
+		AXGMAC_DMA_IOWRITE_BITS(rxq, DMA_CH_RCR, SR, 1);
+	}
+
+	reg_val = 0;
+	for (i = 0; i < pdata->rx_q_count; i++)
+		reg_val |= (0x02 << (i << 1));
+	AXGMAC_IOWRITE(pdata, MAC_RQC0R, reg_val);
+
+	/* Enable MAC Rx */
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, DCRCC, 1);
+	/* Frame is forwarded after stripping CRC to application */
+	if (pdata->crc_strip_enable) {
+		AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, CST, 1);
+		AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, ACS, 1);
+	}
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, RE, 1);
+}
+
+/* Rx function one to one refresh */
+uint16_t
+axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
+{
+	PMD_INIT_FUNC_TRACE();
+	uint16_t nb_rx = 0;
+	struct axgbe_rx_queue *rxq = rx_queue;
+	volatile union axgbe_rx_desc *desc;
+	uint64_t old_dirty = rxq->dirty;
+	struct rte_mbuf *mbuf, *tmbuf;
+	unsigned int err;
+	uint32_t error_status;
+	uint16_t idx, pidx, pkt_len;
+
+	idx = AXGBE_GET_DESC_IDX(rxq, rxq->cur);
+	while (nb_rx < nb_pkts) {
+		if (unlikely(idx == rxq->nb_desc))
+			idx = 0;
+
+		desc = &rxq->desc[idx];
+
+		if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, OWN))
+			break;
+		tmbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+		if (unlikely(!tmbuf)) {
+			PMD_DRV_LOG(ERR, "RX mbuf alloc failed port_id = %u"
+				    " queue_id = %u\n",
+				    (unsigned int)rxq->port_id,
+				    (unsigned int)rxq->queue_id);
+			rte_eth_devices[
+				rxq->port_id].data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		pidx = idx + 1;
+		if (unlikely(pidx == rxq->nb_desc))
+			pidx = 0;
+
+		rte_prefetch0(rxq->sw_ring[pidx]);
+		if ((pidx & 0x3) == 0) {
+			rte_prefetch0(&rxq->desc[pidx]);
+			rte_prefetch0(&rxq->sw_ring[pidx]);
+		}
+
+		mbuf = rxq->sw_ring[idx];
+		/* Check for any errors and free mbuf */
+		err = AXGMAC_GET_BITS_LE(desc->write.desc3,
+					 RX_NORMAL_DESC3, ES);
+		error_status = 0;
+		if (unlikely(err)) {
+			error_status = desc->write.desc3 & AXGBE_ERR_STATUS;
+			if ((error_status != AXGBE_L3_CSUM_ERR) &&
+			    (error_status != AXGBE_L4_CSUM_ERR)) {
+				rxq->errors++;
+				rte_pktmbuf_free(mbuf);
+				goto err_set;
+			}
+		}
+		if (rxq->pdata->rx_csum_enable) {
+			mbuf->ol_flags = 0;
+			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
+				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			} else if (
+				unlikely(error_status == AXGBE_L4_CSUM_ERR)) {
+				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			}
+		}
+		rte_prefetch1(rte_pktmbuf_mtod(mbuf, void *));
+		/* Get the RSS hash */
+		if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, RSV))
+			mbuf->hash.rss = rte_le_to_cpu_32(desc->write.desc1);
+		pkt_len = AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3,
+					     PL) - rxq->crc_len;
+		/* Mbuf populate */
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+		mbuf->pkt_len = pkt_len;
+		mbuf->data_len = pkt_len;
+		rxq->bytes += pkt_len;
+		rx_pkts[nb_rx++] = mbuf;
+err_set:
+		rxq->cur++;
+		rxq->sw_ring[idx++] = tmbuf;
+		desc->read.baddr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(tmbuf));
+		memset((void *)(&desc->read.desc2), 0, 8);
+		AXGMAC_SET_BITS_LE(desc->read.desc3, RX_NORMAL_DESC3, OWN, 1);
+		rxq->dirty++;
+	}
+	rxq->pkts += nb_rx;
+	if (rxq->dirty != old_dirty) {
+		rte_wmb();
+		idx = AXGBE_GET_DESC_IDX(rxq, rxq->dirty - 1);
+		AXGMAC_DMA_IOWRITE(rxq, DMA_CH_RDTR_LO,
+				   low32_value(rxq->ring_phys_addr +
+				   (idx * sizeof(union axgbe_rx_desc))));
+	}
+
+	return nb_rx;
+}
+
 /* Tx Apis */
 static void axgbe_tx_queue_release(struct axgbe_tx_queue *tx_queue)
 {
@@ -174,6 +365,10 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	txq->free_thresh = (txq->nb_desc >> 1);
 	txq->free_batch_cnt = txq->free_thresh;
 
+	/* In vector_tx path threshold should be multiple of queue_size */
+	if (txq->nb_desc % txq->free_thresh != 0)
+		txq->vector_disable = 1;
+
 	if ((tx_conf->txq_flags & (uint32_t)ETH_TXQ_FLAGS_NOOFFLOADS) !=
 	    ETH_TXQ_FLAGS_NOOFFLOADS) {
 		txq->vector_disable = 1;
@@ -211,9 +406,247 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (!pdata->tx_queues)
 		pdata->tx_queues = dev->data->tx_queues;
 
+	if (txq->vector_disable)
+		dev->tx_pkt_burst = &axgbe_xmit_pkts;
+	else
+#ifdef RTE_ARCH_X86
+		dev->tx_pkt_burst = &axgbe_xmit_pkts_vec;
+#else
+		dev->tx_pkt_burst = &axgbe_xmit_pkts;
+#endif
+
 	return 0;
 }
 
+static void axgbe_txq_prepare_tx_stop(struct axgbe_port *pdata,
+				      unsigned int queue)
+{
+	unsigned int tx_status;
+	unsigned long tx_timeout;
+
+	/* The Tx engine cannot be stopped if it is actively processing
+	 * packets. Wait for the Tx queue to empty the Tx fifo. Don't
+	 * wait forever though...
+	 */
+	tx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT *
+					       rte_get_timer_hz());
+	while (time_before(rte_get_timer_cycles(), tx_timeout)) {
+		tx_status = AXGMAC_MTL_IOREAD(pdata, queue, MTL_Q_TQDR);
+		if ((AXGMAC_GET_BITS(tx_status, MTL_Q_TQDR, TRCSTS) != 1) &&
+		    (AXGMAC_GET_BITS(tx_status, MTL_Q_TQDR, TXQSTS) == 0))
+			break;
+
+		rte_delay_us(900);
+	}
+
+	if (!time_before(rte_get_timer_cycles(), tx_timeout))
+		PMD_DRV_LOG(ERR,
+			    "timed out waiting for Tx queue %u to empty\n",
+			    queue);
+}
+
+static void axgbe_prepare_tx_stop(struct axgbe_port *pdata,
+				  unsigned int queue)
+{
+	unsigned int tx_dsr, tx_pos, tx_qidx;
+	unsigned int tx_status;
+	unsigned long tx_timeout;
+
+	if (AXGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER) > 0x20)
+		return axgbe_txq_prepare_tx_stop(pdata, queue);
+
+	/* Calculate the status register to read and the position within */
+	if (queue < DMA_DSRX_FIRST_QUEUE) {
+		tx_dsr = DMA_DSR0;
+		tx_pos = (queue * DMA_DSR_Q_WIDTH) + DMA_DSR0_TPS_START;
+	} else {
+		tx_qidx = queue - DMA_DSRX_FIRST_QUEUE;
+
+		tx_dsr = DMA_DSR1 + ((tx_qidx / DMA_DSRX_QPR) * DMA_DSRX_INC);
+		tx_pos = ((tx_qidx % DMA_DSRX_QPR) * DMA_DSR_Q_WIDTH) +
+			DMA_DSRX_TPS_START;
+	}
+
+	/* The Tx engine cannot be stopped if it is actively processing
+	 * descriptors. Wait for the Tx engine to enter the stopped or
+	 * suspended state. Don't wait forever though...
+	 */
+	tx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT *
+					       rte_get_timer_hz());
+	while (time_before(rte_get_timer_cycles(), tx_timeout)) {
+		tx_status = AXGMAC_IOREAD(pdata, tx_dsr);
+		tx_status = GET_BITS(tx_status, tx_pos, DMA_DSR_TPS_WIDTH);
+		if ((tx_status == DMA_TPS_STOPPED) ||
+		    (tx_status == DMA_TPS_SUSPENDED))
+			break;
+
+		rte_delay_us(900);
+	}
+
+	if (!time_before(rte_get_timer_cycles(), tx_timeout))
+		PMD_DRV_LOG(ERR,
+			    "timed out waiting for Tx DMA channel %u to stop\n",
+			    queue);
+}
+
+void axgbe_dev_disable_tx(struct rte_eth_dev *dev)
+{
+	struct axgbe_tx_queue *txq;
+	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int i;
+
+	/* Prepare for stopping DMA channel */
+	for (i = 0; i < pdata->tx_q_count; i++) {
+		txq = dev->data->tx_queues[i];
+		axgbe_prepare_tx_stop(pdata, i);
+	}
+	/* Disable MAC Tx */
+	AXGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 0);
+	/* Disable each Tx queue */
+	for (i = 0; i < pdata->tx_q_count; i++)
+		AXGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN, 0);
+	/* Disable each Tx DMA channel */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		AXGMAC_DMA_IOWRITE_BITS(txq, DMA_CH_TCR, ST, 0);
+	}
+}
+
+void axgbe_dev_enable_tx(struct rte_eth_dev *dev)
+{
+	struct axgbe_tx_queue *txq;
+	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		/* Enable Tx DMA channel */
+		AXGMAC_DMA_IOWRITE_BITS(txq, DMA_CH_TCR, ST, 1);
+	}
+	/* Enable Tx queue */
+	for (i = 0; i < pdata->tx_q_count; i++)
+		AXGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN,
+					MTL_Q_ENABLED);
+	/* Enable MAC Tx */
+	AXGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 1);
+}
+
+/* Free Tx conformed mbufs */
+static void axgbe_xmit_cleanup(struct axgbe_tx_queue *txq)
+{
+	volatile struct axgbe_tx_desc *desc;
+	uint16_t idx;
+
+	idx = AXGBE_GET_DESC_IDX(txq, txq->dirty);
+	while (txq->cur != txq->dirty) {
+		if (unlikely(idx == txq->nb_desc))
+			idx = 0;
+		desc = &txq->desc[idx];
+		/* Check for ownership */
+		if (AXGMAC_GET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN))
+			return;
+		memset((void *)&desc->desc2, 0, 8);
+		/* Free mbuf */
+		rte_pktmbuf_free(txq->sw_ring[idx]);
+		txq->sw_ring[idx++] = NULL;
+		txq->dirty++;
+	}
+}
+
+/* Tx Descriptor formation
+ * Considering each mbuf requires one desc
+ * mbuf is linear
+ */
+static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
+			 struct rte_mbuf *mbuf)
+{
+	volatile struct axgbe_tx_desc *desc;
+	uint16_t idx;
+	uint64_t mask;
+
+	idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+	desc = &txq->desc[idx];
+
+	/* Update buffer address and length */
+	desc->baddr = rte_mbuf_data_iova(mbuf);
+	AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, HL_B1L,
+			   mbuf->pkt_len);
+	/* Total msg length to transmit */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FL,
+			   mbuf->pkt_len);
+	/* Mark it as First and Last Descriptor */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FD, 1);
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, LD, 1);
+	/* Mark it as a NORMAL descriptor */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0);
+	/* configure h/w Offload */
+	mask = mbuf->ol_flags & PKT_TX_L4_MASK;
+	if ((mask == PKT_TX_TCP_CKSUM) || (mask == PKT_TX_UDP_CKSUM))
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x3);
+	else if (mbuf->ol_flags & PKT_TX_IP_CKSUM)
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
+	rte_wmb();
+
+	/* Set OWN bit */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN, 1);
+	rte_wmb();
+
+	/* Save mbuf */
+	txq->sw_ring[idx] = mbuf;
+	/* Update current index */
+	txq->cur++;
+	/* Update stats */
+	txq->bytes += mbuf->pkt_len;
+
+	return 0;
+}
+
+/* Eal supported tx wrapper */
+uint16_t
+axgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(nb_pkts == 0))
+		return nb_pkts;
+
+	struct axgbe_tx_queue *txq;
+	uint16_t nb_desc_free;
+	uint16_t nb_pkt_sent = 0;
+	uint16_t idx;
+	uint32_t tail_addr;
+	struct rte_mbuf *mbuf;
+
+	txq = (struct axgbe_tx_queue *)tx_queue;
+	nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty);
+
+	if (unlikely(nb_desc_free <= txq->free_thresh)) {
+		axgbe_xmit_cleanup(txq);
+		nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty);
+		if (unlikely(nb_desc_free == 0))
+			return 0;
+	}
+	nb_pkts = RTE_MIN(nb_desc_free, nb_pkts);
+	while (nb_pkts--) {
+		mbuf = *tx_pkts++;
+		if (axgbe_xmit_hw(txq, mbuf))
+			goto out;
+		nb_pkt_sent++;
+	}
+out:
+	/* Sync read and write */
+	rte_mb();
+	idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+	tail_addr = low32_value(txq->ring_phys_addr +
+				idx * sizeof(struct axgbe_tx_desc));
+	/* Update tail reg with next immediate address to kick Tx DMA channel */
+	AXGMAC_DMA_IOWRITE(txq, DMA_CH_TDTR_LO, tail_addr);
+	txq->pkts += nb_pkt_sent;
+	return nb_pkt_sent;
+}
+
 void axgbe_dev_clear_queues(struct rte_eth_dev *dev)
 {
 	PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/axgbe/axgbe_rxtx.h b/drivers/net/axgbe/axgbe_rxtx.h
index 1b88d7a..f221cc3 100644
--- a/drivers/net/axgbe/axgbe_rxtx.h
+++ b/drivers/net/axgbe/axgbe_rxtx.h
@@ -156,12 +156,31 @@ void axgbe_dev_tx_queue_release(void *txq);
 int  axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			      uint16_t nb_tx_desc, unsigned int socket_id,
 			      const struct rte_eth_txconf *tx_conf);
+void axgbe_dev_enable_tx(struct rte_eth_dev *dev);
+void axgbe_dev_disable_tx(struct rte_eth_dev *dev);
+int axgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int axgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
+uint16_t axgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts);
+uint16_t axgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+			     uint16_t nb_pkts);
+
 void axgbe_dev_rx_queue_release(void *rxq);
 int  axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 			      uint16_t nb_rx_desc, unsigned int socket_id,
 			      const struct rte_eth_rxconf *rx_conf,
 			      struct rte_mempool *mb_pool);
+void axgbe_dev_enable_rx(struct rte_eth_dev *dev);
+void axgbe_dev_disable_rx(struct rte_eth_dev *dev);
+int axgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int axgbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint16_t axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts);
+uint16_t axgbe_recv_pkts_threshold_refresh(void *rx_queue,
+					   struct rte_mbuf **rx_pkts,
+					   uint16_t nb_pkts);
 void axgbe_dev_clear_queues(struct rte_eth_dev *dev);
 
 #endif /* _AXGBE_RXTX_H_ */
diff --git a/drivers/net/axgbe/axgbe_rxtx_vec_sse.c b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
new file mode 100644
index 0000000..9be7037
--- /dev/null
+++ b/drivers/net/axgbe/axgbe_rxtx_vec_sse.c
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Advanced Micro Devices, Inc. All rights reserved.
+ * Copyright(c) 2018 Synopsys, Inc. All rights reserved.
+ */
+
+#include "axgbe_ethdev.h"
+#include "axgbe_rxtx.h"
+#include "axgbe_phy.h"
+
+#include <rte_time.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+
+/* Useful to avoid shifting for every descriptor preparation */
+#define TX_DESC_CTRL_FLAGS	0xb000000000000000
+#define TX_FREE_BULK		8
+#define TX_FREE_BULK_CHECK	(TX_FREE_BULK - 1)
+
+static inline void
+axgbe_vec_tx(volatile struct axgbe_tx_desc *desc,
+	     struct rte_mbuf *mbuf)
+{
+	__m128i descriptor = _mm_set_epi64x((uint64_t)mbuf->pkt_len << 32 |
+					    TX_DESC_CTRL_FLAGS | mbuf->data_len,
+					    mbuf->buf_iova +
+					    mbuf->data_off);
+	_mm_store_si128((__m128i *)desc, descriptor);
+}
+
+static void
+axgbe_xmit_cleanup_vec(struct axgbe_tx_queue *txq)
+{
+	volatile struct axgbe_tx_desc *desc;
+	int idx, i;
+
+	idx = AXGBE_GET_DESC_IDX(txq, txq->dirty + txq->free_batch_cnt
+				 - 1);
+	desc = &txq->desc[idx];
+	if (desc->desc3 & AXGBE_DESC_OWN)
+		return;
+	/* memset avoided for desc ctrl fields since in vec_tx path
+	 * all 128 bits are populated
+	 */
+	for (i = 0; i < txq->free_batch_cnt; i++, idx--)
+		rte_pktmbuf_free_seg(txq->sw_ring[idx]);
+
+	txq->dirty += txq->free_batch_cnt;
+	txq->nb_desc_free += txq->free_batch_cnt;
+}
+
+uint16_t
+axgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+		    uint16_t nb_pkts)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	struct axgbe_tx_queue *txq;
+	uint16_t idx, nb_commit, loop, i;
+	uint32_t tail_addr;
+
+	txq = (struct axgbe_tx_queue *)tx_queue;
+	if (txq->nb_desc_free < txq->free_thresh) {
+		axgbe_xmit_cleanup_vec(txq);
+		if (unlikely(txq->nb_desc_free == 0))
+			return 0;
+	}
+	nb_pkts = RTE_MIN(txq->nb_desc_free, nb_pkts);
+	nb_commit = nb_pkts;
+	idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+	loop = txq->nb_desc - idx;
+	if (nb_commit >= loop) {
+		for (i = 0; i < loop; ++i, ++idx, ++tx_pkts) {
+			axgbe_vec_tx(&txq->desc[idx], *tx_pkts);
+			txq->sw_ring[idx] = *tx_pkts;
+		}
+		nb_commit -= loop;
+		idx = 0;
+	}
+	for (i = 0; i < nb_commit; ++i, ++idx, ++tx_pkts) {
+		axgbe_vec_tx(&txq->desc[idx], *tx_pkts);
+		txq->sw_ring[idx] = *tx_pkts;
+	}
+	txq->cur += nb_pkts;
+	tail_addr = (uint32_t)(txq->ring_phys_addr +
+			       idx * sizeof(struct axgbe_tx_desc));
+	/* Update tail reg with next immediate address to kick Tx DMA channel */
+	rte_write32(tail_addr, (void *)txq->dma_tail_reg);
+	txq->pkts += nb_pkts;
+	txq->nb_desc_free -= nb_pkts;
+
+	return nb_pkts;
+}
-- 
2.7.4
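[Editor's note] Both burst functions above keep `cur` and `dirty` as free-running counters: the ring slot comes from `AXGBE_GET_DESC_IDX`, and the in-flight descriptor count is simply `cur - dirty`, which stays correct across unsigned wraparound because the ring size is a power of two. A minimal standalone sketch of that bookkeeping (the names `ring_idx`/`ring_free` and the 512-entry ring size are illustrative, not taken from the driver):

```c
#include <stdint.h>

/* Illustrative power-of-two ring size, so the slot index can be
 * derived by masking the free-running counter instead of dividing. */
#define RING_SIZE 512u
#define RING_MASK (RING_SIZE - 1u)

/* Slot occupied by a free-running counter (the role AXGBE_GET_DESC_IDX
 * plays for txq->cur / rxq->dirty in the patch). */
static inline uint16_t ring_idx(uint64_t counter)
{
	return (uint16_t)(counter & RING_MASK);
}

/* Free descriptors: mirrors "txq->nb_desc - (txq->cur - txq->dirty)".
 * The unsigned subtraction gives the in-flight count even after the
 * counters wrap, as long as cur never runs more than RING_SIZE ahead. */
static inline uint16_t ring_free(uint64_t cur, uint64_t dirty)
{
	return (uint16_t)(RING_SIZE - (uint16_t)(cur - dirty));
}
```

This is why the Tx path can cheaply gate cleanup on `nb_desc_free <= free_thresh`: no modulo arithmetic or separate empty/full flag is needed.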