From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ravi Kumar <Ravi1.kumar@amd.com>
To: dev@dpdk.org
Date: Thu, 30 Nov 2017 08:11:06 -0500
Message-Id: <1512047472-118050-10-git-send-email-Ravi1.kumar@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1512047472-118050-1-git-send-email-Ravi1.kumar@amd.com>
References: <1512047472-118050-1-git-send-email-Ravi1.kumar@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 10/16] net/axgbe: add transmit and receive data path APIs

Add the data path for the axgbe PMD: a scalar Rx burst function with
inline descriptor refresh, scalar and vector Tx burst functions, the
MAC/MTL/DMA enable and disable sequences used by dev_start/dev_stop,
and the link_update ethdev operation.

Signed-off-by: Ravi Kumar <Ravi1.kumar@amd.com>
---
 drivers/net/axgbe/Makefile         |   1 +
 drivers/net/axgbe/axgbe_ethdev.c   |  56 ++++-
 drivers/net/axgbe/axgbe_rxtx.c     | 427 +++++++++++++++++++++++++++++++++++++
 drivers/net/axgbe/axgbe_rxtx.h     |  19 ++
 drivers/net/axgbe/axgbe_rxtx_vec.c | 215 +++++++++++++++++++
 5 files changed, 715 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/axgbe/axgbe_rxtx_vec.c

diff --git a/drivers/net/axgbe/Makefile b/drivers/net/axgbe/Makefile
index d030530..8d82d5c 100644
--- a/drivers/net/axgbe/Makefile
+++ b/drivers/net/axgbe/Makefile
@@ -147,5 +147,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_mdio.c
 SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_phy_impl.c
 SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_i2c.c
 SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_AXGBE_PMD) += axgbe_rxtx_vec.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 56a7b4d..dc012a0 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -136,6 +136,8 @@ static int axgbe_dev_configure(struct rte_eth_dev *dev);
 static int  axgbe_dev_start(struct rte_eth_dev *dev);
 static void axgbe_dev_stop(struct rte_eth_dev *dev);
 static void axgbe_dev_close(struct rte_eth_dev *dev);
+static int axgbe_dev_link_update(struct rte_eth_dev *dev,
+				 int wait_to_complete);
 static void axgbe_dev_info_get(struct rte_eth_dev *dev,
 			       struct rte_eth_dev_info *dev_info);
 static void axgbe_dev_interrupt_handler(void *param);
@@ -190,6 +192,7 @@ static const struct eth_dev_ops axgbe_eth_dev_ops = {
 	.dev_start            = axgbe_dev_start,
 	.dev_stop             = axgbe_dev_stop,
 	.dev_close            = axgbe_dev_close,
+	.link_update          = axgbe_dev_link_update,
 	.dev_infos_get        = axgbe_dev_info_get,
 	.rx_queue_setup       = axgbe_dev_rx_queue_setup,
 	.rx_queue_release     = axgbe_dev_rx_queue_release,
@@ -221,9 +224,22 @@ axgbe_dev_interrupt_handler(void *param)
 {
 	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
 	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int dma_isr, dma_ch_isr;
 
 	pdata->phy_if.an_isr(pdata);
-
+	/* DMA related interrupts */
+	dma_isr = AXGMAC_IOREAD(pdata, DMA_ISR);
+	if (dma_isr) {
+		if (dma_isr & 1) {
+			dma_ch_isr =
+				AXGMAC_DMA_IOREAD((struct axgbe_rx_queue *)
+						  pdata->rx_queues[0],
+						  DMA_CH_SR);
+			AXGMAC_DMA_IOWRITE((struct axgbe_rx_queue *)
+					   pdata->rx_queues[0],
+					   DMA_CH_SR, dma_ch_isr);
+		}
+	}
 	rte_intr_enable(&pdata->pci_dev->intr_handle);
 }
 
@@ -283,6 +299,8 @@ axgbe_dev_start(struct rte_eth_dev *dev)
 
 	/* phy start*/
 	pdata->phy_if.phy_start(pdata);
+	axgbe_dev_enable_tx(dev);
+	axgbe_dev_enable_rx(dev);
 
 	axgbe_clear_bit(AXGBE_STOPPED, &pdata->dev_state);
 	axgbe_clear_bit(AXGBE_DOWN, &pdata->dev_state);
@@ -302,6 +320,8 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
 		return;
 
 	axgbe_set_bit(AXGBE_STOPPED, &pdata->dev_state);
+	axgbe_dev_disable_tx(dev);
+	axgbe_dev_disable_rx(dev);
 	pdata->phy_if.phy_stop(pdata);
 	pdata->hw_if.exit(pdata);
 
@@ -316,6 +336,36 @@ axgbe_dev_close(struct rte_eth_dev *dev)
 	axgbe_dev_clear_queues(dev);
 }
 
+/* return 0 means link status changed, -1 means not changed */
+static int
+axgbe_dev_link_update(struct rte_eth_dev *dev,
+		      int wait_to_complete __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	rte_delay_ms(800);
+
+	struct axgbe_port *pdata = dev->data->dev_private;
+	int old_link_status = dev->data->dev_link.link_status;
+
+	pdata->phy_if.phy_status(pdata);
+
+	dev->data->dev_link.link_speed = pdata->phy_speed;
+	switch (pdata->phy.duplex) {
+	case DUPLEX_FULL:
+		dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		break;
+	case DUPLEX_HALF:
+		dev->data->dev_link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		break;
+	}
+	dev->data->dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+					     ETH_LINK_SPEED_FIXED);
+	dev->data->dev_link.link_status = pdata->phy_link;
+
+	return old_link_status == dev->data->dev_link.link_status ? -1 : 0;
+}
+
+
 static void
 axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
@@ -545,8 +595,8 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
 	axgbe_set_bit(AXGBE_STOPPED, &pdata->dev_state);
 	pdata->eth_dev = eth_dev;
 	eth_dev->dev_ops = &axgbe_eth_dev_ops;
-	eth_dev->rx_pkt_burst = NULL;
-	eth_dev->tx_pkt_burst = NULL;
+	eth_dev->rx_pkt_burst = &axgbe_recv_pkts;
+	eth_dev->tx_pkt_burst = &axgbe_xmit_pkts_vec;
 
 	/*
 	 * For secondary processes, we don't initialise any further as primary
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 64065e8..3e92f84 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -235,6 +235,197 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 }
 
+static void axgbe_prepare_rx_stop(struct axgbe_port *pdata,
+				  unsigned int queue)
+{
+	unsigned int rx_status;
+	unsigned long rx_timeout;
+
+	/* The Rx engine cannot be stopped if it is actively processing
+	 * packets. Wait for the Rx queue to empty the Rx fifo. Don't
+	 * wait forever though...
+	 */
+	rx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT *
+					       rte_get_timer_hz());
+
+	while (time_before(rte_get_timer_cycles(), rx_timeout)) {
+		rx_status = AXGMAC_MTL_IOREAD(pdata, queue, MTL_Q_RQDR);
+		if ((AXGMAC_GET_BITS(rx_status, MTL_Q_RQDR, PRXQ) == 0) &&
+		    (AXGMAC_GET_BITS(rx_status, MTL_Q_RQDR, RXQSTS) == 0))
+			break;
+
+		rte_delay_us(900);
+	}
+
+	if (!time_before(rte_get_timer_cycles(), rx_timeout))
+		PMD_DRV_LOG(ERR,
+			    "timed out waiting for Rx queue %u to empty\n",
+			    queue);
+}
+
+void axgbe_dev_disable_rx(struct rte_eth_dev *dev)
+{
+	struct axgbe_rx_queue *rxq;
+	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int i;
+
+	/* Disable MAC Rx */
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, DCRCC, 0);
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, CST, 0);
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, ACS, 0);
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, RE, 0);
+
+	/* Prepare for Rx DMA channel stop */
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		axgbe_prepare_rx_stop(pdata, i);
+	}
+	/* Disable each Rx queue */
+	AXGMAC_IOWRITE(pdata, MAC_RQC0R, 0);
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		/* Disable Rx DMA channel */
+		AXGMAC_DMA_IOWRITE_BITS(rxq, DMA_CH_RCR, SR, 0);
+	}
+}
+
+void axgbe_dev_enable_rx(struct rte_eth_dev *dev)
+{
+	struct axgbe_rx_queue *rxq;
+	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int i;
+	unsigned int reg_val = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		/* Enable Rx DMA channel */
+		AXGMAC_DMA_IOWRITE_BITS(rxq, DMA_CH_RCR, SR, 1);
+	}
+
+	reg_val = 0;
+	for (i = 0; i < pdata->rx_q_count; i++)
+		reg_val |= (0x02 << (i << 1));
+	AXGMAC_IOWRITE(pdata, MAC_RQC0R, reg_val);
+
+	/* Enable MAC Rx */
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, DCRCC, 1);
+	/* Frame is forwarded after stripping CRC to application */
+	if (pdata->crc_strip_enable) {
+		AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, CST, 1);
+		AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, ACS, 1);
+	}
+	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, RE, 1);
+}
+
+/* Rx function one to one refresh */
+uint16_t
+axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
+{
+	PMD_INIT_FUNC_TRACE();
+	uint16_t nb_rx = 0;
+	struct axgbe_rx_queue *rxq = rx_queue;
+	volatile union axgbe_rx_desc *desc;
+	uint64_t old_dirty = rxq->dirty;
+	struct rte_mbuf *mbuf, *tmbuf;
+	unsigned int err;
+	uint32_t error_status;
+	uint16_t idx, pidx, pkt_len;
+
+	idx = AXGBE_GET_DESC_IDX(rxq, rxq->cur);
+	while (nb_rx < nb_pkts) {
+		if (unlikely(idx == rxq->nb_desc))
+			idx = 0;
+
+		desc = &rxq->desc[idx];
+
+		if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, OWN))
+			break;
+		tmbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+		if (unlikely(!tmbuf)) {
+			PMD_DRV_LOG(ERR, "RX mbuf alloc failed port_id = %u"
+				    " queue_id = %u\n",
+				    (unsigned int)rxq->port_id,
+				    (unsigned int)rxq->queue_id);
+			rte_eth_devices[
+				rxq->port_id].data->rx_mbuf_alloc_failed++;
+			break;
+		}
+		pidx = idx + 1;
+		if (unlikely(pidx == rxq->nb_desc))
+			pidx = 0;
+
+		rte_prefetch0(rxq->sw_ring[pidx]);
+		if ((pidx & 0x3) == 0) {
+			rte_prefetch0(&rxq->desc[pidx]);
+			rte_prefetch0(&rxq->sw_ring[pidx]);
+		}
+
+		mbuf = rxq->sw_ring[idx];
+		/* Check for any errors and free mbuf */
+		err = AXGMAC_GET_BITS_LE(desc->write.desc3,
+					 RX_NORMAL_DESC3, ES);
+		error_status = 0;
+		if (unlikely(err)) {
+			error_status = desc->write.desc3 & AXGBE_ERR_STATUS;
+			if ((error_status != AXGBE_L3_CSUM_ERR) &&
+			    (error_status != AXGBE_L4_CSUM_ERR)) {
+				rxq->errors++;
+				rte_pktmbuf_free(mbuf);
+				goto err_set;
+			}
+		}
+		if (rxq->pdata->rx_csum_enable) {
+			mbuf->ol_flags = 0;
+			mbuf->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			mbuf->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+			if (unlikely(error_status == AXGBE_L3_CSUM_ERR)) {
+				mbuf->ol_flags &= ~PKT_RX_IP_CKSUM_GOOD;
+				mbuf->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+			} else if (
+				unlikely(error_status == AXGBE_L4_CSUM_ERR)) {
+				mbuf->ol_flags &= ~PKT_RX_L4_CKSUM_GOOD;
+				mbuf->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+			}
+		}
+		rte_prefetch1(rte_pktmbuf_mtod(mbuf, void *));
+		/* Get the RSS hash */
+		if (AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3, RSV))
+			mbuf->hash.rss = rte_le_to_cpu_32(desc->write.desc1);
+		pkt_len = AXGMAC_GET_BITS_LE(desc->write.desc3, RX_NORMAL_DESC3,
+					     PL) - rxq->crc_len;
+		/* Mbuf populate */
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+		mbuf->pkt_len = pkt_len;
+		mbuf->data_len = pkt_len;
+		rxq->bytes += pkt_len;
+		rx_pkts[nb_rx++] = mbuf;
+err_set:
+		rxq->cur++;
+		rxq->sw_ring[idx++] = tmbuf;
+		desc->read.baddr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(tmbuf));
+		memset((void *)(&desc->read.desc2), 0, 8);
+		AXGMAC_SET_BITS_LE(desc->read.desc3, RX_NORMAL_DESC3, OWN, 1);
+		rxq->dirty++;
+	}
+	rxq->pkts += nb_rx;
+	if (rxq->dirty != old_dirty) {
+		rte_wmb();
+		idx = AXGBE_GET_DESC_IDX(rxq, rxq->dirty - 1);
+		AXGMAC_DMA_IOWRITE(rxq, DMA_CH_RDTR_LO,
+				   low32_value(rxq->ring_phys_addr +
+				   (idx * sizeof(union axgbe_rx_desc))));
+	}
+
+	return nb_rx;
+}
+
 /* Tx Apis */
 static void axgbe_tx_queue_release(struct axgbe_tx_queue *tx_queue)
 {
@@ -296,6 +487,10 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	txq->free_thresh = (txq->nb_desc >> 1);
 	txq->free_batch_cnt = txq->free_thresh;
 
+	/* The vector Tx path needs the queue size to be a multiple of the free threshold */
+	if (txq->nb_desc % txq->free_thresh != 0)
+		txq->vector_disable = 1;
+
 	if ((tx_conf->txq_flags & (uint32_t)ETH_TXQ_FLAGS_NOOFFLOADS) !=
 	    ETH_TXQ_FLAGS_NOOFFLOADS) {
 		txq->vector_disable = 1;
@@ -333,9 +528,241 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (!pdata->tx_queues)
 		pdata->tx_queues = dev->data->tx_queues;
 
+	if (txq->vector_disable)
+		dev->tx_pkt_burst = axgbe_xmit_pkts;
+
 	return 0;
 }
 
+static void axgbe_txq_prepare_tx_stop(struct axgbe_port *pdata,
+				      unsigned int queue)
+{
+	unsigned int tx_status;
+	unsigned long tx_timeout;
+
+	/* The Tx engine cannot be stopped if it is actively processing
+	 * packets. Wait for the Tx queue to empty the Tx fifo. Don't
+	 * wait forever though...
+	 */
+	tx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT *
+					       rte_get_timer_hz());
+	while (time_before(rte_get_timer_cycles(), tx_timeout)) {
+		tx_status = AXGMAC_MTL_IOREAD(pdata, queue, MTL_Q_TQDR);
+		if ((AXGMAC_GET_BITS(tx_status, MTL_Q_TQDR, TRCSTS) != 1) &&
+		    (AXGMAC_GET_BITS(tx_status, MTL_Q_TQDR, TXQSTS) == 0))
+			break;
+
+		rte_delay_us(900);
+	}
+
+	if (!time_before(rte_get_timer_cycles(), tx_timeout))
+		PMD_DRV_LOG(ERR,
+			    "timed out waiting for Tx queue %u to empty\n",
+			    queue);
+}
+
+static void axgbe_prepare_tx_stop(struct axgbe_port *pdata,
+				  unsigned int queue)
+{
+	unsigned int tx_dsr, tx_pos, tx_qidx;
+	unsigned int tx_status;
+	unsigned long tx_timeout;
+
+	if (AXGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER) > 0x20)
+		return axgbe_txq_prepare_tx_stop(pdata, queue);
+
+	/* Calculate the status register to read and the position within */
+	if (queue < DMA_DSRX_FIRST_QUEUE) {
+		tx_dsr = DMA_DSR0;
+		tx_pos = (queue * DMA_DSR_Q_WIDTH) + DMA_DSR0_TPS_START;
+	} else {
+		tx_qidx = queue - DMA_DSRX_FIRST_QUEUE;
+
+		tx_dsr = DMA_DSR1 + ((tx_qidx / DMA_DSRX_QPR) * DMA_DSRX_INC);
+		tx_pos = ((tx_qidx % DMA_DSRX_QPR) * DMA_DSR_Q_WIDTH) +
+			 DMA_DSRX_TPS_START;
+	}
+
+	/* The Tx engine cannot be stopped if it is actively processing
+	 * descriptors. Wait for the Tx engine to enter the stopped or
+	 * suspended state. Don't wait forever though...
+	 */
+	tx_timeout = rte_get_timer_cycles() + (AXGBE_DMA_STOP_TIMEOUT *
+					       rte_get_timer_hz());
+	while (time_before(rte_get_timer_cycles(), tx_timeout)) {
+		tx_status = AXGMAC_IOREAD(pdata, tx_dsr);
+		tx_status = GET_BITS(tx_status, tx_pos, DMA_DSR_TPS_WIDTH);
+		if ((tx_status == DMA_TPS_STOPPED) ||
+		    (tx_status == DMA_TPS_SUSPENDED))
+			break;
+
+		rte_delay_us(900);
+	}
+
+	if (!time_before(rte_get_timer_cycles(), tx_timeout))
+		PMD_DRV_LOG(ERR,
+			    "timed out waiting for Tx DMA channel %u to stop\n",
+			    queue);
+}
+
+void axgbe_dev_disable_tx(struct rte_eth_dev *dev)
+{
+	struct axgbe_tx_queue *txq;
+	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int i;
+
+	/* Prepare for stopping DMA channel */
+	for (i = 0; i < pdata->tx_q_count; i++) {
+		txq = dev->data->tx_queues[i];
+		axgbe_prepare_tx_stop(pdata, i);
+	}
+	/* Disable MAC Tx */
+	AXGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 0);
+	/* Disable each Tx queue */
+	for (i = 0; i < pdata->tx_q_count; i++)
+		AXGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN,
+					0);
+	/* Disable each Tx DMA channel */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		AXGMAC_DMA_IOWRITE_BITS(txq, DMA_CH_TCR, ST, 0);
+	}
+}
+
+void axgbe_dev_enable_tx(struct rte_eth_dev *dev)
+{
+	struct axgbe_tx_queue *txq;
+	struct axgbe_port *pdata = dev->data->dev_private;
+	unsigned int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		/* Enable Tx DMA channel */
+		AXGMAC_DMA_IOWRITE_BITS(txq, DMA_CH_TCR, ST, 1);
+	}
+	/* Enable Tx queue */
+	for (i = 0; i < pdata->tx_q_count; i++)
+		AXGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_TQOMR, TXQEN,
+					MTL_Q_ENABLED);
+	/* Enable MAC Tx */
+	AXGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 1);
+}
+
+/* Free Tx conformed mbufs */
+static void axgbe_xmit_cleanup(struct axgbe_tx_queue *txq)
+{
+	volatile struct axgbe_tx_desc *desc;
+	uint16_t idx;
+
+	idx = AXGBE_GET_DESC_IDX(txq, txq->dirty);
+	while (txq->cur != txq->dirty) {
+		if (unlikely(idx == txq->nb_desc))
+			idx = 0;
+		desc = &txq->desc[idx];
+		/* Check for ownership */
+		if (AXGMAC_GET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN))
+			return;
+		memset((void *)&desc->desc2, 0, 8);
+		/* Free mbuf */
+		rte_pktmbuf_free(txq->sw_ring[idx]);
+		txq->sw_ring[idx++] = NULL;
+		txq->dirty++;
+	}
+}
+
+/* Tx descriptor formation
+ * Considering each mbuf requires one desc
+ * mbuf is linear
+ */
+static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
+			 struct rte_mbuf *mbuf)
+{
+	volatile struct axgbe_tx_desc *desc;
+	uint16_t idx;
+	uint64_t mask;
+
+	idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+	desc = &txq->desc[idx];
+
+	/* Update buffer address and length */
+	desc->baddr = rte_mbuf_data_iova(mbuf);
+	AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, HL_B1L,
+			   mbuf->pkt_len);
+	/* Total msg length to transmit */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FL,
+			   mbuf->pkt_len);
+	/* Mark it as First and Last Descriptor */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FD, 1);
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, LD, 1);
+	/* Mark it as a NORMAL descriptor */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0);
+	/* Configure h/w offload */
+	mask = mbuf->ol_flags & PKT_TX_L4_MASK;
+	if ((mask == PKT_TX_TCP_CKSUM) || (mask == PKT_TX_UDP_CKSUM))
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x3);
+	else if (mbuf->ol_flags & PKT_TX_IP_CKSUM)
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
+	rte_wmb();
+
+	/* Set OWN bit */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN, 1);
+	rte_wmb();
+
+	/* Save mbuf */
+	txq->sw_ring[idx] = mbuf;
+	/* Update current index */
+	txq->cur++;
+	/* Update stats */
+	txq->bytes += mbuf->pkt_len;
+
+	return 0;
+}
+
+/* EAL-facing Tx burst wrapper */
+uint16_t
+axgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(nb_pkts == 0))
+		return nb_pkts;
+
+	struct axgbe_tx_queue *txq;
+	uint16_t nb_desc_free;
+	uint16_t nb_pkt_sent = 0;
+	uint16_t idx;
+	uint32_t tail_addr;
+	struct rte_mbuf *mbuf;
+
+	txq = (struct axgbe_tx_queue *)tx_queue;
+	nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty);
+
+	if (unlikely(nb_desc_free <= txq->free_thresh)) {
+		axgbe_xmit_cleanup(txq);
+		nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty);
+		if (unlikely(nb_desc_free == 0))
+			return 0;
+	}
+	nb_pkts = RTE_MIN(nb_desc_free, nb_pkts);
+	while (nb_pkts--) {
+		mbuf = *tx_pkts++;
+		if (axgbe_xmit_hw(txq, mbuf))
+			goto out;
+		nb_pkt_sent++;
+	}
+out:
+	/* Sync read and write */
+	rte_mb();
+	idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+	tail_addr = low32_value(txq->ring_phys_addr +
+				idx * sizeof(struct axgbe_tx_desc));
+	/* Update tail reg with next immediate address to kick Tx DMA channel */
+	AXGMAC_DMA_IOWRITE(txq, DMA_CH_TDTR_LO, tail_addr);
+	txq->pkts += nb_pkt_sent;
+	return nb_pkt_sent;
+}
+
 void axgbe_dev_clear_queues(struct rte_eth_dev *dev)
 {
 	PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/axgbe/axgbe_rxtx.h b/drivers/net/axgbe/axgbe_rxtx.h
index cd7cf8b..89bf9bc 100644
--- a/drivers/net/axgbe/axgbe_rxtx.h
+++ b/drivers/net/axgbe/axgbe_rxtx.h
@@ -272,12 +272,31 @@ void axgbe_dev_tx_queue_release(void *txq);
 int  axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 			      uint16_t nb_tx_desc, unsigned int socket_id,
 			      const struct rte_eth_txconf *tx_conf);
+void axgbe_dev_enable_tx(struct rte_eth_dev *dev);
+void axgbe_dev_disable_tx(struct rte_eth_dev *dev);
+int axgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int axgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
+uint16_t axgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			 uint16_t nb_pkts);
+uint16_t axgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+			     uint16_t nb_pkts);
+
 void axgbe_dev_rx_queue_release(void *rxq);
 int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 			     uint16_t nb_rx_desc, unsigned int socket_id,
 			     const struct rte_eth_rxconf *rx_conf,
 			     struct rte_mempool *mb_pool);
+void axgbe_dev_enable_rx(struct rte_eth_dev *dev);
+void axgbe_dev_disable_rx(struct rte_eth_dev *dev);
+int axgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int axgbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+uint16_t axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts);
+uint16_t axgbe_recv_pkts_threshold_refresh(void *rx_queue,
+					   struct rte_mbuf **rx_pkts,
+					   uint16_t nb_pkts);
 
 void axgbe_dev_clear_queues(struct rte_eth_dev *dev);
 
 #endif /* _AXGBE_RXTX_H_ */
diff --git a/drivers/net/axgbe/axgbe_rxtx_vec.c b/drivers/net/axgbe/axgbe_rxtx_vec.c
new file mode 100644
index 0000000..c2bd5da
--- /dev/null
+++ b/drivers/net/axgbe/axgbe_rxtx_vec.c
@@ -0,0 +1,215 @@
+/*-
+ *   Copyright(c) 2017 Advanced Micro Devices, Inc.
+ *   All rights reserved.
+ *
+ *   AMD 10Gb Ethernet driver
+ *
+ *   This file is available to you under your choice of the following two
+ *   licenses:
+ *
+ *   License 1: GPLv2
+ *
+ *   Copyright (c) 2017 Advanced Micro Devices, Inc.
+ *
+ *   This file is free software; you may copy, redistribute and/or modify
+ *   it under the terms of the GNU General Public License as published by
+ *   the Free Software Foundation, either version 2 of the License, or
+ *   (at your option) any later version.
+ *
+ *   This file is distributed in the hope that it will be useful, but
+ *   WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ *   General Public License for more details.
+ *
+ *   You should have received a copy of the GNU General Public License
+ *   along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ *   This file incorporates work covered by the following copyright and
+ *   permission notice:
+ *
+ *   Copyright (c) 2013 Synopsys, Inc.
+ *
+ *   The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *   (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *   Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *   and you.
+ *
+ *   The Software IS NOT an item of Licensed Software or Licensed Product
+ *   under any End User Software License Agreement or Agreement for Licensed
+ *   Product with Synopsys or any supplement thereto. Permission is hereby
+ *   granted, free of charge, to any person obtaining a copy of this software
+ *   annotated with this license and the Software, to deal in the Software
+ *   without restriction, including without limitation the rights to use,
+ *   copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *   of the Software, and to permit persons to whom the Software is furnished
+ *   to do so, subject to the following conditions:
+ *
+ *   The above copyright notice and this permission notice shall be included
+ *   in all copies or substantial portions of the Software.
+ *
+ *   THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *   BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *   TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *   PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *   BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *   CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *   SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *   INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *   CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *   THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *   License 2: Modified BSD
+ *
+ *   Copyright (c) 2017 Advanced Micro Devices, Inc.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Advanced Micro Devices, Inc. nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT
+ *   HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *   This file incorporates work covered by the following copyright and
+ *   permission notice:
+ *
+ *   Copyright (c) 2013 Synopsys, Inc.
+ *
+ *   The Synopsys DWC ETHER XGMAC Software Driver and documentation
+ *   (hereinafter "Software") is an unsupported proprietary work of Synopsys,
+ *   Inc. unless otherwise expressly agreed to in writing between Synopsys
+ *   and you.
+ *
+ *   The Software IS NOT an item of Licensed Software or Licensed Product
+ *   under any End User Software License Agreement or Agreement for Licensed
+ *   Product with Synopsys or any supplement thereto. Permission is hereby
+ *   granted, free of charge, to any person obtaining a copy of this software
+ *   annotated with this license and the Software, to deal in the Software
+ *   without restriction, including without limitation the rights to use,
+ *   copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+ *   of the Software, and to permit persons to whom the Software is furnished
+ *   to do so, subject to the following conditions:
+ *
+ *   The above copyright notice and this permission notice shall be included
+ *   in all copies or substantial portions of the Software.
+ *
+ *   THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
+ *   BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ *   TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
+ *   PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS
+ *   BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ *   CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ *   SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ *   INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ *   CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ *   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ *   THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "axgbe_ethdev.h"
+#include "axgbe_rxtx.h"
+#include "axgbe_phy.h"
+
+#include <rte_time.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+
+/* Useful to avoid shifting for every descriptor preparation */
+#define TX_DESC_CTRL_FLAGS	0xb000000000000000
+#define TX_FREE_BULK		8
+#define TX_FREE_BULK_CHECK	(TX_FREE_BULK - 1)
+
+static inline void
+axgbe_vec_tx(volatile struct axgbe_tx_desc *desc,
+	     struct rte_mbuf *mbuf)
+{
+	__m128i descriptor = _mm_set_epi64x((uint64_t)mbuf->pkt_len << 32 |
+					    TX_DESC_CTRL_FLAGS | mbuf->data_len,
+					    mbuf->buf_iova + mbuf->data_off);
+	_mm_store_si128((__m128i *)desc, descriptor);
+}
+
+static void
+axgbe_xmit_cleanup_vec(struct axgbe_tx_queue *txq)
+{
+	volatile struct axgbe_tx_desc *desc;
+	int idx, i;
+
+	idx = AXGBE_GET_DESC_IDX(txq, txq->dirty + txq->free_batch_cnt
+				 - 1);
+	desc = &txq->desc[idx];
+	if (desc->desc3 & AXGBE_DESC_OWN)
+		return;
+	/* memset avoided for desc ctrl fields since in vec_tx path
+	 * all 128 bits are populated
+	 */
+	for (i = 0; i < txq->free_batch_cnt; i++, idx--)
+		rte_pktmbuf_free_seg(txq->sw_ring[idx]);
+
+	txq->dirty += txq->free_batch_cnt;
+	txq->nb_desc_free += txq->free_batch_cnt;
+}
+
+uint16_t
+axgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
+		    uint16_t nb_pkts)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	struct axgbe_tx_queue *txq;
+	uint16_t idx, nb_commit, loop, i;
+	uint32_t tail_addr;
+
+	txq = (struct axgbe_tx_queue *)tx_queue;
+	if (txq->nb_desc_free < txq->free_thresh) {
+		axgbe_xmit_cleanup_vec(txq);
+		if (unlikely(txq->nb_desc_free == 0))
+			return 0;
+	}
+	nb_pkts = RTE_MIN(txq->nb_desc_free, nb_pkts);
+	nb_commit = nb_pkts;
+	idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+	loop = txq->nb_desc - idx;
+	if (nb_commit >= loop) {
+		for (i = 0; i < loop; ++i, ++idx, ++tx_pkts) {
+			axgbe_vec_tx(&txq->desc[idx], *tx_pkts);
+			txq->sw_ring[idx] = *tx_pkts;
+		}
+		nb_commit -= loop;
+		idx = 0;
+	}
+	for (i = 0; i < nb_commit; ++i, ++idx, ++tx_pkts) {
+		axgbe_vec_tx(&txq->desc[idx], *tx_pkts);
+		txq->sw_ring[idx] = *tx_pkts;
+	}
+	txq->cur += nb_pkts;
+	tail_addr = (uint32_t)(txq->ring_phys_addr +
+			       idx * sizeof(struct axgbe_tx_desc));
+	/* Update tail reg with next immediate address to kick Tx DMA channel */
+	rte_write32(tail_addr, (void *)txq->dma_tail_reg);
+	txq->pkts += nb_pkts;
+	txq->nb_desc_free -= nb_pkts;
+
+	return nb_pkts;
+}
-- 
2.7.4