From: Igor Russkikh <igor.russkikh@aquantia.com>
To: dev@dpdk.org
Cc: Pavel Belous, Igor Russkikh, ferruh.yigit@intel.com
Date: Tue, 9 Oct 2018 09:31:44 +0000
Message-ID: <06f49e1c1122d2ce70fced056405df9c39a4b12c.1539075891.git.igor.russkikh@aquantia.com>
Subject: [dpdk-dev] [PATCH v4 09/22] net/atlantic: RX side structures and implementation

Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: Pavel Belous
---
 drivers/net/atlantic/Makefile     |   2 +-
 drivers/net/atlantic/atl_ethdev.c |  72 ++++-
 drivers/net/atlantic/atl_ethdev.h |  19 ++
 drivers/net/atlantic/atl_rxtx.c   | 622 +++++++++++++++++++++++++++++++++-
 4 files changed, 707 insertions(+), 8 deletions(-)

diff --git a/drivers/net/atlantic/Makefile b/drivers/net/atlantic/Makefile
index b88da362146d..62dcdbffa69c 100644
--- a/drivers/net/atlantic/Makefile
+++ b/drivers/net/atlantic/Makefile
@@ -15,7 +15,7 @@ EXPORT_MAP := rte_pmd_atlantic_version.map
 
 LIBABIVER := 1
 
-LDLIBS += -lrte_eal
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
 LDLIBS += -lrte_ethdev -lrte_net
 LDLIBS += -lrte_bus_pci
 
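The new LDLIBS entries come from the Rx path added below: the queue code
allocates mbufs straight from an application-provided pool, and the mbuf
pool API in librte_mbuf is backed by librte_mempool and librte_ring. A
minimal sketch of such a pool; the name and sizes are illustrative and
not part of the patch:

#include <rte_mbuf.h>
#include <rte_lcore.h>

static struct rte_mempool *
example_rx_pool(void)
{
	/* 8192 mbufs, 256 per-lcore cache, default 2176-byte data room */
	return rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE,
				       rte_socket_id());
}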
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 3f7a3d23d6bb..53f687b5f748 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -27,6 +27,7 @@ static int atl_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
 static void atl_dev_info_get(struct rte_eth_dev *dev,
			     struct rte_eth_dev_info *dev_info);
 
+static const uint32_t *atl_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 
 static int eth_atl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
	struct rte_pci_device *pci_dev);
@@ -75,6 +76,18 @@ static struct rte_pci_driver rte_atl_pmd = {
	.remove = eth_atl_pci_remove,
 };
 
+#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
+			| DEV_RX_OFFLOAD_IPV4_CKSUM \
+			| DEV_RX_OFFLOAD_UDP_CKSUM \
+			| DEV_RX_OFFLOAD_TCP_CKSUM \
+			| DEV_RX_OFFLOAD_JUMBO_FRAME)
+
+static const struct rte_eth_desc_lim rx_desc_lim = {
+	.nb_max = ATL_MAX_RING_DESC,
+	.nb_min = ATL_MIN_RING_DESC,
+	.nb_align = ATL_RXD_ALIGN,
+};
+
 static const struct eth_dev_ops atl_eth_dev_ops = {
	.dev_configure = atl_dev_configure,
	.dev_start = atl_dev_start,
@@ -84,6 +97,13 @@ static const struct eth_dev_ops atl_eth_dev_ops = {
 
	.fw_version_get = atl_fw_version_get,
	.dev_infos_get = atl_dev_info_get,
+	.dev_supported_ptypes_get = atl_dev_supported_ptypes_get,
+
+	/* Queue Control */
+	.rx_queue_start = atl_rx_queue_start,
+	.rx_queue_stop = atl_rx_queue_stop,
+	.rx_queue_setup = atl_rx_queue_setup,
+	.rx_queue_release = atl_rx_queue_release,
 };
 
 static inline int32_t
@@ -239,12 +259,19 @@ atl_dev_start(struct rte_eth_dev *dev)
		goto error;
	}
 
+	err = atl_start_queues(dev);
+	if (err < 0) {
+		PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
+		goto error;
+	}
+
	atl_print_adapter_info(hw);
 
	return 0;
 
 error:
	PMD_INIT_LOG(ERR, "failure in atl_dev_start(): %d", err);
+	atl_stop_queues(dev);
	return -EIO;
 }
 
@@ -261,6 +288,12 @@ atl_dev_stop(struct rte_eth_dev *dev)
	atl_reset_hw(hw);
	hw->adapter_stopped = 0;
 
+	atl_stop_queues(dev);
+
+	/* Clear stored conf */
+	dev->data->scattered_rx = 0;
+	dev->data->lro = 0;
+
 }
 
 /*
@@ -277,6 +310,8 @@ atl_dev_close(struct rte_eth_dev *dev)
 
	atl_dev_stop(dev);
	hw->adapter_stopped = 1;
+
+	atl_free_queues(dev);
 }
 
 static int
@@ -320,14 +355,47 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	dev_info->max_rx_queues = 0;
-	dev_info->max_rx_queues = 0;
+	dev_info->max_rx_queues = AQ_HW_MAX_RX_QUEUES;
+	dev_info->max_tx_queues = AQ_HW_MAX_TX_QUEUES;
 
+	dev_info->min_rx_bufsize = 1024;
+	dev_info->max_rx_pktlen = HW_ATL_B0_MTU_JUMBO;
+	dev_info->max_mac_addrs = HW_ATL_B0_MAC_MAX;
	dev_info->max_vfs = pci_dev->max_vfs;
 
	dev_info->max_hash_mac_addrs = 0;
	dev_info->max_vmdq_pools = 0;
	dev_info->vmdq_queue_num = 0;
+
+	dev_info->rx_offload_capa = ATL_RX_OFFLOADS;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = ATL_DEFAULT_RX_FREE_THRESH,
+	};
+
+	dev_info->rx_desc_lim = rx_desc_lim;
+}
+
+static const uint32_t *
+atl_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_ARP,
+		RTE_PTYPE_L2_ETHER_VLAN,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	if (dev->rx_pkt_burst == atl_recv_pkts)
+		return ptypes;
+
+	return NULL;
 }
 
 RTE_PMD_REGISTER_PCI(net_atlantic, rte_atl_pmd);
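For reference, an application consumes the capabilities advertised above
through the standard ethdev API; check_rx_caps() and its policy are
hypothetical, the calls themselves are standard:

#include <rte_ethdev.h>

static int
check_rx_caps(uint16_t port_id)
{
	struct rte_eth_dev_info info;

	rte_eth_dev_info_get(port_id, &info);
	if (!(info.rx_offload_capa & DEV_RX_OFFLOAD_IPV4_CKSUM))
		return -1;

	/* counts entries up to the RTE_PTYPE_UNKNOWN terminator above */
	return rte_eth_dev_get_supported_ptypes(port_id,
			RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK |
			RTE_PTYPE_L4_MASK, NULL, 0);
}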
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index 990e8e4e9978..a9a9fc8fe7ff 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -13,6 +13,10 @@
 #define ATL_DEV_PRIVATE_TO_HW(adapter) \
	(&((struct atl_adapter *)adapter)->hw)
 
+#define ATL_DEV_TO_ADAPTER(dev) \
+	((struct atl_adapter *)(dev)->data->dev_private)
+
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -24,9 +28,24 @@ struct atl_adapter {
 /*
  * RX/TX function prototypes
  */
+void atl_rx_queue_release(void *rxq);
+
+int atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		uint16_t nb_rx_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mb_pool);
+
 int atl_rx_init(struct rte_eth_dev *dev);
 int atl_tx_init(struct rte_eth_dev *dev);
 
+int atl_start_queues(struct rte_eth_dev *dev);
+int atl_stop_queues(struct rte_eth_dev *dev);
+void atl_free_queues(struct rte_eth_dev *dev);
+
+int atl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int atl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+
 uint16_t atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
		uint16_t nb_pkts);
 
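These prototypes are wired into eth_dev_ops in the previous file, so the
standard ethdev calls reach them: rte_eth_rx_queue_setup() lands in
atl_rx_queue_setup(), rte_eth_dev_rx_queue_start()/stop() in
atl_rx_queue_start()/atl_rx_queue_stop(), and rte_eth_dev_start() drives
atl_start_queues(). A sketch of the application side (the port is assumed
already configured; 1024 descriptors is an arbitrary value within
rx_desc_lim):

#include <rte_ethdev.h>

static int
setup_one_rx_queue(uint16_t port_id, uint16_t queue_id,
		   struct rte_mempool *mb_pool)
{
	/* NULL rx_conf selects the driver's default_rxconf */
	return rte_eth_rx_queue_setup(port_id, queue_id, 1024,
				      rte_eth_dev_socket_id(port_id),
				      NULL, mb_pool);
}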
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 0fbd93038075..db45ad916cfd 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -2,7 +2,146 @@
  * Copyright(c) 2018 Aquantia Corporation
  */
 
+#include <rte_malloc.h>
+#include <rte_ethdev_driver.h>
+
 #include "atl_ethdev.h"
+#include "atl_hw_regs.h"
+
+#include "atl_logs.h"
+#include "hw_atl/hw_atl_llh.h"
+#include "hw_atl/hw_atl_b0.h"
+#include "hw_atl/hw_atl_b0_internal.h"
+
+/**
+ * Structure associated with each descriptor of the RX ring of a RX queue.
+ */
+struct atl_rx_entry {
+	struct rte_mbuf *mbuf;
+};
+
+/**
+ * Structure associated with each RX queue.
+ */
+struct atl_rx_queue {
+	struct rte_mempool *mb_pool;
+	struct hw_atl_rxd_s *hw_ring;
+	uint64_t hw_ring_phys_addr;
+	struct atl_rx_entry *sw_ring;
+	uint16_t nb_rx_desc;
+	uint16_t rx_tail;
+	uint16_t nb_rx_hold;
+	uint16_t rx_free_thresh;
+	uint16_t queue_id;
+	uint16_t port_id;
+	uint16_t buff_size;
+	bool l3_csum_enabled;
+	bool l4_csum_enabled;
+};
+
+static inline void
+atl_reset_rx_queue(struct atl_rx_queue *rxq)
+{
+	struct hw_atl_rxd_s *rxd = NULL;
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		rxd = (struct hw_atl_rxd_s *)&rxq->hw_ring[i];
+		rxd->buf_addr = 0;
+		rxd->hdr_addr = 0;
+	}
+
+	rxq->rx_tail = 0;
+}
+
+int
+atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		   uint16_t nb_rx_desc, unsigned int socket_id,
+		   const struct rte_eth_rxconf *rx_conf,
+		   struct rte_mempool *mb_pool)
+{
+	struct atl_rx_queue *rxq;
+	const struct rte_memzone *mz;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* make sure a valid number of descriptors have been requested */
+	if (nb_rx_desc < AQ_HW_MIN_RX_RING_SIZE ||
+	    nb_rx_desc > AQ_HW_MAX_RX_RING_SIZE) {
+		PMD_INIT_LOG(ERR, "Number of Rx descriptors must be "
+			"less than or equal to %d, "
+			"greater than or equal to %d", AQ_HW_MAX_RX_RING_SIZE,
+			AQ_HW_MIN_RX_RING_SIZE);
+		return -EINVAL;
+	}
+
+	/*
+	 * if this queue existed already, free the associated memory. The
+	 * queue cannot be reused in case we need to allocate memory on
+	 * different socket than was previously used.
+	 */
+	if (dev->data->rx_queues[rx_queue_id] != NULL) {
+		atl_rx_queue_release(dev->data->rx_queues[rx_queue_id]);
+		dev->data->rx_queues[rx_queue_id] = NULL;
+	}
+
+	/* allocate memory for the queue structure */
+	rxq = rte_zmalloc_socket("atlantic Rx queue", sizeof(*rxq),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate queue structure");
+		return -ENOMEM;
+	}
+
+	/* setup queue */
+	rxq->mb_pool = mb_pool;
+	rxq->nb_rx_desc = nb_rx_desc;
+	rxq->port_id = dev->data->port_id;
+	rxq->queue_id = rx_queue_id;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+
+	rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
+		DEV_RX_OFFLOAD_IPV4_CKSUM;
+	rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
+		(DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
+
+	/* allocate memory for the software ring */
+	rxq->sw_ring = rte_zmalloc_socket("atlantic sw rx ring",
+				nb_rx_desc * sizeof(struct atl_rx_entry),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate software ring");
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+
+	/*
+	 * allocate memory for the hardware descriptor ring. A memzone large
+	 * enough to hold the maximum ring size is requested to allow for
+	 * resizing in later calls to the queue setup function.
+	 */
+	mz = rte_eth_dma_zone_reserve(dev, "rx hw_ring", rx_queue_id,
+				      HW_ATL_B0_MAX_RXD *
+				      sizeof(struct hw_atl_rxd_s),
+				      128, socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate hardware ring");
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+		return -ENOMEM;
+	}
+	rxq->hw_ring = mz->addr;
+	rxq->hw_ring_phys_addr = mz->iova;
+
+	atl_reset_rx_queue(rxq);
+
+	dev->data->rx_queues[rx_queue_id] = rxq;
+	return 0;
+}
 
 int
 atl_tx_init(struct rte_eth_dev *eth_dev __rte_unused)
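The setup above only reserves memory; buffer sizing happens in
atl_rx_init() in the next hunk. A worked example of that logic, assuming
the default mbuf layout (2176-byte data room, 128-byte headroom):
2176 - 128 = 2048 and RTE_ALIGN_FLOOR(2048, 1024) = 2048, so the whole
data room is used. A pool with a 1600-byte data room floors to 1024
(448 bytes idle per segment), and anything below 1152 fails the
buff_size < 1024 check:

#include <rte_mbuf.h>

static int
example_rx_buff_size(struct rte_mempool *mb_pool)
{
	int buff_size = rte_pktmbuf_data_room_size(mb_pool) -
			RTE_PKTMBUF_HEADROOM;

	/* the 1024-byte granularity matches what the hardware
	 * ring-init path expects */
	return RTE_ALIGN_FLOOR(buff_size, 1024);
}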
@@ -11,11 +150,175 @@ atl_tx_init(struct rte_eth_dev *eth_dev __rte_unused)
 {
	return 0;
 }
 
 int
-atl_rx_init(struct rte_eth_dev *eth_dev __rte_unused)
+atl_rx_init(struct rte_eth_dev *eth_dev)
 {
+	struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
+	struct atl_rx_queue *rxq;
+	uint64_t base_addr = 0;
+	int i = 0;
+	int err = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		base_addr = rxq->hw_ring_phys_addr;
+
+		/* Take requested pool mbuf size and adapt
+		 * descriptor buffer to best fit
+		 */
+		int buff_size = rte_pktmbuf_data_room_size(rxq->mb_pool) -
+				RTE_PKTMBUF_HEADROOM;
+
+		buff_size = RTE_ALIGN_FLOOR(buff_size, 1024);
+		if (buff_size > HW_ATL_B0_RXD_BUF_SIZE_MAX) {
+			PMD_INIT_LOG(WARNING,
+				"queue %d: mem pool buff size is too big\n",
+				rxq->queue_id);
+			buff_size = HW_ATL_B0_RXD_BUF_SIZE_MAX;
+		}
+		if (buff_size < 1024) {
+			PMD_INIT_LOG(ERR,
+				"queue %d: mem pool buff size is too small\n",
+				rxq->queue_id);
+			return -EINVAL;
+		}
+		rxq->buff_size = buff_size;
+
+		err = hw_atl_b0_hw_ring_rx_init(hw, base_addr, rxq->queue_id,
+						rxq->nb_rx_desc, buff_size, 0,
+						rxq->port_id);
+
+		if (err) {
+			PMD_INIT_LOG(ERR, "Cannot init RX queue %d",
+				     rxq->queue_id);
+			break;
+		}
+	}
+
+	return err;
+}
+
+static int
+atl_alloc_rx_queue_mbufs(struct atl_rx_queue *rxq)
+{
+	struct atl_rx_entry *rx_entry = rxq->sw_ring;
+	struct hw_atl_rxd_s *rxd;
+	uint64_t dma_addr = 0;
+	uint32_t i = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* fill Rx ring */
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "mbuf alloc failed for rx queue %u",
+				     (unsigned int)rxq->queue_id);
+			return -ENOMEM;
+		}
+
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->port = rxq->port_id;
+
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+		rxd = (struct hw_atl_rxd_s *)&rxq->hw_ring[i];
+		rxd->buf_addr = dma_addr;
+		rxd->hdr_addr = 0;
+		rx_entry[i].mbuf = mbuf;
+	}
+
	return 0;
 }
 
+static void
+atl_rx_queue_release_mbufs(struct atl_rx_queue *rxq)
+{
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rxq->sw_ring != NULL) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i].mbuf != NULL) {
+				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+				rxq->sw_ring[i].mbuf = NULL;
+			}
+		}
+	}
+}
+
+int
+atl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct atl_rx_queue *rxq = NULL;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		if (atl_alloc_rx_queue_mbufs(rxq) != 0) {
+			PMD_INIT_LOG(ERR, "Allocate mbufs for queue %d failed",
+				     rx_queue_id);
+			return -1;
+		}
+
+		hw_atl_b0_hw_ring_rx_start(hw, rx_queue_id);
+
+		rte_wmb();
+		hw_atl_reg_rx_dma_desc_tail_ptr_set(hw, rxq->nb_rx_desc - 1,
+						    rx_queue_id);
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	} else {
+		return -1;
+	}
+
+	return 0;
+}
+
+int
+atl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct atl_rx_queue *rxq = NULL;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rx_queue_id];
+
+		hw_atl_b0_hw_ring_rx_stop(hw, rx_queue_id);
+
+		atl_rx_queue_release_mbufs(rxq);
+		atl_reset_rx_queue(rxq);
+
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	} else {
+		return -1;
+	}
+
+	return 0;
+}
+
+void
+atl_rx_queue_release(void *rx_queue)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (rx_queue != NULL) {
+		struct atl_rx_queue *rxq = (struct atl_rx_queue *)rx_queue;
+
+		atl_rx_queue_release_mbufs(rxq);
+		rte_free(rxq->sw_ring);
+		rte_free(rxq);
+	}
+}
+
 uint16_t
 atl_prep_pkts(void *tx_queue __rte_unused,
	      struct rte_mbuf **tx_pkts __rte_unused,
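One detail worth noting before the burst function: at queue start the
tail register is programmed to nb_rx_desc - 1, and the refill path below
rewrites it as "last processed descriptor minus one", so the value handed
to hardware always trails the software index by one slot; otherwise a
completely full ring would be indistinguishable from an empty one. As a
small illustrative helper (not part of the patch):

#include <stdint.h>

static inline uint16_t
example_hw_tail(uint16_t sw_tail, uint16_t nb_desc)
{
	/* wrap backwards by one so tail never catches up with head */
	return (uint16_t)(sw_tail == 0 ? nb_desc - 1 : sw_tail - 1);
}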
@@ -24,14 +327,323 @@ atl_prep_pkts(void *tx_queue __rte_unused,
	return 0;
 }
 
-uint16_t
-atl_recv_pkts(void *rx_queue __rte_unused,
-	      struct rte_mbuf **rx_pkts __rte_unused,
-	      uint16_t nb_pkts __rte_unused)
+void
+atl_free_queues(struct rte_eth_dev *dev)
+{
+	unsigned int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		atl_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = 0;
+	}
+	dev->data->nb_rx_queues = 0;
+}
+
+int
+atl_start_queues(struct rte_eth_dev *dev)
 {
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (atl_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Start Rx queue %d failed", i);
+			return -1;
+		}
+	}
+
	return 0;
 }
 
+int
+atl_stop_queues(struct rte_eth_dev *dev)
+{
+	int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (atl_rx_queue_stop(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Stop Rx queue %d failed", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static uint64_t
+atl_desc_to_offload_flags(struct atl_rx_queue *rxq,
+			  struct hw_atl_rxd_wb_s *rxd_wb)
+{
+	uint64_t mbuf_flags = 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* IPv4 ? */
+	if (rxq->l3_csum_enabled && ((rxd_wb->pkt_type & 0x3) == 0)) {
+		/* IPv4 csum error ? */
+		if (rxd_wb->rx_stat & BIT(1))
+			mbuf_flags |= PKT_RX_IP_CKSUM_BAD;
+		else
+			mbuf_flags |= PKT_RX_IP_CKSUM_GOOD;
+	} else {
+		mbuf_flags |= PKT_RX_IP_CKSUM_UNKNOWN;
+	}
+
+	/* CSUM calculated ? */
+	if (rxq->l4_csum_enabled && (rxd_wb->rx_stat & BIT(3))) {
+		if (rxd_wb->rx_stat & BIT(2))
+			mbuf_flags |= PKT_RX_L4_CKSUM_BAD;
+		else
+			mbuf_flags |= PKT_RX_L4_CKSUM_GOOD;
+	} else {
+		mbuf_flags |= PKT_RX_L4_CKSUM_UNKNOWN;
+	}
+
+	return mbuf_flags;
+}
+
+static uint32_t
+atl_desc_to_pkt_type(struct hw_atl_rxd_wb_s *rxd_wb)
+{
+	uint32_t type = RTE_PTYPE_UNKNOWN;
+	uint16_t l2_l3_type = rxd_wb->pkt_type & 0x3;
+	uint16_t l4_type = (rxd_wb->pkt_type & 0x1C) >> 2;
+
+	switch (l2_l3_type) {
+	case 0:
+		type = RTE_PTYPE_L3_IPV4;
+		break;
+	case 1:
+		type = RTE_PTYPE_L3_IPV6;
+		break;
+	case 2:
+		type = RTE_PTYPE_L2_ETHER;
+		break;
+	case 3:
+		type = RTE_PTYPE_L2_ETHER_ARP;
+		break;
+	}
+
+	switch (l4_type) {
+	case 0:
+		type |= RTE_PTYPE_L4_TCP;
+		break;
+	case 1:
+		type |= RTE_PTYPE_L4_UDP;
+		break;
+	case 2:
+		type |= RTE_PTYPE_L4_SCTP;
+		break;
+	case 3:
+		type |= RTE_PTYPE_L4_ICMP;
+		break;
+	}
+
+	if (rxd_wb->pkt_type & BIT(5))
+		type |= RTE_PTYPE_L2_ETHER_VLAN;
+
+	return type;
+}
+
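A worked decode of the bitfields used by the two helpers above, for a
hypothetical writeback value (not taken from real hardware traffic):

/* pkt_type = 0x24 = 0b100100
 *   l2_l3 = 0x24 & 0x3         = 0 -> RTE_PTYPE_L3_IPV4
 *   l4    = (0x24 & 0x1C) >> 2 = 1 -> RTE_PTYPE_L4_UDP
 *   bit 5 set                      -> RTE_PTYPE_L2_ETHER_VLAN
 * atl_desc_to_pkt_type() therefore reports an IPv4/UDP frame carrying a
 * VLAN tag, which atl_recv_pkts() below stores in mbuf->packet_type.
 */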
+uint16_t
+atl_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct atl_rx_queue *rxq = (struct atl_rx_queue *)rx_queue;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
+	struct atl_adapter *adapter =
+		ATL_DEV_TO_ADAPTER(&rte_eth_devices[rxq->port_id]);
+	struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(adapter);
+	struct atl_rx_entry *sw_ring = rxq->sw_ring;
+
+	struct rte_mbuf *new_mbuf;
+	struct rte_mbuf *rx_mbuf, *rx_mbuf_prev, *rx_mbuf_first;
+	struct atl_rx_entry *rx_entry;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = 0;
+	struct hw_atl_rxd_wb_s rxd_wb;
+	struct hw_atl_rxd_s *rxd = NULL;
+	uint16_t tail = rxq->rx_tail;
+	uint64_t dma_addr;
+	uint16_t pkt_len = 0;
+
+	while (nb_rx < nb_pkts) {
+		uint16_t eop_tail = tail;
+
+		rxd = (struct hw_atl_rxd_s *)&rxq->hw_ring[tail];
+		rxd_wb = *(struct hw_atl_rxd_wb_s *)rxd;
+
+		if (!rxd_wb.dd) { /* RxD is not done */
+			break;
+		}
+
+		PMD_RX_LOG(ERR, "port_id=%u queue_id=%u tail=%u "
+			   "eop=0x%x pkt_len=%u hash=0x%x hash_type=0x%x",
+			   (unsigned int)rxq->port_id,
+			   (unsigned int)rxq->queue_id,
+			   (unsigned int)tail, (unsigned int)rxd_wb.eop,
+			   (unsigned int)rte_le_to_cpu_16(rxd_wb.pkt_len),
+			   rxd_wb.rss_hash, rxd_wb.rss_type);
+
+		/* RxD is not done */
+		if (!rxd_wb.eop) {
+			while (true) {
+				struct hw_atl_rxd_wb_s *eop_rxwbd;
+
+				eop_tail = (eop_tail + 1) % rxq->nb_rx_desc;
+				eop_rxwbd = (struct hw_atl_rxd_wb_s *)
+					&rxq->hw_ring[eop_tail];
+				if (!eop_rxwbd->dd) {
+					/* no EOP received yet */
+					eop_tail = tail;
+					break;
+				}
+				if (eop_rxwbd->dd && eop_rxwbd->eop)
+					break;
+			}
+			/* No EOP in ring */
+			if (eop_tail == tail)
+				break;
+		}
+
+		rx_mbuf_prev = NULL;
+		rx_mbuf_first = NULL;
+
+		/* Run through packet segments */
+		while (true) {
+			new_mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+			if (new_mbuf == NULL) {
+				PMD_RX_LOG(ERR,
+					   "RX mbuf alloc failed port_id=%u "
+					   "queue_id=%u",
+					   (unsigned int)rxq->port_id,
+					   (unsigned int)rxq->queue_id);
+				dev->data->rx_mbuf_alloc_failed++;
+				goto err_stop;
+			}
+
+			nb_hold++;
+			rx_entry = &sw_ring[tail];
+
+			rx_mbuf = rx_entry->mbuf;
+			rx_entry->mbuf = new_mbuf;
+			dma_addr = rte_cpu_to_le_64(
+				rte_mbuf_data_iova_default(new_mbuf));
+
+			/* setup RX descriptor */
+			rxd->hdr_addr = 0;
+			rxd->buf_addr = dma_addr;
+
+			/*
+			 * Initialize the returned mbuf.
+			 * 1) setup generic mbuf fields:
+			 *    - number of segments,
+			 *    - next segment,
+			 *    - packet length,
+			 *    - RX port identifier.
+			 * 2) integrate hardware offload data, if any:
+			 *    - RSS flag & hash,
+			 *    - IP checksum flag,
+			 *    - VLAN TCI, if any,
+			 *    - error flags.
+			 */
+			pkt_len = (uint16_t)rte_le_to_cpu_16(rxd_wb.pkt_len);
+			rx_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+			rte_prefetch1((char *)rx_mbuf->buf_addr +
+				      rx_mbuf->data_off);
+			rx_mbuf->nb_segs = 0;
+			rx_mbuf->next = NULL;
+			rx_mbuf->pkt_len = pkt_len;
+			rx_mbuf->data_len = pkt_len;
+			if (rxd_wb.eop) {
+				u16 remainder_len = pkt_len % rxq->buff_size;
+
+				if (!remainder_len)
+					remainder_len = rxq->buff_size;
+				rx_mbuf->data_len = remainder_len;
+			} else {
+				rx_mbuf->data_len = pkt_len > rxq->buff_size ?
+					rxq->buff_size : pkt_len;
+			}
+			rx_mbuf->port = rxq->port_id;
+
+			rx_mbuf->hash.rss = rxd_wb.rss_hash;
+
+			rx_mbuf->vlan_tci = rxd_wb.vlan;
+
+			rx_mbuf->ol_flags =
+				atl_desc_to_offload_flags(rxq, &rxd_wb);
+			rx_mbuf->packet_type = atl_desc_to_pkt_type(&rxd_wb);
+
+			if (!rx_mbuf_first)
+				rx_mbuf_first = rx_mbuf;
+			rx_mbuf_first->nb_segs++;
+
+			if (rx_mbuf_prev)
+				rx_mbuf_prev->next = rx_mbuf;
+			rx_mbuf_prev = rx_mbuf;
+
+			tail = (tail + 1) % rxq->nb_rx_desc;
+			/* Prefetch next mbufs */
+			rte_prefetch0(sw_ring[tail].mbuf);
+			if ((tail & 0x3) == 0) {
+				rte_prefetch0(&sw_ring[tail]);
+				rte_prefetch0(&sw_ring[tail]);
+			}
+
+			/* filled mbuf_first */
+			if (rxd_wb.eop)
+				break;
+			rxd = (struct hw_atl_rxd_s *)&rxq->hw_ring[tail];
+			rxd_wb = *(struct hw_atl_rxd_wb_s *)rxd;
+		}
+
+		/*
+		 * Store the mbuf address into the next entry of the array
+		 * of returned packets.
+		 */
+		rx_pkts[nb_rx++] = rx_mbuf_first;
+
+		PMD_RX_LOG(ERR, "add mbuf segs=%d pkt_len=%d",
+			   rx_mbuf_first->nb_segs,
+			   rx_mbuf_first->pkt_len);
+	}
+
+err_stop:
+
+	rxq->rx_tail = tail;
+
+	/*
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register.
+	 * Update the RDT with the value of the last processed RX descriptor
+	 * minus 1, to guarantee that the RDT register is never equal to the
+	 * RDH register, which creates a "full" ring situation from the
+	 * hardware point of view...
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(ERR, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   (unsigned int)rxq->port_id,
+			   (unsigned int)rxq->queue_id,
+			   (unsigned int)tail, (unsigned int)nb_hold,
+			   (unsigned int)nb_rx);
+		tail = (uint16_t)((tail == 0) ?
+			(rxq->nb_rx_desc - 1) : (tail - 1));
+
+		hw_atl_reg_rx_dma_desc_tail_ptr_set(hw, tail, rxq->queue_id);
+
+		nb_hold = 0;
+	}
+
+	rxq->nb_rx_hold = nb_hold;
+
+	return nb_rx;
+}
+
+
 uint16_t
 atl_xmit_pkts(void *tx_queue __rte_unused,
	      struct rte_mbuf **tx_pkts __rte_unused,
-- 
2.7.4
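A worked example of the segment-length math in atl_recv_pkts() above,
assuming buff_size = 2048 and a 5000-byte frame scattered over three
descriptors:

/* seg 1 (no EOP): data_len = min(5000, 2048) = 2048
 * seg 2 (no EOP): data_len = 2048
 * seg 3 (EOP):    data_len = 5000 % 2048 = 904
 * resulting mbuf chain: pkt_len = 5000, nb_segs = 3
 * When pkt_len is an exact multiple of buff_size the remainder would be
 * zero, which is why the EOP branch falls back to
 * remainder_len = buff_size for the last segment.
 */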