From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shijith Thotton
To: Ferruh Yigit
Cc: dev@dpdk.org, Jerin Jacob, Derek Chickles, Venkat Koppula, Srisivasubramanian S, Mallesham Jatharakonda
Date: Sat, 25 Mar 2017 11:54:33 +0530
Message-Id: <1490423097-6797-23-git-send-email-shijith.thotton@caviumnetworks.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1490423097-6797-1-git-send-email-shijith.thotton@caviumnetworks.com>
References: <1488454371-3342-1-git-send-email-shijith.thotton@caviumnetworks.com> <1490423097-6797-1-git-send-email-shijith.thotton@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v3 22/46] net/liquidio: add Rx data path
List-Id: DPDK patches and discussions <dev.dpdk.org>

Add APIs to receive packets and re-fill ring buffers.

Signed-off-by: Shijith Thotton
Signed-off-by: Jerin Jacob
Signed-off-by: Derek Chickles
Signed-off-by: Venkat Koppula
Signed-off-by: Srisivasubramanian S
Signed-off-by: Mallesham Jatharakonda
---
 doc/guides/nics/features/liquidio.ini   |   4 +
 drivers/net/liquidio/base/lio_hw_defs.h |  12 +
 drivers/net/liquidio/lio_ethdev.c       |   5 +
 drivers/net/liquidio/lio_rxtx.c         | 380 ++++++++++++++++++++++++++++++++
 drivers/net/liquidio/lio_rxtx.h         |  13 ++
 5 files changed, 414 insertions(+)

diff --git a/doc/guides/nics/features/liquidio.ini b/doc/guides/nics/features/liquidio.ini
index 6c5d8d1..554d921 100644
--- a/doc/guides/nics/features/liquidio.ini
+++ b/doc/guides/nics/features/liquidio.ini
@@ -4,6 +4,10 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Scattered Rx         = Y
+CRC offload          = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
 Multiprocess aware   = Y
 Linux UIO            = Y
 Linux VFIO           = Y
diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
index 4271730..2db7085 100644
--- a/drivers/net/liquidio/base/lio_hw_defs.h
+++ b/drivers/net/liquidio/base/lio_hw_defs.h
@@ -112,12 +112,24 @@ enum octeon_tag_type {
 /* used for NIC operations */
 #define LIO_OPCODE	1
 
+/* Subcodes are used by host driver/apps to identify the sub-operation
+ * for the core. They only need to be unique for a given subsystem.
+ */
+#define LIO_OPCODE_SUBCODE(op, sub)		\
+	((((op) & 0x0f) << 8) | ((sub) & 0x7f))
+
 /** LIO_OPCODE subcodes */
 /* This subcode is sent by core PCI driver to indicate cores are ready. */
+#define LIO_OPCODE_NW_DATA	0x02 /* network packet data */
 #define LIO_OPCODE_IF_CFG	0x09
 
 #define LIO_MAX_RX_PKTLEN	(64 * 1024)
 
+/* RX (packets coming from wire) checksum verification flags */
+/* TCP/UDP csum */
+#define LIO_L4_CSUM_VERIFIED	0x1
+#define LIO_IP_CSUM_VERIFIED	0x2
+
 /* Interface flags communicated between host driver and core app. */
 enum lio_ifflags {
 	LIO_IFFLAG_UNICAST	= 0x10
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index a93fa4a..ebfdf7a 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -404,6 +404,8 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->mac_addrs);
 	eth_dev->data->mac_addrs = NULL;
 
+	eth_dev->rx_pkt_burst = NULL;
+
 	return 0;
 }
 
@@ -415,6 +417,8 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
+	eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
+
 	/* Primary does the initialization.
 	 */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
@@ -448,6 +452,7 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 		lio_dev_err(lio_dev, "MAC addresses memory allocation failed\n");
 		eth_dev->dev_ops = NULL;
+		eth_dev->rx_pkt_burst = NULL;
 		return -ENOMEM;
 	}
 
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
index 9948023..9e4da3a 100644
--- a/drivers/net/liquidio/lio_rxtx.c
+++ b/drivers/net/liquidio/lio_rxtx.c
@@ -326,6 +326,386 @@
 	return 0;
 }
 
+static inline uint32_t
+lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
+{
+	uint32_t buf_cnt = 0;
+
+	while (total_len > (buf_size * buf_cnt))
+		buf_cnt++;
+
+	return buf_cnt;
+}
+
+/* If we were not able to refill all buffers, try to move around
+ * the buffers that were not dispatched.
+ */
+static inline uint32_t
+lio_droq_refill_pullup_descs(struct lio_droq *droq,
+			     struct lio_droq_desc *desc_ring)
+{
+	uint32_t refill_index = droq->refill_idx;
+	uint32_t desc_refilled = 0;
+
+	while (refill_index != droq->read_idx) {
+		if (droq->recv_buf_list[refill_index].buffer) {
+			droq->recv_buf_list[droq->refill_idx].buffer =
+				droq->recv_buf_list[refill_index].buffer;
+			desc_ring[droq->refill_idx].buffer_ptr =
+				desc_ring[refill_index].buffer_ptr;
+			droq->recv_buf_list[refill_index].buffer = NULL;
+			desc_ring[refill_index].buffer_ptr = 0;
+			do {
+				droq->refill_idx = lio_incr_index(
+							droq->refill_idx, 1,
+							droq->max_count);
+				desc_refilled++;
+				droq->refill_count--;
+			} while (droq->recv_buf_list[droq->refill_idx].buffer);
+		}
+		refill_index = lio_incr_index(refill_index, 1,
+					      droq->max_count);
+	} /* while */
+
+	return desc_refilled;
+}
+
+/* lio_droq_refill
+ *
+ * @param lio_dev	- pointer to the lio device structure
+ * @param droq		- droq in which descriptors require new buffers.
+ *
+ * Description:
+ *  Called during normal DROQ processing in interrupt mode or by the poll
+ *  thread to refill the descriptors from which buffers were dispatched
+ *  to upper layers. Attempts to allocate new buffers. If that fails, moves
+ *  up buffers (that were not dispatched) to form a contiguous ring.
+ *
+ * Returns:
+ *  No. of descriptors refilled.
+ *
+ * Locks:
+ *  This routine is called with droq->lock held.
+ */
+static uint32_t
+lio_droq_refill(struct lio_device *lio_dev, struct lio_droq *droq)
+{
+	struct lio_droq_desc *desc_ring;
+	uint32_t desc_refilled = 0;
+	void *buf = NULL;
+
+	desc_ring = droq->desc_ring;
+
+	while (droq->refill_count && (desc_refilled < droq->max_count)) {
+		/* If a valid buffer exists (happens if there is no dispatch),
+		 * reuse the buffer, else allocate.
+		 */
+		if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
+			buf = lio_recv_buffer_alloc(lio_dev, droq->q_no);
+			/* If a buffer could not be allocated, no point in
+			 * continuing
+			 */
+			if (buf == NULL)
+				break;
+
+			droq->recv_buf_list[droq->refill_idx].buffer = buf;
+		}
+
+		desc_ring[droq->refill_idx].buffer_ptr =
+			lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
+		/* Reset any previous values in the length field. */
+		droq->info_list[droq->refill_idx].length = 0;
+
+		droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
+						  droq->max_count);
+		desc_refilled++;
+		droq->refill_count--;
+	}
+
+	if (droq->refill_count)
+		desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
+
+	/* if droq->refill_count
+	 * The refill count would not change in pass two. We only moved buffers
+	 * to close the gap in the ring, but we would still have the same no. of
+	 * buffers to refill.
+	 */
+	return desc_refilled;
+}
+
+static int
+lio_droq_fast_process_packet(struct lio_device *lio_dev,
+			     struct lio_droq *droq,
+			     struct rte_mbuf **rx_pkts)
+{
+	struct rte_mbuf *nicbuf = NULL;
+	struct lio_droq_info *info;
+	uint32_t total_len = 0;
+	int data_total_len = 0;
+	uint32_t pkt_len = 0;
+	union octeon_rh *rh;
+	int data_pkts = 0;
+
+	info = &droq->info_list[droq->read_idx];
+	lio_swap_8B_data((uint64_t *)info, 2);
+
+	if (!info->length)
+		return -1;
+
+	/* Len of resp hdr is included in the received data len. */
+	info->length -= OCTEON_RH_SIZE;
+	rh = &info->rh;
+
+	total_len += (uint32_t)info->length;
+
+	if (lio_opcode_slow_path(rh)) {
+		uint32_t buf_cnt;
+
+		buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
+						(uint32_t)info->length);
+		droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
+						droq->max_count);
+		droq->refill_count += buf_cnt;
+	} else {
+		if (info->length <= droq->buffer_size) {
+			if (rh->r_dh.has_hash)
+				pkt_len = (uint32_t)(info->length - 8);
+			else
+				pkt_len = (uint32_t)info->length;
+
+			nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
+			droq->recv_buf_list[droq->read_idx].buffer = NULL;
+			droq->read_idx = lio_incr_index(
+						droq->read_idx, 1,
+						droq->max_count);
+			droq->refill_count++;
+
+			if (likely(nicbuf != NULL)) {
+				nicbuf->data_off = RTE_PKTMBUF_HEADROOM;
+				nicbuf->nb_segs = 1;
+				nicbuf->next = NULL;
+				/* We don't have a way to pass flags yet */
+				nicbuf->ol_flags = 0;
+				if (rh->r_dh.has_hash) {
+					uint64_t *hash_ptr;
+
+					nicbuf->ol_flags |= PKT_RX_RSS_HASH;
+					hash_ptr = rte_pktmbuf_mtod(nicbuf,
+								    uint64_t *);
+					lio_swap_8B_data(hash_ptr, 1);
+					nicbuf->hash.rss = (uint32_t)*hash_ptr;
+					nicbuf->data_off += 8;
+				}
+
+				nicbuf->pkt_len = pkt_len;
+				nicbuf->data_len = pkt_len;
+				nicbuf->port = lio_dev->port_id;
+				/* Store the mbuf */
+				rx_pkts[data_pkts++] = nicbuf;
+				data_total_len += pkt_len;
+			}
+
+			/* Prefetch buffer pointers when on a cache line
+			 * boundary
+			 */
+			if ((droq->read_idx & 3) == 0) {
+				rte_prefetch0(
+				    &droq->recv_buf_list[droq->read_idx]);
+				rte_prefetch0(
+				    &droq->info_list[droq->read_idx]);
+			}
+		} else {
+			struct rte_mbuf *first_buf = NULL;
+			struct rte_mbuf *last_buf = NULL;
+
+			while (pkt_len < info->length) {
+				int cpy_len = 0;
+
+				cpy_len = ((pkt_len + droq->buffer_size) >
+						info->length)
+						? ((uint32_t)info->length -
+							pkt_len)
+						: droq->buffer_size;
+
+				nicbuf =
+				    droq->recv_buf_list[droq->read_idx].buffer;
+				droq->recv_buf_list[droq->read_idx].buffer =
+				    NULL;
+
+				if (likely(nicbuf != NULL)) {
+					/* Note the first seg */
+					if (!pkt_len)
+						first_buf = nicbuf;
+
+					nicbuf->data_off = RTE_PKTMBUF_HEADROOM;
+					nicbuf->nb_segs = 1;
+					nicbuf->next = NULL;
+					nicbuf->port = lio_dev->port_id;
+					/* We don't have a way to pass
+					 * flags yet
+					 */
+					nicbuf->ol_flags = 0;
+					if ((!pkt_len) && (rh->r_dh.has_hash)) {
+						uint64_t *hash_ptr;
+
+						nicbuf->ol_flags |=
+						    PKT_RX_RSS_HASH;
+						hash_ptr = rte_pktmbuf_mtod(
+						    nicbuf, uint64_t *);
+						lio_swap_8B_data(hash_ptr, 1);
+						nicbuf->hash.rss =
+						    (uint32_t)*hash_ptr;
+						nicbuf->data_off += 8;
+						nicbuf->pkt_len = cpy_len - 8;
+						nicbuf->data_len = cpy_len - 8;
+					} else {
+						nicbuf->pkt_len = cpy_len;
+						nicbuf->data_len = cpy_len;
+					}
+
+					if (pkt_len)
+						first_buf->nb_segs++;
+
+					if (last_buf)
+						last_buf->next = nicbuf;
+
+					last_buf = nicbuf;
+				} else {
+					PMD_RX_LOG(lio_dev, ERR, "no buf\n");
+				}
+
+				pkt_len += cpy_len;
+				droq->read_idx = lio_incr_index(
+							droq->read_idx,
+							1, droq->max_count);
+				droq->refill_count++;
+
+				/* Prefetch buffer pointers when on a
+				 * cache line boundary
+				 */
+				if ((droq->read_idx & 3) == 0) {
+					rte_prefetch0(&droq->recv_buf_list
+							[droq->read_idx]);
+
+					rte_prefetch0(
+					    &droq->info_list[droq->read_idx]);
+				}
+			}
+			rx_pkts[data_pkts++] = first_buf;
+			if (rh->r_dh.has_hash)
+				data_total_len += (pkt_len - 8);
+			else
+				data_total_len += pkt_len;
+		}
+
+		/* Inform upper layer about packet checksum verification */
+		struct rte_mbuf *m = rx_pkts[data_pkts - 1];
+
+		if (rh->r_dh.csum_verified &
+		    LIO_IP_CSUM_VERIFIED)
+			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+
+		if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
+			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+	}
+
+	if (droq->refill_count >= droq->refill_threshold) {
+		int desc_refilled = lio_droq_refill(lio_dev, droq);
+
+		/* Flush the droq descriptor data to memory to be sure
+		 * that when we update the credits the data in memory is
+		 * accurate.
+		 */
+		rte_wmb();
+		rte_write32(desc_refilled, droq->pkts_credit_reg);
+		/* make sure mmio write completes */
+		rte_wmb();
+	}
+
+	info->length = 0;
+	info->rh.rh64 = 0;
+
+	return data_pkts;
+}
+
+static uint32_t
+lio_droq_fast_process_packets(struct lio_device *lio_dev,
+			      struct lio_droq *droq,
+			      struct rte_mbuf **rx_pkts,
+			      uint32_t pkts_to_process)
+{
+	int ret, data_pkts = 0;
+	uint32_t pkt;
+
+	for (pkt = 0; pkt < pkts_to_process; pkt++) {
+		ret = lio_droq_fast_process_packet(lio_dev, droq,
+						   &rx_pkts[data_pkts]);
+		if (ret < 0) {
+			lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
+				    lio_dev->port_id, droq->q_no,
+				    droq->read_idx, pkts_to_process);
+			break;
+		}
+		data_pkts += ret;
+	}
+
+	rte_atomic64_sub(&droq->pkts_pending, pkt);
+
+	return data_pkts;
+}
+
+static inline uint32_t
+lio_droq_check_hw_for_pkts(struct lio_droq *droq)
+{
+	uint32_t last_count;
+	uint32_t pkt_count;
+
+	pkt_count = rte_read32(droq->pkts_sent_reg);
+
+	last_count = pkt_count - droq->pkt_count;
+	droq->pkt_count = pkt_count;
+
+	if (last_count)
+		rte_atomic64_add(&droq->pkts_pending, last_count);
+
+	return last_count;
+}
+
+uint16_t
+lio_dev_recv_pkts(void *rx_queue,
+		  struct rte_mbuf **rx_pkts,
+		  uint16_t budget)
+{
+	struct lio_droq *droq = rx_queue;
+	struct lio_device *lio_dev = droq->lio_dev;
+	uint32_t pkts_processed = 0;
+	uint32_t pkt_count = 0;
+
+	lio_droq_check_hw_for_pkts(droq);
+
+	pkt_count = rte_atomic64_read(&droq->pkts_pending);
+	if (!pkt_count)
+		return 0;
+
+	if (pkt_count > budget)
+		pkt_count = budget;
+
+	/* Grab the lock */
+	rte_spinlock_lock(&droq->lock);
+	pkts_processed = lio_droq_fast_process_packets(lio_dev,
+						       droq, rx_pkts,
+						       pkt_count);
+
+	if (droq->pkt_count) {
+		rte_write32(droq->pkt_count, droq->pkts_sent_reg);
+		droq->pkt_count = 0;
+	}
+
+	/* Release the spin lock */
+	rte_spinlock_unlock(&droq->lock);
+
+	return pkts_processed;
+}
+
 /**
  * lio_init_instr_queue()
  * @param lio_dev	- pointer to the lio device structure.
diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
index fc623ad..420b893 100644
--- a/drivers/net/liquidio/lio_rxtx.h
+++ b/drivers/net/liquidio/lio_rxtx.h
@@ -515,6 +515,17 @@ enum {
 	return (uint64_t)dma_addr;
 }
 
+static inline int
+lio_opcode_slow_path(union octeon_rh *rh)
+{
+	uint16_t subcode1, subcode2;
+
+	subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
+	subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
+
+	return subcode2 != subcode1;
+}
+
 /* Macro to increment index.
  * Index is incremented by count; if the sum exceeds
  * max, index is wrapped-around to the start.
@@ -533,6 +544,8 @@ enum {
 int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
 		   int desc_size, struct rte_mempool *mpool,
 		   unsigned int socket_id);
+uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t budget);
 
 /** Setup instruction queue zero for the device
  * @param lio_dev	which lio device to setup
-- 
1.8.3.1