From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shijith Thotton
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, Jerin Jacob, Derek Chickles, Venkat Koppula,
	Srisivasubramanian S, Mallesham Jatharakonda
Date: Thu, 2 Mar 2017 17:02:27 +0530
Message-Id: <1488454371-3342-23-git-send-email-shijith.thotton@caviumnetworks.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1488454371-3342-1-git-send-email-shijith.thotton@caviumnetworks.com>
References: <1487669225-30091-1-git-send-email-shijith.thotton@caviumnetworks.com>
 <1488454371-3342-1-git-send-email-shijith.thotton@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 22/46] net/liquidio: add Rx data path
List-Id: DPDK patches and discussions
X-List-Received-Date: Thu, 02 Mar 2017 11:39:06 -0000

Add APIs to receive packets and re-fill ring buffers.

Signed-off-by: Shijith Thotton
Signed-off-by: Jerin Jacob
Signed-off-by: Derek Chickles
Signed-off-by: Venkat Koppula
Signed-off-by: Srisivasubramanian S
Signed-off-by: Mallesham Jatharakonda
---
 drivers/net/liquidio/base/lio_hw_defs.h |  12 +
 drivers/net/liquidio/lio_ethdev.c       |   5 +
 drivers/net/liquidio/lio_rxtx.c         | 380 ++++++++++++++++++++++++++++++++
 drivers/net/liquidio/lio_rxtx.h         |  13 ++
 4 files changed, 410 insertions(+)

diff --git a/drivers/net/liquidio/base/lio_hw_defs.h b/drivers/net/liquidio/base/lio_hw_defs.h
index 4271730..2db7085 100644
--- a/drivers/net/liquidio/base/lio_hw_defs.h
+++ b/drivers/net/liquidio/base/lio_hw_defs.h
@@ -112,12 +112,24 @@ enum octeon_tag_type {
 /* used for NIC operations */
 #define LIO_OPCODE	1
 
+/* Subcodes are used by host driver/apps to identify the sub-operation
+ * for the core. They only need to be unique for a given subsystem.
+ */
+#define LIO_OPCODE_SUBCODE(op, sub)	\
+		((((op) & 0x0f) << 8) | ((sub) & 0x7f))
+
 /** LIO_OPCODE subcodes */
 /* This subcode is sent by core PCI driver to indicate cores are ready. */
+#define LIO_OPCODE_NW_DATA	0x02 /* network packet data */
 #define LIO_OPCODE_IF_CFG	0x09
 
 #define LIO_MAX_RX_PKTLEN	(64 * 1024)
 
+/* RX(packets coming from wire) Checksum verification flags */
+/* TCP/UDP csum */
+#define LIO_L4_CSUM_VERIFIED	0x1
+#define LIO_IP_CSUM_VERIFIED	0x2
+
 /* Interface flags communicated between host driver and core app. */
 enum lio_ifflags {
 	LIO_IFFLAG_UNICAST	= 0x10
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index a93fa4a..ebfdf7a 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -404,6 +404,8 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 	rte_free(eth_dev->data->mac_addrs);
 	eth_dev->data->mac_addrs = NULL;
 
+	eth_dev->rx_pkt_burst = NULL;
+
 	return 0;
 }
 
@@ -415,6 +417,8 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
+	eth_dev->rx_pkt_burst = &lio_dev_recv_pkts;
+
 	/* Primary does the initialization. */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -448,6 +452,7 @@ static int lio_dev_configure(struct rte_eth_dev *eth_dev)
 		lio_dev_err(lio_dev, "MAC addresses memory allocation failed\n");
 		eth_dev->dev_ops = NULL;
+		eth_dev->rx_pkt_burst = NULL;
 		return -ENOMEM;
 	}
 
diff --git a/drivers/net/liquidio/lio_rxtx.c b/drivers/net/liquidio/lio_rxtx.c
index 9948023..9e4da3a 100644
--- a/drivers/net/liquidio/lio_rxtx.c
+++ b/drivers/net/liquidio/lio_rxtx.c
@@ -326,6 +326,386 @@
 	return 0;
 }
 
+static inline uint32_t
+lio_droq_get_bufcount(uint32_t buf_size, uint32_t total_len)
+{
+	uint32_t buf_cnt = 0;
+
+	while (total_len > (buf_size * buf_cnt))
+		buf_cnt++;
+
+	return buf_cnt;
+}
+
+/* If we were not able to refill all buffers, try to move around
+ * the buffers that were not dispatched.
+ */
+static inline uint32_t
+lio_droq_refill_pullup_descs(struct lio_droq *droq,
+			     struct lio_droq_desc *desc_ring)
+{
+	uint32_t refill_index = droq->refill_idx;
+	uint32_t desc_refilled = 0;
+
+	while (refill_index != droq->read_idx) {
+		if (droq->recv_buf_list[refill_index].buffer) {
+			droq->recv_buf_list[droq->refill_idx].buffer =
+				droq->recv_buf_list[refill_index].buffer;
+			desc_ring[droq->refill_idx].buffer_ptr =
+				desc_ring[refill_index].buffer_ptr;
+			droq->recv_buf_list[refill_index].buffer = NULL;
+			desc_ring[refill_index].buffer_ptr = 0;
+			do {
+				droq->refill_idx = lio_incr_index(
+							droq->refill_idx, 1,
+							droq->max_count);
+				desc_refilled++;
+				droq->refill_count--;
+			} while (droq->recv_buf_list[droq->refill_idx].buffer);
+		}
+		refill_index = lio_incr_index(refill_index, 1,
+					      droq->max_count);
+	} /* while */
+
+	return desc_refilled;
+}
+
+/* lio_droq_refill
+ *
+ * @param lio_dev	- pointer to the lio device structure
+ * @param droq		- droq in which descriptors require new buffers.
+ *
+ * Description:
+ *  Called during normal DROQ processing in interrupt mode or by the poll
+ *  thread to refill the descriptors from which buffers were dispatched
+ *  to upper layers. Attempts to allocate new buffers. If that fails, moves
+ *  up buffers (that were not dispatched) to form a contiguous ring.
+ *
+ * Returns:
+ *  No of descriptors refilled.
+ *
+ * Locks:
+ *  This routine is called with droq->lock held.
+ */
+static uint32_t
+lio_droq_refill(struct lio_device *lio_dev, struct lio_droq *droq)
+{
+	struct lio_droq_desc *desc_ring;
+	uint32_t desc_refilled = 0;
+	void *buf = NULL;
+
+	desc_ring = droq->desc_ring;
+
+	while (droq->refill_count && (desc_refilled < droq->max_count)) {
+		/* If a valid buffer exists (happens if there is no dispatch),
+		 * reuse the buffer, else allocate.
+		 */
+		if (droq->recv_buf_list[droq->refill_idx].buffer == NULL) {
+			buf = lio_recv_buffer_alloc(lio_dev, droq->q_no);
+			/* If a buffer could not be allocated, no point in
+			 * continuing
+			 */
+			if (buf == NULL)
+				break;
+
+			droq->recv_buf_list[droq->refill_idx].buffer = buf;
+		}
+
+		desc_ring[droq->refill_idx].buffer_ptr =
+			lio_map_ring(droq->recv_buf_list[droq->refill_idx].buffer);
+		/* Reset any previous values in the length field. */
+		droq->info_list[droq->refill_idx].length = 0;
+
+		droq->refill_idx = lio_incr_index(droq->refill_idx, 1,
+						  droq->max_count);
+		desc_refilled++;
+		droq->refill_count--;
+	}
+
+	if (droq->refill_count)
+		desc_refilled += lio_droq_refill_pullup_descs(droq, desc_ring);
+
+	/* if droq->refill_count
+	 * The refill count would not change in pass two. We only moved buffers
+	 * to close the gap in the ring, but we would still have the same no. of
+	 * buffers to refill.
+	 */
+	return desc_refilled;
+}
+
+static int
+lio_droq_fast_process_packet(struct lio_device *lio_dev,
+			     struct lio_droq *droq,
+			     struct rte_mbuf **rx_pkts)
+{
+	struct rte_mbuf *nicbuf = NULL;
+	struct lio_droq_info *info;
+	uint32_t total_len = 0;
+	int data_total_len = 0;
+	uint32_t pkt_len = 0;
+	union octeon_rh *rh;
+	int data_pkts = 0;
+
+	info = &droq->info_list[droq->read_idx];
+	lio_swap_8B_data((uint64_t *)info, 2);
+
+	if (!info->length)
+		return -1;
+
+	/* Len of resp hdr is included in the received data len.
+	 */
+	info->length -= OCTEON_RH_SIZE;
+	rh = &info->rh;
+
+	total_len += (uint32_t)info->length;
+
+	if (lio_opcode_slow_path(rh)) {
+		uint32_t buf_cnt;
+
+		buf_cnt = lio_droq_get_bufcount(droq->buffer_size,
+						(uint32_t)info->length);
+		droq->read_idx = lio_incr_index(droq->read_idx, buf_cnt,
+						droq->max_count);
+		droq->refill_count += buf_cnt;
+	} else {
+		if (info->length <= droq->buffer_size) {
+			if (rh->r_dh.has_hash)
+				pkt_len = (uint32_t)(info->length - 8);
+			else
+				pkt_len = (uint32_t)info->length;
+
+			nicbuf = droq->recv_buf_list[droq->read_idx].buffer;
+			droq->recv_buf_list[droq->read_idx].buffer = NULL;
+			droq->read_idx = lio_incr_index(
+						droq->read_idx, 1,
+						droq->max_count);
+			droq->refill_count++;
+
+			if (likely(nicbuf != NULL)) {
+				nicbuf->data_off = RTE_PKTMBUF_HEADROOM;
+				nicbuf->nb_segs = 1;
+				nicbuf->next = NULL;
+				/* We don't have a way to pass flags yet */
+				nicbuf->ol_flags = 0;
+				if (rh->r_dh.has_hash) {
+					uint64_t *hash_ptr;
+
+					nicbuf->ol_flags |= PKT_RX_RSS_HASH;
+					hash_ptr = rte_pktmbuf_mtod(nicbuf,
+								    uint64_t *);
+					lio_swap_8B_data(hash_ptr, 1);
+					nicbuf->hash.rss = (uint32_t)*hash_ptr;
+					nicbuf->data_off += 8;
+				}
+
+				nicbuf->pkt_len = pkt_len;
+				nicbuf->data_len = pkt_len;
+				nicbuf->port = lio_dev->port_id;
+				/* Store the mbuf */
+				rx_pkts[data_pkts++] = nicbuf;
+				data_total_len += pkt_len;
+			}
+
+			/* Prefetch buffer pointers when on a cache line
+			 * boundary
+			 */
+			if ((droq->read_idx & 3) == 0) {
+				rte_prefetch0(
+				    &droq->recv_buf_list[droq->read_idx]);
+				rte_prefetch0(
+				    &droq->info_list[droq->read_idx]);
+			}
+		} else {
+			struct rte_mbuf *first_buf = NULL;
+			struct rte_mbuf *last_buf = NULL;
+
+			while (pkt_len < info->length) {
+				int cpy_len = 0;
+
+				cpy_len = ((pkt_len + droq->buffer_size) >
+						info->length) ?
+					((uint32_t)info->length -
+						pkt_len)
+					: droq->buffer_size;
+
+				nicbuf =
+				    droq->recv_buf_list[droq->read_idx].buffer;
+				droq->recv_buf_list[droq->read_idx].buffer =
+				    NULL;
+
+				if (likely(nicbuf != NULL)) {
+					/* Note the first seg */
+					if (!pkt_len)
+						first_buf = nicbuf;
+
+					nicbuf->data_off = RTE_PKTMBUF_HEADROOM;
+					nicbuf->nb_segs = 1;
+					nicbuf->next = NULL;
+					nicbuf->port = lio_dev->port_id;
+					/* We don't have a way to pass
+					 * flags yet
+					 */
+					nicbuf->ol_flags = 0;
+					if ((!pkt_len) && (rh->r_dh.has_hash)) {
+						uint64_t *hash_ptr;
+
+						nicbuf->ol_flags |=
+						    PKT_RX_RSS_HASH;
+						hash_ptr = rte_pktmbuf_mtod(
+						    nicbuf, uint64_t *);
+						lio_swap_8B_data(hash_ptr, 1);
+						nicbuf->hash.rss =
+						    (uint32_t)*hash_ptr;
+						nicbuf->data_off += 8;
+						nicbuf->pkt_len = cpy_len - 8;
+						nicbuf->data_len = cpy_len - 8;
+					} else {
+						nicbuf->pkt_len = cpy_len;
+						nicbuf->data_len = cpy_len;
+					}
+
+					if (pkt_len)
+						first_buf->nb_segs++;
+
+					if (last_buf)
+						last_buf->next = nicbuf;
+
+					last_buf = nicbuf;
+				} else {
+					PMD_RX_LOG(lio_dev, ERR, "no buf\n");
+				}
+
+				pkt_len += cpy_len;
+				droq->read_idx = lio_incr_index(
+							droq->read_idx,
+							1, droq->max_count);
+				droq->refill_count++;
+
+				/* Prefetch buffer pointers when on a
+				 * cache line boundary
+				 */
+				if ((droq->read_idx & 3) == 0) {
+					rte_prefetch0(&droq->recv_buf_list
+							[droq->read_idx]);
+
+					rte_prefetch0(
+					    &droq->info_list[droq->read_idx]);
+				}
+			}
+			rx_pkts[data_pkts++] = first_buf;
+			if (rh->r_dh.has_hash)
+				data_total_len += (pkt_len - 8);
+			else
+				data_total_len += pkt_len;
+		}
+
+		/* Inform upper layer about packet checksum verification */
+		struct rte_mbuf *m = rx_pkts[data_pkts - 1];
+
+		if (rh->r_dh.csum_verified & LIO_IP_CSUM_VERIFIED)
+			m->ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+
+		if (rh->r_dh.csum_verified & LIO_L4_CSUM_VERIFIED)
+			m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
+	}
+
+	if (droq->refill_count >= droq->refill_threshold) {
+		int desc_refilled = lio_droq_refill(lio_dev, droq);
+
+		/* Flush the droq descriptor data to memory to be sure
+		 * that when we update the credits the data in memory is
+		 * accurate.
+		 */
+		rte_wmb();
+		rte_write32(desc_refilled, droq->pkts_credit_reg);
+		/* make sure mmio write completes */
+		rte_wmb();
+	}
+
+	info->length = 0;
+	info->rh.rh64 = 0;
+
+	return data_pkts;
+}
+
+static uint32_t
+lio_droq_fast_process_packets(struct lio_device *lio_dev,
+			      struct lio_droq *droq,
+			      struct rte_mbuf **rx_pkts,
+			      uint32_t pkts_to_process)
+{
+	int ret, data_pkts = 0;
+	uint32_t pkt;
+
+	for (pkt = 0; pkt < pkts_to_process; pkt++) {
+		ret = lio_droq_fast_process_packet(lio_dev, droq,
+						   &rx_pkts[data_pkts]);
+		if (ret < 0) {
+			lio_dev_err(lio_dev, "Port[%d] DROQ[%d] idx: %d len:0, pkt_cnt: %d\n",
+				    lio_dev->port_id, droq->q_no,
+				    droq->read_idx, pkts_to_process);
+			break;
+		}
+		data_pkts += ret;
+	}
+
+	rte_atomic64_sub(&droq->pkts_pending, pkt);
+
+	return data_pkts;
+}
+
+static inline uint32_t
+lio_droq_check_hw_for_pkts(struct lio_droq *droq)
+{
+	uint32_t last_count;
+	uint32_t pkt_count;
+
+	pkt_count = rte_read32(droq->pkts_sent_reg);
+
+	last_count = pkt_count - droq->pkt_count;
+	droq->pkt_count = pkt_count;
+
+	if (last_count)
+		rte_atomic64_add(&droq->pkts_pending, last_count);
+
+	return last_count;
+}
+
+uint16_t
+lio_dev_recv_pkts(void *rx_queue,
+		  struct rte_mbuf **rx_pkts,
+		  uint16_t budget)
+{
+	struct lio_droq *droq = rx_queue;
+	struct lio_device *lio_dev = droq->lio_dev;
+	uint32_t pkts_processed = 0;
+	uint32_t pkt_count = 0;
+
+	lio_droq_check_hw_for_pkts(droq);
+
+	pkt_count = rte_atomic64_read(&droq->pkts_pending);
+	if (!pkt_count)
+		return 0;
+
+	if (pkt_count > budget)
+		pkt_count = budget;
+
+	/* Grab the lock */
+	rte_spinlock_lock(&droq->lock);
+	pkts_processed = lio_droq_fast_process_packets(lio_dev,
+						       droq, rx_pkts,
+						       pkt_count);
+
+	if (droq->pkt_count) {
+		rte_write32(droq->pkt_count, droq->pkts_sent_reg);
+		droq->pkt_count = 0;
+	}
+
+	/* Release the spin lock */
+	rte_spinlock_unlock(&droq->lock);
+
+	return pkts_processed;
+}
+
 /**
  * lio_init_instr_queue()
  * @param lio_dev	- pointer to the lio device structure.
diff --git a/drivers/net/liquidio/lio_rxtx.h b/drivers/net/liquidio/lio_rxtx.h
index fc623ad..420b893 100644
--- a/drivers/net/liquidio/lio_rxtx.h
+++ b/drivers/net/liquidio/lio_rxtx.h
@@ -515,6 +515,17 @@ enum {
 	return (uint64_t)dma_addr;
 }
 
+static inline int
+lio_opcode_slow_path(union octeon_rh *rh)
+{
+	uint16_t subcode1, subcode2;
+
+	subcode1 = LIO_OPCODE_SUBCODE(rh->r.opcode, rh->r.subcode);
+	subcode2 = LIO_OPCODE_SUBCODE(LIO_OPCODE, LIO_OPCODE_NW_DATA);
+
+	return subcode2 != subcode1;
+}
+
 /* Macro to increment index.
  * Index is incremented by count; if the sum exceeds
  * max, index is wrapped-around to the start.
@@ -533,6 +544,8 @@ enum {
 int lio_setup_droq(struct lio_device *lio_dev, int q_no, int num_descs,
 		   int desc_size, struct rte_mempool *mpool,
 		   unsigned int socket_id);
+uint16_t lio_dev_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t budget);
 
 /** Setup instruction queue zero for the device
  * @param lio_dev which lio device to setup
-- 
1.8.3.1