From: Andrew Boyer
Cc: Andrew Boyer, Neel Patel
Subject: [PATCH v1 19/35] net/ionic: overhaul receive side for performance
Date: Mon, 10 Oct 2022 17:50:16 -0700
Message-ID: <20221011005032.47584-20-andrew.boyer@amd.com>
In-Reply-To: <20221007174336.54354-1-andrew.boyer@amd.com>
References: <20221007174336.54354-1-andrew.boyer@amd.com>
List-Id: DPDK patches and discussions

Linearize RX mbuf chains in the expanded info array.
Clean one and fill one per CQE (completions are not coalesced).
Touch the mbufs as little as possible in the fill stage.
When touching the mbuf in the clean stage, use the rearm_data unions.
Ring the doorbell once at the end of the bulk clean/fill.

Signed-off-by: Neel Patel
Signed-off-by: Andrew Boyer
---
 doc/guides/rel_notes/release_22_11.rst |   1 +
 drivers/net/ionic/ionic_dev.h          |   2 +-
 drivers/net/ionic/ionic_lif.c          |  49 +++++-
 drivers/net/ionic/ionic_lif.h          |   3 +-
 drivers/net/ionic/ionic_rxtx.c         | 225 ++++++++++++-------------
 drivers/net/ionic/ionic_rxtx.h         |   1 -
 6 files changed, 159 insertions(+), 122 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 4135021283..6c8a8d5bdb 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -164,6 +164,7 @@ New Features
   Updated the ionic PMD with new features and improvements, including:
 
   * Updated to reflect that Pensando has been acquired by AMD.
+  * Enhanced data path to provide substantial performance improvements.
 
 * **Added support for MACsec in rte_security.**
 
diff --git a/drivers/net/ionic/ionic_dev.h b/drivers/net/ionic/ionic_dev.h
index 55a9485bff..6a80ebc71b 100644
--- a/drivers/net/ionic/ionic_dev.h
+++ b/drivers/net/ionic/ionic_dev.h
@@ -132,7 +132,7 @@ struct ionic_dev {
 #define Q_NEXT_TO_POST(_q, _n)	(((_q)->head_idx + (_n)) & ((_q)->size_mask))
 #define Q_NEXT_TO_SRVC(_q, _n)	(((_q)->tail_idx + (_n)) & ((_q)->size_mask))
 
-#define IONIC_INFO_IDX(_q, _i)	(_i)
+#define IONIC_INFO_IDX(_q, _i)	((_i) * (_q)->num_segs)
 #define IONIC_INFO_PTR(_q, _i)	(&(_q)->info[IONIC_INFO_IDX((_q), _i)])
 
 struct ionic_queue {
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index cc64aedaa1..db5d42dda6 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -116,7 +116,6 @@ ionic_lif_get_abs_stats(const struct ionic_lif *lif, struct rte_eth_stats *stats
 		struct ionic_rx_stats *rx_stats = &lif->rxqcqs[i]->stats;
 
 		stats->ierrors += rx_stats->bad_cq_status +
-			rx_stats->no_room +
 			rx_stats->bad_len;
 	}
 
@@ -136,7 +135,6 @@ ionic_lif_get_abs_stats(const struct ionic_lif *lif, struct rte_eth_stats *stats
 		stats->q_ibytes[i] = rx_stats->bytes;
 		stats->q_errors[i] = rx_stats->bad_cq_status +
-			rx_stats->no_room +
 			rx_stats->bad_len;
 	}
 
@@ -608,8 +606,9 @@ ionic_qcq_alloc(struct ionic_lif *lif,
 
 	new->lif = lif;
 
+	/* Most queue types will store 1 ptr per descriptor */
 	new->q.info = rte_calloc_socket("ionic",
-			num_descs, sizeof(void *),
+			num_descs * num_segs, sizeof(void *),
 			rte_mem_page_size(), socket_id);
 	if (!new->q.info) {
 		IONIC_PRINT(ERR, "Cannot allocate queue info");
@@ -698,6 +697,42 @@ ionic_qcq_free(struct ionic_qcq *qcq)
 	rte_free(qcq);
 }
 
+static uint64_t
+ionic_rx_rearm_data(struct ionic_lif *lif)
+{
+	struct rte_mbuf rxm;
+
+	memset(&rxm, 0, sizeof(rxm));
+
+	rte_mbuf_refcnt_set(&rxm, 1);
+	rxm.data_off = RTE_PKTMBUF_HEADROOM;
+	rxm.nb_segs = 1;
+	rxm.port = lif->port_id;
+
+	rte_compiler_barrier();
+
+	RTE_BUILD_BUG_ON(sizeof(rxm.rearm_data[0]) != sizeof(uint64_t));
+	return rxm.rearm_data[0];
+}
+
+static uint64_t
+ionic_rx_seg_rearm_data(struct ionic_lif *lif)
+{
+	struct rte_mbuf rxm;
+
+	memset(&rxm, 0, sizeof(rxm));
+
+	rte_mbuf_refcnt_set(&rxm, 1);
+	rxm.data_off = 0; /* no headroom */
+	rxm.nb_segs = 1;
+	rxm.port = lif->port_id;
+
+	rte_compiler_barrier();
+
+	RTE_BUILD_BUG_ON(sizeof(rxm.rearm_data[0]) != sizeof(uint64_t));
+	return rxm.rearm_data[0];
+}
+
 int
 ionic_rx_qcq_alloc(struct ionic_lif *lif,
		uint32_t socket_id, uint32_t index,
		uint16_t nrxq_descs, struct rte_mempool *mb_pool,
@@ -721,11 +756,13 @@ ionic_rx_qcq_alloc(struct ionic_lif *lif, uint32_t socket_id, uint32_t index,
 
 	/*
 	 * Calculate how many fragment pointers might be stored in queue.
+	 * This is the worst-case number, so that there's enough room in
+	 * the info array.
 	 */
 	max_segs = 1 + (max_mtu + RTE_PKTMBUF_HEADROOM - 1) / seg_size;
 
-	IONIC_PRINT(DEBUG, "rxq %u frame_size %u seg_size %u max_segs %u",
-		index, lif->frame_size, seg_size, max_segs);
+	IONIC_PRINT(DEBUG, "rxq %u max_mtu %u seg_size %u max_segs %u",
+		index, max_mtu, seg_size, max_segs);
 	if (max_segs > max_segs_fw) {
 		IONIC_PRINT(ERR, "Rx mbuf size insufficient (%d > %d avail)",
 			max_segs, max_segs_fw);
@@ -751,6 +788,8 @@ ionic_rx_qcq_alloc(struct ionic_lif *lif, uint32_t socket_id, uint32_t index,
 	rxq->flags = flags;
 	rxq->seg_size = seg_size;
 	rxq->hdr_seg_size = hdr_seg_size;
+	rxq->rearm_data = ionic_rx_rearm_data(lif);
+	rxq->rearm_seg_data = ionic_rx_seg_rearm_data(lif);
 
 	lif->rxqcqs[index] = rxq;
 	*rxq_out = rxq;
diff --git a/drivers/net/ionic/ionic_lif.h b/drivers/net/ionic/ionic_lif.h
index 237fd0a2ef..b0bd721b06 100644
--- a/drivers/net/ionic/ionic_lif.h
+++ b/drivers/net/ionic/ionic_lif.h
@@ -40,7 +40,6 @@ struct ionic_rx_stats {
 	uint64_t packets;
 	uint64_t bytes;
 	uint64_t bad_cq_status;
-	uint64_t no_room;
 	uint64_t bad_len;
 	uint64_t mtods;
 };
@@ -80,6 +79,8 @@ struct ionic_rx_qcq {
 	/* cacheline2 */
 	struct rte_mempool *mb_pool;
 
+	uint64_t rearm_data;
+	uint64_t rearm_seg_data;
 	uint16_t frame_size;	/* Based on configured MTU */
 	uint16_t hdr_seg_size;	/* Length of first segment of RX chain */
 	uint16_t seg_size;	/* Length of all subsequent segments */
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index 2a34465e46..bb6ca019d9 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -72,7 +72,11 @@ ionic_rx_empty(struct ionic_rx_qcq *rxq)
 {
 	struct ionic_queue *q = &rxq->qcq.q;
 
-	ionic_empty_array(q->info, q->num_descs, 0);
+	/*
+	 * Walk the full info array so that the clean up includes any
+	 * fragments that were left dangling for later reuse
+	 */
+	ionic_empty_array(q->info, q->num_descs * q->num_segs, 0);
 }
 
 /*********************************************************************
@@ -658,9 +662,6 @@ ionic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 *
 **********************************************************************/
 
-static void ionic_rx_recycle(struct ionic_queue *q, uint32_t q_desc_index,
-		struct rte_mbuf *mbuf);
-
 void
 ionic_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 		struct rte_eth_rxq_info *qinfo)
@@ -763,64 +764,67 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 	return 0;
 }
 
+/*
+ * Cleans one descriptor. Connects the filled mbufs into a chain.
+ * Does not advance the tail index.
+ */
 static __rte_always_inline void
-ionic_rx_clean(struct ionic_rx_qcq *rxq,
-		uint32_t q_desc_index, uint32_t cq_desc_index,
+ionic_rx_clean_one(struct ionic_rx_qcq *rxq,
+		struct ionic_rxq_comp *cq_desc,
 		struct ionic_rx_service *rx_svc)
 {
 	struct ionic_queue *q = &rxq->qcq.q;
-	struct ionic_cq *cq = &rxq->qcq.cq;
-	struct ionic_rxq_comp *cq_desc_base = cq->base;
-	struct ionic_rxq_comp *cq_desc = &cq_desc_base[cq_desc_index];
-	struct rte_mbuf *rxm, *rxm_seg;
+	struct rte_mbuf *rxm, *rxm_seg, *prev_rxm;
+	struct ionic_rx_stats *stats = &rxq->stats;
 	uint64_t pkt_flags = 0;
 	uint32_t pkt_type;
-	struct ionic_rx_stats *stats = &rxq->stats;
-	uint32_t left;
+	uint32_t left, i;
+	uint16_t cq_desc_len;
 	void **info;
 
-	assert(q_desc_index == cq_desc->comp_index);
+	cq_desc_len = rte_le_to_cpu_16(cq_desc->len);
 
-	info = IONIC_INFO_PTR(q, cq_desc->comp_index);
+	info = IONIC_INFO_PTR(q, q->tail_idx);
 
 	rxm = info[0];
 
 	if (cq_desc->status) {
 		stats->bad_cq_status++;
-		ionic_rx_recycle(q, q_desc_index, rxm);
-		return;
-	}
-
-	if (rx_svc->nb_rx >= rx_svc->nb_pkts) {
-		stats->no_room++;
-		ionic_rx_recycle(q, q_desc_index, rxm);
 		return;
 	}
 
-	if (cq_desc->len > rxq->frame_size || cq_desc->len == 0) {
+	if (cq_desc_len > rxq->frame_size || cq_desc_len == 0) {
 		stats->bad_len++;
-		ionic_rx_recycle(q, q_desc_index, rxm);
 		return;
 	}
 
-	rxm->data_off = RTE_PKTMBUF_HEADROOM;
-	rte_prefetch1((char *)rxm->buf_addr + rxm->data_off);
-	rxm->nb_segs = 1; /* cq_desc->num_sg_elems */
-	rxm->pkt_len = cq_desc->len;
-	rxm->port = rxq->qcq.lif->port_id;
+	info[0] = NULL;
 
-	rxm->data_len = RTE_MIN(rxq->hdr_seg_size, cq_desc->len);
-	left = cq_desc->len - rxm->data_len;
+	/* Set the mbuf metadata based on the cq entry */
+	rxm->rearm_data[0] = rxq->rearm_data;
+	rxm->pkt_len = cq_desc_len;
+	rxm->data_len = RTE_MIN(rxq->hdr_seg_size, cq_desc_len);
+	left = cq_desc_len - rxm->data_len;
+	rxm->nb_segs = cq_desc->num_sg_elems + 1;
+	prev_rxm = rxm;
 
-	rxm_seg = rxm->next;
-	while (rxm_seg && left) {
+	for (i = 1; i < rxm->nb_segs && left; i++) {
+		rxm_seg = info[i];
+		info[i] = NULL;
+
+		/* Set the chained mbuf metadata */
+		rxm_seg->rearm_data[0] = rxq->rearm_seg_data;
 		rxm_seg->data_len = RTE_MIN(rxq->seg_size, left);
 		left -= rxm_seg->data_len;
 
-		rxm_seg = rxm_seg->next;
-		rxm->nb_segs++;
+		/* Link the mbuf */
+		prev_rxm->next = rxm_seg;
+		prev_rxm = rxm_seg;
 	}
 
+	/* Terminate the mbuf chain */
+	prev_rxm->next = NULL;
+
 	/* RSS */
 	pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	rxm->hash.rss = rte_le_to_cpu_32(cq_desc->rss_hash);
@@ -897,77 +901,74 @@ ionic_rx_clean(struct ionic_rx_qcq *rxq,
 	stats->bytes += rxm->pkt_len;
 }
 
-static void
-ionic_rx_recycle(struct ionic_queue *q, uint32_t q_desc_index,
-		struct rte_mbuf *mbuf)
-{
-	struct ionic_rxq_desc *desc_base = q->base;
-	struct ionic_rxq_desc *old = &desc_base[q_desc_index];
-	struct ionic_rxq_desc *new = &desc_base[q->head_idx];
-
-	new->addr = old->addr;
-	new->len = old->len;
-
-	q->info[q->head_idx] = mbuf;
-
-	q->head_idx = Q_NEXT_TO_POST(q, 1);
-
-	ionic_q_flush(q);
-}
-
+/*
+ * Fills one descriptor with mbufs. Does not advance the head index.
+ */
 static __rte_always_inline int
-ionic_rx_fill(struct ionic_rx_qcq *rxq)
+ionic_rx_fill_one(struct ionic_rx_qcq *rxq)
 {
 	struct ionic_queue *q = &rxq->qcq.q;
+	struct rte_mbuf *rxm, *rxm_seg;
 	struct ionic_rxq_desc *desc, *desc_base = q->base;
 	struct ionic_rxq_sg_desc *sg_desc, *sg_desc_base = q->sg_base;
-	struct ionic_rxq_sg_elem *elem;
+	rte_iova_t data_iova;
+	uint32_t i;
 	void **info;
-	rte_iova_t dma_addr;
-	uint32_t i, j;
 
-	/* Initialize software ring entries */
-	for (i = ionic_q_space_avail(q); i; i--) {
-		struct rte_mbuf *rxm = rte_mbuf_raw_alloc(rxq->mb_pool);
-		struct rte_mbuf *prev_rxm_seg;
+	info = IONIC_INFO_PTR(q, q->head_idx);
+	desc = &desc_base[q->head_idx];
+	sg_desc = &sg_desc_base[q->head_idx];
+
+	/* mbuf is unused => whole chain is unused */
+	if (unlikely(info[0]))
+		return 0;
+
+	rxm = rte_mbuf_raw_alloc(rxq->mb_pool);
+	if (unlikely(rxm == NULL)) {
+		assert(0);
+		return -ENOMEM;
+	}
+
+	info[0] = rxm;
+
+	data_iova = rte_mbuf_data_iova_default(rxm);
+	desc->addr = rte_cpu_to_le_64(data_iova);
 
-		if (rxm == NULL) {
-			IONIC_PRINT(ERR, "RX mbuf alloc failed");
+	for (i = 1; i < q->num_segs; i++) {
+		/* mbuf is unused => rest of the chain is unused */
+		if (info[i])
+			return 0;
+
+		rxm_seg = rte_mbuf_raw_alloc(rxq->mb_pool);
+		if (rxm_seg == NULL) {
+			assert(0);
 			return -ENOMEM;
 		}
 
-		info = IONIC_INFO_PTR(q, q->head_idx);
+		info[i] = rxm_seg;
 
-		desc = &desc_base[q->head_idx];
-		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(rxm));
-		desc->addr = dma_addr;
-		rxm->next = NULL;
-
-		prev_rxm_seg = rxm;
-		sg_desc = &sg_desc_base[q->head_idx];
-		elem = sg_desc->elems;
-		for (j = 0; j < q->num_segs - 1u; j++) {
-			struct rte_mbuf *rxm_seg;
-			rte_iova_t data_iova;
-
-			rxm_seg = rte_mbuf_raw_alloc(rxq->mb_pool);
-			if (rxm_seg == NULL) {
-				IONIC_PRINT(ERR, "RX mbuf alloc failed");
-				return -ENOMEM;
-			}
+		/* The data_off does not get set to 0 until later */
+		data_iova = rxm_seg->buf_iova;
+		sg_desc->elems[i - 1].addr = rte_cpu_to_le_64(data_iova);
+	}
 
-			rxm_seg->data_off = 0;
-			data_iova = rte_mbuf_data_iova(rxm_seg);
-			dma_addr = rte_cpu_to_le_64(data_iova);
-			elem->addr = dma_addr;
-			elem++;
+	return 0;
+}
 
-			rxm_seg->next = NULL;
-			prev_rxm_seg->next = rxm_seg;
-			prev_rxm_seg = rxm_seg;
-		}
+/*
+ * Fills all descriptors with mbufs.
+ */
+static int __rte_cold
+ionic_rx_fill(struct ionic_rx_qcq *rxq)
+{
+	struct ionic_queue *q = &rxq->qcq.q;
+	uint32_t i;
+	int err;
 
-		info[0] = rxm;
+	for (i = 1; i < q->num_descs; i++) {
+		err = ionic_rx_fill_one(rxq);
+		if (err)
+			return err;
 
 		q->head_idx = Q_NEXT_TO_POST(q, 1);
 	}
 
@@ -1056,53 +1057,52 @@ ionic_dev_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 	return 0;
 }
 
+/*
+ * Walk the CQ to find completed receive descriptors.
+ * Any completed descriptor found is refilled.
+ */
 static __rte_always_inline void
 ionic_rxq_service(struct ionic_rx_qcq *rxq, uint32_t work_to_do,
 		struct ionic_rx_service *rx_svc)
 {
 	struct ionic_cq *cq = &rxq->qcq.cq;
 	struct ionic_queue *q = &rxq->qcq.q;
+	struct ionic_rxq_desc *q_desc_base = q->base;
 	struct ionic_rxq_comp *cq_desc, *cq_desc_base = cq->base;
-	bool more;
-	uint32_t curr_q_tail_idx, curr_cq_tail_idx;
 	uint32_t work_done = 0;
 
-	if (work_to_do == 0)
-		return;
-
 	cq_desc = &cq_desc_base[cq->tail_idx];
+
 	while (color_match(cq_desc->pkt_type_color, cq->done_color)) {
-		curr_cq_tail_idx = cq->tail_idx;
 		cq->tail_idx = Q_NEXT_TO_SRVC(cq, 1);
-
 		if (cq->tail_idx == 0)
 			cq->done_color = !cq->done_color;
 
-		/* Prefetch the next 4 descriptors */
-		if ((cq->tail_idx & 0x3) == 0)
-			rte_prefetch0(&cq_desc_base[cq->tail_idx]);
-
-		do {
-			more = (q->tail_idx != cq_desc->comp_index);
+		/* Prefetch 8 x 8B bufinfo */
+		rte_prefetch0(IONIC_INFO_PTR(q, Q_NEXT_TO_SRVC(q, 8)));
+		/* Prefetch 4 x 16B comp */
+		rte_prefetch0(&cq_desc_base[Q_NEXT_TO_SRVC(cq, 4)]);
+		/* Prefetch 4 x 16B descriptors */
+		rte_prefetch0(&q_desc_base[Q_NEXT_TO_POST(q, 4)]);
 
-			curr_q_tail_idx = q->tail_idx;
-			q->tail_idx = Q_NEXT_TO_SRVC(q, 1);
+		ionic_rx_clean_one(rxq, cq_desc, rx_svc);
 
-			/* Prefetch the next 4 descriptors */
-			if ((q->tail_idx & 0x3) == 0)
-				/* q desc info */
-				rte_prefetch0(&q->info[q->tail_idx]);
+		q->tail_idx = Q_NEXT_TO_SRVC(q, 1);
 
-			ionic_rx_clean(rxq, curr_q_tail_idx, curr_cq_tail_idx,
-				rx_svc);
+		(void)ionic_rx_fill_one(rxq);
 
-		} while (more);
+		q->head_idx = Q_NEXT_TO_POST(q, 1);
 
 		if (++work_done == work_to_do)
			break;
 
 		cq_desc = &cq_desc_base[cq->tail_idx];
 	}
+
+	/* Update the queue indices and ring the doorbell */
+	if (work_done)
+		ionic_q_flush(q);
 }
 
 /*
@@ -1141,12 +1141,9 @@ ionic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct ionic_rx_service rx_svc;
 
 	rx_svc.rx_pkts = rx_pkts;
-	rx_svc.nb_pkts = nb_pkts;
 	rx_svc.nb_rx = 0;
 
 	ionic_rxq_service(rxq, nb_pkts, &rx_svc);
 
-	ionic_rx_fill(rxq);
-
 	return rx_svc.nb_rx;
 }
 
diff --git a/drivers/net/ionic/ionic_rxtx.h b/drivers/net/ionic/ionic_rxtx.h
index 91a9073803..79ec1112de 100644
--- a/drivers/net/ionic/ionic_rxtx.h
+++ b/drivers/net/ionic/ionic_rxtx.h
@@ -10,7 +10,6 @@
 struct ionic_rx_service {
 	/* cb in */
 	struct rte_mbuf **rx_pkts;
-	uint16_t nb_pkts;
 	/* cb out */
 	uint16_t nb_rx;
 };
-- 
2.17.1
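
A note on the rearm_data technique used in this patch: several small
per-packet mbuf fields (data_off, refcnt, nb_segs, port) share one
64-bit region of struct rte_mbuf, so a driver can precompute their
initial values once at queue setup and then reinitialize all of them
with a single 8-byte store per packet. The sketch below is a minimal,
self-contained illustration of that idea in plain C; the pkt_meta
struct and make_rearm_word() are hypothetical stand-ins that only
mimic the layout of the rte_mbuf rearm region. In the patch itself the
equivalent template is built in ionic_rx_rearm_data() and applied per
packet via rxm->rearm_data[0] = rxq->rearm_data.

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the rearm region of struct rte_mbuf. */
struct pkt_meta {
	union {
		uint64_t rearm_word;	/* one store covers all four fields */
		struct {
			uint16_t data_off;
			uint16_t refcnt;
			uint16_t nb_segs;
			uint16_t port;
		};
	};
};

/* Computed once at queue setup, like ionic_rx_rearm_data() above. */
static uint64_t
make_rearm_word(uint16_t port_id, uint16_t headroom)
{
	struct pkt_meta m;

	memset(&m, 0, sizeof(m));
	m.data_off = headroom;
	m.refcnt = 1;
	m.nb_segs = 1;
	m.port = port_id;

	return m.rearm_word;
}

int
main(void)
{
	uint64_t rearm = make_rearm_word(3, 128);
	struct pkt_meta pkts[4];
	int i;

	/*
	 * Per-packet clean stage: one aligned 8-byte store replaces
	 * four separate half-word stores in the hot loop.
	 */
	for (i = 0; i < 4; i++)
		pkts[i].rearm_word = rearm;

	assert(pkts[0].port == 3 && pkts[0].refcnt == 1);
	printf("rearm word: 0x%016" PRIx64 "\n", rearm);
	return 0;
}

The point of the pattern is that the field initialization moves out of
the per-packet path: the clean stage touches one cache-resident word
instead of several narrow stores, which is one of the per-packet
savings this series targets.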