From: Pavan Nikhilesh
To: David Christensen, Ruifeng Wang, Bruce Richardson, Konstantin Ananyev
Cc: Pavan Nikhilesh
Subject: [PATCH v2 2/5] examples/l3fwd: split processing and send stages
Date: Fri, 2 Sep 2022 14:48:30 +0530
Message-ID: <20220902091833.9074-2-pbhagavatula@marvell.com>
In-Reply-To: <20220902091833.9074-1-pbhagavatula@marvell.com>
References: <20220829094442.3422-1-pbhagavatula@marvell.com>
 <20220902091833.9074-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh

Split packet processing from the packet send stage, as the send stage
is not common to poll and event mode.

Signed-off-by: Pavan Nikhilesh
---
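Note for reviewers (illustrative sketch only, not part of the patch): the
split lets both callers share the lookup stage while keeping their own
send stage. The wrapper names below (poll_path/event_path) are
hypothetical, and the event-mode caller is only wired up later in this
series; the sketch assumes the usual l3fwd headers and the LPM
process/send helpers introduced here.

    /* Poll mode: classify only (do_step3 == 0); the per-packet header
     * rewrite and transmit are left to the common send helper. */
    static inline void
    poll_path(int nb_rx, struct rte_mbuf **pkts, uint16_t portid,
              struct lcore_conf *qconf)
    {
            uint16_t dst_port[MAX_PKT_BURST];

            l3fwd_lpm_process_packets(nb_rx, pkts, portid, dst_port, qconf, 0);
            send_packets_multi(qconf, pkts, dst_port, nb_rx);
    }

    /* Event mode (hypothetical caller): do_step3 == 1 so the per-packet
     * header update runs during processing, and the mbufs can then be
     * handed to the eventdev Tx adapter instead of being sent here. */
    static inline void
    event_path(int nb_rx, struct rte_mbuf **pkts, uint16_t portid,
               struct lcore_conf *qconf)
    {
            uint16_t dst_port[MAX_PKT_BURST];

            l3fwd_lpm_process_packets(nb_rx, pkts, portid, dst_port, qconf, 1);
            /* ... enqueue pkts to the Tx adapter here ... */
    }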
 examples/l3fwd/l3fwd_em_hlm.h      | 39 +++++++++++++++++++-----------
 examples/l3fwd/l3fwd_lpm_altivec.h | 25 ++++++++++++++++---
 examples/l3fwd/l3fwd_lpm_neon.h    | 35 ++++++++++++++++++++-------
 examples/l3fwd/l3fwd_lpm_sse.h     | 25 ++++++++++++++++---
 4 files changed, 95 insertions(+), 29 deletions(-)

diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h
index e76f2760b0..12b997e477 100644
--- a/examples/l3fwd/l3fwd_em_hlm.h
+++ b/examples/l3fwd/l3fwd_em_hlm.h
@@ -177,16 +177,12 @@ em_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
         return portid;
 }
 
-/*
- * Buffer optimized handling of packets, invoked
- * from main_loop.
- */
 static inline void
-l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
-                uint16_t portid, struct lcore_conf *qconf)
+l3fwd_em_process_packets(int nb_rx, struct rte_mbuf **pkts_burst,
+                         uint16_t *dst_port, uint16_t portid,
+                         struct lcore_conf *qconf, const uint8_t do_step3)
 {
         int32_t i, j, pos;
-        uint16_t dst_port[MAX_PKT_BURST];
 
         /*
          * Send nb_rx - nb_rx % EM_HASH_LOOKUP_COUNT packets
@@ -233,13 +229,30 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
                         dst_port[j + i] = em_get_dst_port(qconf,
                                                 pkts_burst[j + i], portid);
                 }
+
+                for (i = 0; i < EM_HASH_LOOKUP_COUNT && do_step3; i += FWDSTEP)
+                        processx4_step3(&pkts_burst[j + i], &dst_port[j + i]);
         }
 
-        for (; j < nb_rx; j++)
+        for (; j < nb_rx; j++) {
                 dst_port[j] = em_get_dst_port(qconf, pkts_burst[j], portid);
+                if (do_step3)
+                        process_packet(pkts_burst[j], &pkts_burst[j]->port);
+        }
+}
 
-        send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
+/*
+ * Buffer optimized handling of packets, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint16_t portid,
+                      struct lcore_conf *qconf)
+{
+        uint16_t dst_port[MAX_PKT_BURST];
+
+        l3fwd_em_process_packets(nb_rx, pkts_burst, dst_port, portid, qconf, 0);
+        send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 }
 
 /*
@@ -260,11 +273,8 @@ l3fwd_em_process_events(int nb_rx, struct rte_event **ev,
          */
         int32_t n = RTE_ALIGN_FLOOR(nb_rx, EM_HASH_LOOKUP_COUNT);
 
-        for (j = 0; j < EM_HASH_LOOKUP_COUNT && j < nb_rx; j++) {
+        for (j = 0; j < nb_rx; j++)
                 pkts_burst[j] = ev[j]->mbuf;
-                rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[j],
-                                               struct rte_ether_hdr *) + 1);
-        }
 
         for (j = 0; j < n; j += EM_HASH_LOOKUP_COUNT) {
 
@@ -305,7 +315,8 @@ l3fwd_em_process_events(int nb_rx, struct rte_event **ev,
                         }
                         continue;
                 }
-                processx4_step3(&pkts_burst[j], &dst_port[j]);
+                for (i = 0; i < EM_HASH_LOOKUP_COUNT; i += FWDSTEP)
+                        processx4_step3(&pkts_burst[j + i], &dst_port[j + i]);
 
                 for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
                         pkts_burst[j + i]->port = dst_port[j + i];
diff --git a/examples/l3fwd/l3fwd_lpm_altivec.h b/examples/l3fwd/l3fwd_lpm_altivec.h
index 0c6852a7bb..adb82f1478 100644
--- a/examples/l3fwd/l3fwd_lpm_altivec.h
+++ b/examples/l3fwd/l3fwd_lpm_altivec.h
@@ -96,11 +96,11 @@ processx4_step2(const struct lcore_conf *qconf,
  * from main_loop.
  */
 static inline void
-l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
-        uint8_t portid, struct lcore_conf *qconf)
+l3fwd_lpm_process_packets(int nb_rx, struct rte_mbuf **pkts_burst,
+                          uint8_t portid, uint16_t *dst_port,
+                          struct lcore_conf *qconf, const uint8_t do_step3)
 {
         int32_t j;
-        uint16_t dst_port[MAX_PKT_BURST];
         __vector unsigned int dip[MAX_PKT_BURST / FWDSTEP];
         uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
         const int32_t k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
@@ -114,22 +114,41 @@ l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
                                 ipv4_flag[j / FWDSTEP],
                                 portid, &pkts_burst[j],
                                 &dst_port[j]);
 
+        if (do_step3)
+                for (j = 0; j != k; j += FWDSTEP)
+                        processx4_step3(&pkts_burst[j], &dst_port[j]);
+
         /* Classify last up to 3 packets one by one */
         switch (nb_rx % FWDSTEP) {
         case 3:
                 dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j], portid);
+                if (do_step3)
+                        process_packet(pkts_burst[j], &dst_port[j]);
                 j++;
                 /* fall-through */
         case 2:
                 dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j], portid);
+                if (do_step3)
+                        process_packet(pkts_burst[j], &dst_port[j]);
                 j++;
                 /* fall-through */
         case 1:
                 dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j], portid);
+                if (do_step3)
+                        process_packet(pkts_burst[j], &dst_port[j]);
                 j++;
                 /* fall-through */
         }
+}
+
+static inline void
+l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint8_t portid,
+                       struct lcore_conf *qconf)
+{
+        uint16_t dst_port[MAX_PKT_BURST];
+
+        l3fwd_lpm_process_packets(nb_rx, pkts_burst, portid, dst_port, qconf,
+                                  0);
 
         send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 }
diff --git a/examples/l3fwd/l3fwd_lpm_neon.h b/examples/l3fwd/l3fwd_lpm_neon.h
index 78ee83b76c..2a68c4c15e 100644
--- a/examples/l3fwd/l3fwd_lpm_neon.h
+++ b/examples/l3fwd/l3fwd_lpm_neon.h
@@ -80,16 +80,12 @@ processx4_step2(const struct lcore_conf *qconf,
         }
 }
 
-/*
- * Buffer optimized handling of packets, invoked
- * from main_loop.
- */
 static inline void
-l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
-        uint16_t portid, struct lcore_conf *qconf)
+l3fwd_lpm_process_packets(int nb_rx, struct rte_mbuf **pkts_burst,
+                          uint16_t portid, uint16_t *dst_port,
+                          struct lcore_conf *qconf, const uint8_t do_step3)
 {
         int32_t i = 0, j = 0;
-        uint16_t dst_port[MAX_PKT_BURST];
         int32x4_t dip;
         uint32_t ipv4_flag;
         const int32_t k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
@@ -100,7 +96,6 @@ l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
                         rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[i],
                                                         void *));
                 }
-
                 for (j = 0; j != k - FWDSTEP; j += FWDSTEP) {
                         for (i = 0; i < FWDSTEP; i++) {
                                 rte_prefetch0(rte_pktmbuf_mtod(
@@ -111,11 +106,15 @@ l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
                         processx4_step1(&pkts_burst[j], &dip, &ipv4_flag);
                         processx4_step2(qconf, dip, ipv4_flag, portid,
                                         &pkts_burst[j], &dst_port[j]);
+                        if (do_step3)
+                                processx4_step3(&pkts_burst[j], &dst_port[j]);
                 }
 
                 processx4_step1(&pkts_burst[j], &dip, &ipv4_flag);
                 processx4_step2(qconf, dip, ipv4_flag, portid, &pkts_burst[j],
                                 &dst_port[j]);
+                if (do_step3)
+                        processx4_step3(&pkts_burst[j], &dst_port[j]);
                 j += FWDSTEP;
         }
 
@@ -138,26 +137,44 @@ l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
                                                         void *));
                         j++;
                 }
-
                 j -= m;
                 /* Classify last up to 3 packets one by one */
                 switch (m) {
                 case 3:
                         dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j],
                                                        portid);
+                        if (do_step3)
+                                process_packet(pkts_burst[j], &dst_port[j]);
                         j++;
                         /* fallthrough */
                 case 2:
                         dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j],
                                                        portid);
+                        if (do_step3)
+                                process_packet(pkts_burst[j], &dst_port[j]);
                         j++;
                         /* fallthrough */
                 case 1:
                         dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j],
                                                        portid);
+                        if (do_step3)
+                                process_packet(pkts_burst[j], &dst_port[j]);
                 }
         }
+}
+
+/*
+ * Buffer optimized handling of packets, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint16_t portid,
+                       struct lcore_conf *qconf)
+{
+        uint16_t dst_port[MAX_PKT_BURST];
+
+        l3fwd_lpm_process_packets(nb_rx, pkts_burst, portid, dst_port, qconf,
+                                  0);
 
         send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 }
diff --git a/examples/l3fwd/l3fwd_lpm_sse.h b/examples/l3fwd/l3fwd_lpm_sse.h
index 3f637a23d1..db15030320 100644
--- a/examples/l3fwd/l3fwd_lpm_sse.h
+++ b/examples/l3fwd/l3fwd_lpm_sse.h
@@ -82,11 +82,11 @@ processx4_step2(const struct lcore_conf *qconf,
  * from main_loop.
  */
 static inline void
-l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
-        uint16_t portid, struct lcore_conf *qconf)
+l3fwd_lpm_process_packets(int nb_rx, struct rte_mbuf **pkts_burst,
+                          uint16_t portid, uint16_t *dst_port,
+                          struct lcore_conf *qconf, const uint8_t do_step3)
 {
         int32_t j;
-        uint16_t dst_port[MAX_PKT_BURST];
         __m128i dip[MAX_PKT_BURST / FWDSTEP];
         uint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];
         const int32_t k = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);
@@ -99,21 +99,40 @@ l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
                 processx4_step2(qconf, dip[j / FWDSTEP],
                                 ipv4_flag[j / FWDSTEP],
                                 portid, &pkts_burst[j], &dst_port[j]);
 
+        if (do_step3)
+                for (j = 0; j != k; j += FWDSTEP)
+                        processx4_step3(&pkts_burst[j], &dst_port[j]);
+
         /* Classify last up to 3 packets one by one */
         switch (nb_rx % FWDSTEP) {
         case 3:
                 dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j], portid);
+                if (do_step3)
+                        process_packet(pkts_burst[j], &dst_port[j]);
                 j++;
                 /* fall-through */
         case 2:
                 dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j], portid);
+                if (do_step3)
+                        process_packet(pkts_burst[j], &dst_port[j]);
                 j++;
                 /* fall-through */
         case 1:
                 dst_port[j] = lpm_get_dst_port(qconf, pkts_burst[j], portid);
+                if (do_step3)
+                        process_packet(pkts_burst[j], &dst_port[j]);
                 j++;
         }
+}
+
+static inline void
+l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint16_t portid,
+                       struct lcore_conf *qconf)
+{
+        uint16_t dst_port[MAX_PKT_BURST];
+
+        l3fwd_lpm_process_packets(nb_rx, pkts_burst, portid, dst_port, qconf,
+                                  0);
 
         send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 }
-- 
2.25.1