From: Xueming Li
Cc: Xiaoyu Min, Jerin Jacob, Ferruh Yigit, Andrew Rybchenko, Viacheslav Ovsiienko, Thomas Monjalon, Lior Margalit, Xiaoyun Li
Date: Fri, 17 Sep 2021 16:01:19 +0800
Message-ID: <20210917080121.329373-7-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210917080121.329373-1-xuemingl@nvidia.com>
References: <20210727034204.20649-1-xuemingl@nvidia.com> <20210917080121.329373-1-xuemingl@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 6/8] app/testpmd: add common fwd wrapper
List-Id: DPDK patches and discussions

From: Xiaoyu Min

Added a common forwarding wrapper function for all fwd engines, which does the following steps they all share:

- record core cycles
- call rte_eth_rx_burst(..., nb_pkt_per_burst)
- update the received-packet count
- hand the received mbufs to a per-engine callback function

For better performance, the function is defined as a macro.
Signed-off-by: Xiaoyu Min Signed-off-by: Xueming Li --- app/test-pmd/5tswap.c | 25 +++++-------------------- app/test-pmd/csumonly.c | 25 ++++++------------------- app/test-pmd/flowgen.c | 20 +++++--------------- app/test-pmd/icmpecho.c | 30 ++++++++---------------------- app/test-pmd/iofwd.c | 24 +++++------------------- app/test-pmd/macfwd.c | 24 +++++------------------- app/test-pmd/macswap.c | 23 +++++------------------ app/test-pmd/rxonly.c | 32 ++++++++------------------------ app/test-pmd/testpmd.h | 19 +++++++++++++++++++ 9 files changed, 66 insertions(+), 156 deletions(-) diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c index e8cef9623b..8fe940294f 100644 --- a/app/test-pmd/5tswap.c +++ b/app/test-pmd/5tswap.c @@ -82,18 +82,16 @@ swap_udp(struct rte_udp_hdr *udp_hdr) * Parses each layer and swaps it. When the next layer doesn't match it stops. */ static void -pkt_burst_5tuple_swap(struct fwd_stream *fs) +_5tuple_swap_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_port *txp; struct rte_mbuf *mb; uint16_t next_proto; uint64_t ol_flags; uint16_t proto; - uint16_t nb_rx; uint16_t nb_tx; uint32_t retry; - int i; union { struct rte_ether_hdr *eth; @@ -105,20 +103,6 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs) uint8_t *byte; } h; - uint64_t start_tsc = 0; - - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets and forward them. 
- */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; txp = &ports[fs->tx_port]; ol_flags = ol_flags_init(txp->dev_conf.txmode.offloads); vlan_qinq_set(pkts_burst, nb_rx, ol_flags, @@ -182,12 +166,13 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(_5tuple_swap_stream); + struct fwd_engine five_tuple_swap_fwd_engine = { .fwd_mode_name = "5tswap", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_5tuple_swap, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c index 38cc256533..9bfc7d10dc 100644 --- a/app/test-pmd/csumonly.c +++ b/app/test-pmd/csumonly.c @@ -763,7 +763,7 @@ pkt_copy_split(const struct rte_mbuf *pkt) } /* - * Receive a burst of packets, and for each packet: + * For each packet in received mbuf: * - parse packet, and try to recognize a supported packet type (1) * - if it's not a supported packet type, don't touch the packet, else: * - reprocess the checksum of all supported layers. This is done in SW @@ -792,9 +792,9 @@ pkt_copy_split(const struct rte_mbuf *pkt) * OUTER_IP is only useful for tunnel packets. 
*/ static void -pkt_burst_checksum_forward(struct fwd_stream *fs) +checksum_forward_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_mbuf *gso_segments[GSO_MAX_PKT_BURST]; struct rte_gso_ctx *gso_ctx; struct rte_mbuf **tx_pkts_burst; @@ -805,7 +805,6 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) void **gro_ctx; uint16_t gro_pkts_num; uint8_t gro_enable; - uint16_t nb_rx; uint16_t nb_tx; uint16_t nb_prep; uint16_t i; @@ -820,18 +819,6 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) uint16_t nb_segments = 0; int ret; - uint64_t start_tsc = 0; - - get_start_cycles(&start_tsc); - - /* receive a burst of packet */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; rx_bad_ip_csum = 0; rx_bad_l4_csum = 0; rx_bad_outer_l4_csum = 0; @@ -1138,13 +1125,13 @@ pkt_burst_checksum_forward(struct fwd_stream *fs) rte_pktmbuf_free(tx_pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(checksum_forward_stream); + struct fwd_engine csum_fwd_engine = { .fwd_mode_name = "csum", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_checksum_forward, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c index 0d3664a64d..aa45948b4c 100644 --- a/app/test-pmd/flowgen.c +++ b/app/test-pmd/flowgen.c @@ -61,10 +61,10 @@ RTE_DEFINE_PER_LCORE(int, _next_flow); * still do so in order to maintain traffic statistics. 
*/ static void -pkt_burst_flow_gen(struct fwd_stream *fs) +flow_gen_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { unsigned pkt_size = tx_pkt_length - 4; /* Adjust FCS */ - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_mempool *mbp; struct rte_mbuf *pkt = NULL; struct rte_ether_hdr *eth_hdr; @@ -72,7 +72,6 @@ pkt_burst_flow_gen(struct fwd_stream *fs) struct rte_udp_hdr *udp_hdr; uint16_t vlan_tci, vlan_tci_outer; uint64_t ol_flags = 0; - uint16_t nb_rx; uint16_t nb_tx; uint16_t nb_dropped; uint16_t nb_pkt; @@ -80,17 +79,9 @@ pkt_burst_flow_gen(struct fwd_stream *fs) uint16_t i; uint32_t retry; uint64_t tx_offloads; - uint64_t start_tsc = 0; int next_flow = RTE_PER_LCORE(_next_flow); - get_start_cycles(&start_tsc); - - /* Receive a burst of packets and discard them. */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); inc_rx_burst_stats(fs, nb_rx); - fs->rx_packets += nb_rx; - for (i = 0; i < nb_rx; i++) rte_pktmbuf_free(pkts_burst[i]); @@ -195,12 +186,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_pkt); } - RTE_PER_LCORE(_next_flow) = next_flow; - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(flow_gen_stream); + static void flowgen_begin(portid_t pi) { @@ -211,5 +201,5 @@ struct fwd_engine flow_gen_engine = { .fwd_mode_name = "flowgen", .port_fwd_begin = flowgen_begin, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_flow_gen, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c index 8948f28eb5..467ba330aa 100644 --- a/app/test-pmd/icmpecho.c +++ b/app/test-pmd/icmpecho.c @@ -267,13 +267,13 @@ ipv4_hdr_cksum(struct rte_ipv4_hdr *ip_h) (((rte_be_to_cpu_32((ipv4_addr)) >> 24) & 0x000000FF) == 0xE0) /* - * Receive a burst of packets, lookup for ICMP echo requests, and, if any, - * send back ICMP echo replies. 
+ * Lookup for ICMP echo requests in received mbuf and, if any, + * send back ICMP echo replies to corresponding Tx port. */ static void -reply_to_icmp_echo_rqsts(struct fwd_stream *fs) +reply_to_icmp_echo_rqsts_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_mbuf *pkt; struct rte_ether_hdr *eth_h; struct rte_vlan_hdr *vlan_h; @@ -283,7 +283,6 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) struct rte_ether_addr eth_addr; uint32_t retry; uint32_t ip_addr; - uint16_t nb_rx; uint16_t nb_tx; uint16_t nb_replies; uint16_t eth_type; @@ -291,22 +290,9 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) uint16_t arp_op; uint16_t arp_pro; uint32_t cksum; - uint8_t i; + uint16_t i; int l2_len; - uint64_t start_tsc = 0; - get_start_cycles(&start_tsc); - - /* - * First, receive a burst of packets. - */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; nb_replies = 0; for (i = 0; i < nb_rx; i++) { if (likely(i < nb_rx - 1)) @@ -509,13 +495,13 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs) } while (++nb_tx < nb_replies); } } - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(reply_to_icmp_echo_rqsts_stream); + struct fwd_engine icmp_echo_engine = { .fwd_mode_name = "icmpecho", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = reply_to_icmp_echo_rqsts, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/iofwd.c b/app/test-pmd/iofwd.c index 83d098adcb..dbd78167b4 100644 --- a/app/test-pmd/iofwd.c +++ b/app/test-pmd/iofwd.c @@ -44,25 +44,11 @@ * to packets data. 
*/ static void -pkt_burst_io_forward(struct fwd_stream *fs) +io_forward_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; - uint16_t nb_rx; uint16_t nb_tx; uint32_t retry; - uint64_t start_tsc = 0; - - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets and forward them. - */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, - pkts_burst, nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - fs->rx_packets += nb_rx; nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx); @@ -85,13 +71,13 @@ pkt_burst_io_forward(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(io_forward_stream); + struct fwd_engine io_fwd_engine = { .fwd_mode_name = "io", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_io_forward, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c index 0568ea794d..b0728c7597 100644 --- a/app/test-pmd/macfwd.c +++ b/app/test-pmd/macfwd.c @@ -44,32 +44,18 @@ * before forwarding them. */ static void -pkt_burst_mac_forward(struct fwd_stream *fs) +mac_forward_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_port *txp; struct rte_mbuf *mb; struct rte_ether_hdr *eth_hdr; uint32_t retry; - uint16_t nb_rx; uint16_t nb_tx; uint16_t i; uint64_t ol_flags = 0; uint64_t tx_offloads; - uint64_t start_tsc = 0; - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets and forward them. 
- */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; txp = &ports[fs->tx_port]; tx_offloads = txp->dev_conf.txmode.offloads; if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) @@ -116,13 +102,13 @@ pkt_burst_mac_forward(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(mac_forward_stream); + struct fwd_engine mac_fwd_engine = { .fwd_mode_name = "mac", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_mac_forward, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c index 310bca06af..cc208944d7 100644 --- a/app/test-pmd/macswap.c +++ b/app/test-pmd/macswap.c @@ -50,27 +50,13 @@ * addresses of packets before forwarding them. */ static void -pkt_burst_mac_swap(struct fwd_stream *fs) +mac_swap_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; struct rte_port *txp; - uint16_t nb_rx; uint16_t nb_tx; uint32_t retry; - uint64_t start_tsc = 0; - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets and forward them. 
- */ - nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst, - nb_pkt_per_burst); - inc_rx_burst_stats(fs, nb_rx); - if (unlikely(nb_rx == 0)) - return; - - fs->rx_packets += nb_rx; txp = &ports[fs->tx_port]; do_macswap(pkts_burst, nb_rx, txp); @@ -95,12 +81,13 @@ pkt_burst_mac_swap(struct fwd_stream *fs) rte_pktmbuf_free(pkts_burst[nb_tx]); } while (++nb_tx < nb_rx); } - get_end_cycles(fs, start_tsc); } +PKT_BURST_FWD(mac_swap_stream); + struct fwd_engine mac_swap_engine = { .fwd_mode_name = "macswap", .port_fwd_begin = NULL, .port_fwd_end = NULL, - .packet_fwd = pkt_burst_mac_swap, + .packet_fwd = pkt_burst_fwd, }; diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c index c78fc4609a..a7354596b5 100644 --- a/app/test-pmd/rxonly.c +++ b/app/test-pmd/rxonly.c @@ -41,37 +41,21 @@ #include "testpmd.h" /* - * Received a burst of packets. + * Process a burst of received packets from same stream. */ static void -pkt_burst_receive(struct fwd_stream *fs) +rxonly_forward_stream(struct fwd_stream *fs, uint16_t nb_rx, + struct rte_mbuf **pkts_burst) { - struct rte_mbuf *pkts_burst[MAX_PKT_BURST]; - uint16_t nb_rx; - uint16_t i; - uint64_t start_tsc = 0; - - get_start_cycles(&start_tsc); - - /* - * Receive a burst of packets. 
- */
-	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue, pkts_burst,
-				 nb_pkt_per_burst);
-	inc_rx_burst_stats(fs, nb_rx);
-	if (unlikely(nb_rx == 0))
-		return;
-
-	fs->rx_packets += nb_rx;
-	for (i = 0; i < nb_rx; i++)
-		rte_pktmbuf_free(pkts_burst[i]);
-
-	get_end_cycles(fs, start_tsc);
+	RTE_SET_USED(fs);
+	rte_pktmbuf_free_bulk(pkts_burst, nb_rx);
 }
 
+PKT_BURST_FWD(rxonly_forward_stream)
+
 struct fwd_engine rx_only_engine = {
 	.fwd_mode_name  = "rxonly",
 	.port_fwd_begin = NULL,
 	.port_fwd_end   = NULL,
-	.packet_fwd     = pkt_burst_receive,
+	.packet_fwd     = pkt_burst_fwd,
 };
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f121a2da90..4792bef03b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -1028,6 +1028,25 @@ void add_tx_dynf_callback(portid_t portid);
 void remove_tx_dynf_callback(portid_t portid);
 int update_jumbo_frame_offload(portid_t portid);
 
+#define PKT_BURST_FWD(cb)					\
+static void							\
+pkt_burst_fwd(struct fwd_stream *fs)				\
+{								\
+	struct rte_mbuf *pkts_burst[nb_pkt_per_burst];		\
+	uint16_t nb_rx;						\
+	uint64_t start_tsc = 0;					\
+								\
+	get_start_cycles(&start_tsc);				\
+	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue,	\
+			pkts_burst, nb_pkt_per_burst);		\
+	inc_rx_burst_stats(fs, nb_rx);				\
+	if (unlikely(nb_rx == 0))				\
+		return;						\
+	fs->rx_packets += nb_rx;				\
+	cb(fs, nb_rx, pkts_burst);				\
+	get_end_cycles(fs, start_tsc);				\
+}
+
 /*
  * Work-around of a compilation error with ICC on invocations of the
  * rte_be_to_cpu_16() function.
-- 
2.33.0