From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>, "thomas@monjalon.net"
 <thomas@monjalon.net>, "arybchenko@solarflare.com"
 <arybchenko@solarflare.com>, "ferruh.yigit@intel.com"
 <ferruh.yigit@intel.com>, "bernard.iremonger@intel.com"
 <bernard.iremonger@intel.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, Pavan Nikhilesh Bhagavatula
 <pbhagavatula@marvell.com>
Date: Sun, 31 Mar 2019 13:14:20 +0000
Message-ID: <20190331131341.12924-1-pbhagavatula@marvell.com>
References: <20190228194128.14236-1-pbhagavatula@marvell.com>
In-Reply-To: <20190228194128.14236-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v5 1/2] app/testpmd: optimize testpmd txonly mode

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Optimize testpmd txonly mode by:
1. Moving the per-packet Ethernet header copy above the loop.
2. Using bulk ops for allocating segments instead of an inner loop for
every segment.

Also, move the packet prepare logic into a separate function so that it
can be reused later.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 v5 Changes
 - Remove unnecessary change to struct rte_port *txp (movement). (Bernard)

 v4 Changes:
 - Fix packet len calculation.

 v3 Changes:
 - Split the patches for easier review. (Thomas)
 - Remove unnecessary assignments to 0. (Bernard)

 v2 Changes:
 - Use bulk ops for fetching segments. (Andrew Rybchenko)
 - Fallback to rte_mbuf_raw_alloc if bulk get fails. (Andrew Rybchenko)
 - Fix mbufs not being freed when there are no more mbufs available for
 segments. (Andrew Rybchenko)

 app/test-pmd/txonly.c | 139 +++++++++++++++++++++++-------------------
 1 file changed, 76 insertions(+), 63 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 1f08b6ed3..9c0147089 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -147,6 +147,63 @@ setup_pkt_udp_ip_headers(struct ipv4_hdr *ip_hdr,
 	ip_hdr->hdr_checksum = (uint16_t) ip_cksum;
 }

+static inline bool
+pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
+		struct ether_hdr *eth_hdr, const uint16_t vlan_tci,
+		const uint16_t vlan_tci_outer, const uint64_t ol_flags)
+{
+	struct rte_mbuf *pkt_segs[RTE_MAX_SEGS_PER_PKT];
+	struct rte_mbuf *pkt_seg;
+	uint32_t nb_segs, pkt_len;
+	uint8_t i;
+
+	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
+		nb_segs = random() % tx_pkt_nb_segs + 1;
+	else
+		nb_segs = tx_pkt_nb_segs;
+
+	if (nb_segs > 1) {
+		if (rte_mempool_get_bulk(mbp, (void **)pkt_segs, nb_segs - 1))
+			return false;
+	}
+
+	rte_pktmbuf_reset_headroom(pkt);
+	pkt->data_len = tx_pkt_seg_lengths[0];
+	pkt->ol_flags = ol_flags;
+	pkt->vlan_tci = vlan_tci;
+	pkt->vlan_tci_outer = vlan_tci_outer;
+	pkt->l2_len = sizeof(struct ether_hdr);
+	pkt->l3_len = sizeof(struct ipv4_hdr);
+
+	pkt_len = pkt->data_len;
+	pkt_seg = pkt;
+	for (i = 1; i < nb_segs; i++) {
+		pkt_seg->next = pkt_segs[i - 1];
+		pkt_seg = pkt_seg->next;
+		pkt_seg->data_len = tx_pkt_seg_lengths[i];
+		pkt_len += pkt_seg->data_len;
+	}
+	pkt_seg->next = NULL; /* Last segment of packet. */
+	/*
+	 * Copy headers in first packet segment(s).
+	 */
+	copy_buf_to_pkt(eth_hdr, sizeof(*eth_hdr), pkt, 0);
+	copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
+			sizeof(struct ether_hdr));
+	copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
+			sizeof(struct ether_hdr) +
+			sizeof(struct ipv4_hdr));
+
+	/*
+	 * Complete first mbuf of packet and append it to the
+	 * burst of packets to be transmitted.
+	 */
+	pkt->nb_segs = nb_segs;
+	pkt->pkt_len = pkt_len;
+
+	return true;
+}
+
 /*
  * Transmit a burst of multi-segments packets.
  */
@@ -156,7 +213,6 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
 	struct rte_port *txp;
 	struct rte_mbuf *pkt;
-	struct rte_mbuf *pkt_seg;
 	struct rte_mempool *mbp;
 	struct ether_hdr eth_hdr;
 	uint16_t nb_tx;
@@ -164,14 +220,12 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	uint16_t vlan_tci, vlan_tci_outer;
 	uint32_t retry;
 	uint64_t ol_flags = 0;
-	uint8_t  i;
 	uint64_t tx_offloads;
 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	uint64_t start_tsc;
 	uint64_t end_tsc;
 	uint64_t core_cycles;
 #endif
-	uint32_t nb_segs, pkt_len;

 #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
 	start_tsc = rte_rdtsc();
@@ -188,72 +242,31 @@ pkt_burst_transmit(struct fwd_stream *fs)
 		ol_flags |= PKT_TX_QINQ_PKT;
 	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
+
+	/*
+	 * Initialize Ethernet header.
+	 */
+	ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.d_addr);
+	ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
+	eth_hdr.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
+
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
 		pkt = rte_mbuf_raw_alloc(mbp);
-		if (pkt == NULL) {
-		nomore_mbuf:
-			if (nb_pkt == 0)
-				return;
+		if (pkt == NULL)
+			break;
+		if (unlikely(!pkt_burst_prepare(pkt, mbp,
+						&eth_hdr, vlan_tci,
+						vlan_tci_outer,
+						ol_flags))) {
+			rte_mempool_put(mbp, pkt);
 			break;
 		}
-
-		/*
-		 * Using raw alloc is good to improve performance,
-		 * but some consumers may use the headroom and so
-		 * decrement data_off. We need to make sure it is
-		 * reset to default value.
-		 */
-		rte_pktmbuf_reset_headroom(pkt);
-		pkt->data_len = tx_pkt_seg_lengths[0];
-		pkt_seg = pkt;
-		if (tx_pkt_split == TX_PKT_SPLIT_RND)
-			nb_segs = random() % tx_pkt_nb_segs + 1;
-		else
-			nb_segs = tx_pkt_nb_segs;
-		pkt_len = pkt->data_len;
-		for (i =3D 1; i < nb_segs; i++) {
-			pkt_seg->next = rte_mbuf_raw_alloc(mbp);
-			if (pkt_seg->next == NULL) {
-				pkt->nb_segs = i;
-				rte_pktmbuf_free(pkt);
-				goto nomore_mbuf;
-			}
-			pkt_seg = pkt_seg->next;
-			pkt_seg->data_len = tx_pkt_seg_lengths[i];
-			pkt_len += pkt_seg->data_len;
-		}
-		pkt_seg->next = NULL; /* Last segment of packet. */
-
-		/*
-		 * Initialize Ethernet header.
-		 */
-		ether_addr_copy(&peer_eth_addrs[fs->peer_addr],&eth_hdr.d_addr);
-		ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
-		eth_hdr.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
-
-		/*
-		 * Copy headers in first packet segment(s).
-		 */
-		copy_buf_to_pkt(&eth_hdr, sizeof(eth_hdr), pkt, 0);
-		copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
-				sizeof(struct ether_hdr));
-		copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
-				sizeof(struct ether_hdr) +
-				sizeof(struct ipv4_hdr));
-
-		/*
-		 * Complete first mbuf of packet and append it to the
-		 * burst of packets to be transmitted.
-		 */
-		pkt->nb_segs = nb_segs;
-		pkt->pkt_len = pkt_len;
-		pkt->ol_flags = ol_flags;
-		pkt->vlan_tci = vlan_tci;
-		pkt->vlan_tci_outer = vlan_tci_outer;
-		pkt->l2_len = sizeof(struct ether_hdr);
-		pkt->l3_len = sizeof(struct ipv4_hdr);
 		pkts_burst[nb_pkt] = pkt;
 	}
+
+	if (nb_pkt == 0)
+		return;
+
 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
 	/*
 	 * Retry if necessary
--
2.21.0
