From: "Lin, Xueqin"
To: Pavan Nikhilesh Bhagavatula, "Yigit, Ferruh"
CC: dev@dpdk.org, "Xu, Qian Q", "Li, WenjieX A", "Wang, FengqinX", "Yao, Lei A", "Wang, Yinan", Jerin Jacob Kollanukkaran, thomas@monjalon.net, arybchenko@solarflare.com, "Iremonger, Bernard", alialnu@mellanox.com, "Zhang, Qi Z"
Date: Tue, 9 Apr 2019 09:28:29 +0000
Subject: Re: [dpdk-dev] [PATCH v6 3/4] app/testpmd: move pkt prepare logic into a separate function
Message-ID: <0D300480287911409D9FF92C1FA2A3355B4D2DE1@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <20190402095255.848-3-pbhagavatula@marvell.com>
References: <20190228194128.14236-1-pbhagavatula@marvell.com> <20190402095255.848-1-pbhagavatula@marvell.com> <20190402095255.848-3-pbhagavatula@marvell.com>

Hi Nikhilesh,

This patchset impacts some of the 19.05-rc1 txonly/burst tests on Intel NICs. With txonly forwarding set, the IXIA or tester peer cannot receive the packets generated by the app. This is a high-severity issue that blocks several test cases. Detailed information is below; please check it soon.

*DPDK version: 19.05.0-rc1
*NIC hardware: Fortville_eagle/Fortville_spirit/Niantic
Environment: one NIC port connected to another NIC port, or one NIC port connected to IXIA

Test Setup
1. Bind the port to igb_uio or vfio
2. On the DUT, set up testpmd:
       ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1e -n 4 -- -i --rxq=4 --txq=4 --port-topology=loop
3. Set txonly forwarding and start testpmd:
       testpmd>set fwd txonly
       testpmd>start
4. Dump packets from the tester NIC port or IXIA; no packets are received on PORT0.
       tcpdump -i -v

Best regards,
Xueqin

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pavan Nikhilesh Bhagavatula
> Sent: Tuesday, April 2, 2019 5:54 PM
> To: Jerin Jacob Kollanukkaran; thomas@monjalon.net; arybchenko@solarflare.com; Yigit, Ferruh; Iremonger, Bernard; alialnu@mellanox.com
> Cc: dev@dpdk.org; Pavan Nikhilesh Bhagavatula
> Subject: [dpdk-dev] [PATCH v6 3/4] app/testpmd: move pkt prepare logic into a separate function
>
> From: Pavan Nikhilesh
>
> Move the packet prepare logic into a separate function so that it can be
> reused later.
>
> Signed-off-by: Pavan Nikhilesh
> ---
>  app/test-pmd/txonly.c | 163 +++++++++++++++++++++---------------------
>  1 file changed, 83 insertions(+), 80 deletions(-)
>
> diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
> index 65171c1d1..56ca0ad24 100644
> --- a/app/test-pmd/txonly.c
> +++ b/app/test-pmd/txonly.c
> @@ -148,6 +148,80 @@ setup_pkt_udp_ip_headers(struct ipv4_hdr *ip_hdr,
>  	ip_hdr->hdr_checksum = (uint16_t) ip_cksum;
>  }
>
> +static inline bool
> +pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
> +	struct ether_hdr *eth_hdr, const uint16_t vlan_tci,
> +	const uint16_t vlan_tci_outer, const uint64_t ol_flags)
> +{
> +	struct rte_mbuf *pkt_segs[RTE_MAX_SEGS_PER_PKT];
> +	uint8_t ip_var = RTE_PER_LCORE(_ip_var);
> +	struct rte_mbuf *pkt_seg;
> +	uint32_t nb_segs, pkt_len;
> +	uint8_t i;
> +
> +	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
> +		nb_segs = random() % tx_pkt_nb_segs + 1;
> +	else
> +		nb_segs = tx_pkt_nb_segs;
> +
> +	if (nb_segs > 1) {
> +		if (rte_mempool_get_bulk(mbp, (void **)pkt_segs, nb_segs))
> +			return false;
> +	}
> +
> +	rte_pktmbuf_reset_headroom(pkt);
> +	pkt->data_len = tx_pkt_seg_lengths[0];
> +	pkt->ol_flags = ol_flags;
> +	pkt->vlan_tci = vlan_tci;
> +	pkt->vlan_tci_outer = vlan_tci_outer;
> +	pkt->l2_len = sizeof(struct ether_hdr);
> +	pkt->l3_len = sizeof(struct ipv4_hdr);
> +
> +	pkt_len = pkt->data_len;
> +	pkt_seg = pkt;
> +	for (i = 1; i < nb_segs; i++) {
> +		pkt_seg->next = pkt_segs[i - 1];
> +		pkt_seg = pkt_seg->next;
> +		pkt_seg->data_len = tx_pkt_seg_lengths[i];
> +		pkt_len += pkt_seg->data_len;
> +	}
> +	pkt_seg->next = NULL; /* Last segment of packet. */
> +	/*
> +	 * Copy headers in first packet segment(s).
> +	 */
> +	copy_buf_to_pkt(eth_hdr, sizeof(eth_hdr), pkt, 0);
> +	copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
> +			sizeof(struct ether_hdr));
> +	if (txonly_multi_flow) {
> +		struct ipv4_hdr *ip_hdr;
> +		uint32_t addr;
> +
> +		ip_hdr = rte_pktmbuf_mtod_offset(pkt,
> +				struct ipv4_hdr *,
> +				sizeof(struct ether_hdr));
> +		/*
> +		 * Generate multiple flows by varying IP src addr. This
> +		 * enables packets are well distributed by RSS in
> +		 * receiver side if any and txonly mode can be a decent
> +		 * packet generator for developer's quick performance
> +		 * regression test.
> +		 */
> +		addr = (IP_DST_ADDR | (ip_var++ << 8)) + rte_lcore_id();
> +		ip_hdr->src_addr = rte_cpu_to_be_32(addr);
> +	}
> +	copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
> +			sizeof(struct ether_hdr) +
> +			sizeof(struct ipv4_hdr));
> +	/*
> +	 * Complete first mbuf of packet and append it to the
> +	 * burst of packets to be transmitted.
> +	 */
> +	pkt->nb_segs = nb_segs;
> +	pkt->pkt_len = pkt_len;
> +
> +	return true;
> +}
> +
>  /*
>   * Transmit a burst of multi-segments packets.
>   */
> @@ -155,10 +229,8 @@ static void
>  pkt_burst_transmit(struct fwd_stream *fs)
>  {
>  	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
> -	struct rte_mbuf *pkt_segs[RTE_MAX_SEGS_PER_PKT];
>  	struct rte_port *txp;
>  	struct rte_mbuf *pkt;
> -	struct rte_mbuf *pkt_seg;
>  	struct rte_mempool *mbp;
>  	struct ether_hdr eth_hdr;
>  	uint16_t nb_tx;
> @@ -166,15 +238,12 @@ pkt_burst_transmit(struct fwd_stream *fs)
>  	uint16_t vlan_tci, vlan_tci_outer;
>  	uint32_t retry;
>  	uint64_t ol_flags = 0;
> -	uint8_t ip_var = RTE_PER_LCORE(_ip_var);
> -	uint8_t i;
>  	uint64_t tx_offloads;
>  #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
>  	uint64_t start_tsc;
>  	uint64_t end_tsc;
>  	uint64_t core_cycles;
>  #endif
> -	uint32_t nb_segs, pkt_len;
>
>  #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
>  	start_tsc = rte_rdtsc();
> @@ -201,85 +270,19 @@ pkt_burst_transmit(struct fwd_stream *fs)
>
>  	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
>  		pkt = rte_mbuf_raw_alloc(mbp);
> -		if (pkt == NULL) {
> -		nomore_mbuf:
> -			if (nb_pkt == 0)
> -				return;
> +		if (pkt == NULL)
> +			break;
> +		if (unlikely(!pkt_burst_prepare(pkt, mbp, &eth_hdr, vlan_tci,
> +				vlan_tci_outer, ol_flags))) {
> +			rte_pktmbuf_free(pkt);
>  			break;
>  		}
> -
> -		/*
> -		 * Using raw alloc is good to improve performance,
> -		 * but some consumers may use the headroom and so
> -		 * decrement data_off. We need to make sure it is
> -		 * reset to default value.
> -		 */
> -		rte_pktmbuf_reset_headroom(pkt);
> -		pkt->data_len = tx_pkt_seg_lengths[0];
> -		pkt_seg = pkt;
> -
> -		if (tx_pkt_split == TX_PKT_SPLIT_RND)
> -			nb_segs = random() % tx_pkt_nb_segs + 1;
> -		else
> -			nb_segs = tx_pkt_nb_segs;
> -
> -		if (nb_segs > 1) {
> -			if (rte_mempool_get_bulk(mbp, (void **)pkt_segs,
> -						nb_segs)) {
> -				rte_pktmbuf_free(pkt);
> -				goto nomore_mbuf;
> -			}
> -		}
> -
> -		pkt_len = pkt->data_len;
> -		for (i = 1; i < nb_segs; i++) {
> -			pkt_seg->next = pkt_segs[i - 1];
> -			pkt_seg = pkt_seg->next;
> -			pkt_seg->data_len = tx_pkt_seg_lengths[i];
> -			pkt_len += pkt_seg->data_len;
> -		}
> -		pkt_seg->next = NULL; /* Last segment of packet. */
> -
> -		/*
> -		 * Copy headers in first packet segment(s).
> -		 */
> -		copy_buf_to_pkt(&eth_hdr, sizeof(eth_hdr), pkt, 0);
> -		copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
> -				sizeof(struct ether_hdr));
> -		if (txonly_multi_flow) {
> -			struct ipv4_hdr *ip_hdr;
> -			uint32_t addr;
> -
> -			ip_hdr = rte_pktmbuf_mtod_offset(pkt,
> -					struct ipv4_hdr *,
> -					sizeof(struct ether_hdr));
> -			/*
> -			 * Generate multiple flows by varying IP src addr. This
> -			 * enables packets are well distributed by RSS in
> -			 * receiver side if any and txonly mode can be a decent
> -			 * packet generator for developer's quick performance
> -			 * regression test.
> -			 */
> -			addr = (IP_DST_ADDR | (ip_var++ << 8)) + rte_lcore_id();
> -			ip_hdr->src_addr = rte_cpu_to_be_32(addr);
> -		}
> -		copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
> -				sizeof(struct ether_hdr) +
> -				sizeof(struct ipv4_hdr));
> -
> -		/*
> -		 * Complete first mbuf of packet and append it to the
> -		 * burst of packets to be transmitted.
> -		 */
> -		pkt->nb_segs = nb_segs;
> -		pkt->pkt_len = pkt_len;
> -		pkt->ol_flags = ol_flags;
> -		pkt->vlan_tci = vlan_tci;
> -		pkt->vlan_tci_outer = vlan_tci_outer;
> -		pkt->l2_len = sizeof(struct ether_hdr);
> -		pkt->l3_len = sizeof(struct ipv4_hdr);
>  		pkts_burst[nb_pkt] = pkt;
>  	}
> +
> +	if (nb_pkt == 0)
> +		return;
> +
>  	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst,
>  			nb_pkt);
>  	/*
>  	 * Retry if necessary
> --
> 2.21.0