From: "Lin, Xueqin" <xueqin.lin@intel.com>
To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>,
"Yigit, Ferruh" <ferruh.yigit@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "Xu, Qian Q" <qian.q.xu@intel.com>,
"Li, WenjieX A" <wenjiex.a.li@intel.com>,
"Wang, FengqinX" <fengqinx.wang@intel.com>,
"Yao, Lei A" <lei.a.yao@intel.com>,
"Wang, Yinan" <yinan.wang@intel.com>,
Jerin Jacob Kollanukkaran <jerinj@marvell.com>,
"thomas@monjalon.net" <thomas@monjalon.net>,
"arybchenko@solarflare.com" <arybchenko@solarflare.com>,
"Iremonger, Bernard" <bernard.iremonger@intel.com>,
"alialnu@mellanox.com" <alialnu@mellanox.com>,
"Zhang, Qi Z" <qi.z.zhang@intel.com>
Subject: Re: [dpdk-dev] [PATCH v6 3/4] app/testpmd: move pkt prepare logic into a separate function
Date: Tue, 9 Apr 2019 09:28:29 +0000 [thread overview]
Message-ID: <0D300480287911409D9FF92C1FA2A3355B4D2DE1@SHSMSX104.ccr.corp.intel.com> (raw)
In-Reply-To: <20190402095255.848-3-pbhagavatula@marvell.com>
Hi Nikhilesh,
This patchset impacts some of the 19.05-rc1 txonly/burst tests on Intel NICs. With txonly forwarding set, the IXIA or tester peer port cannot receive any of the packets generated and sent by the app.
This is a high-severity issue and blocks several test cases. Detailed information is below; please check it soon.
*DPDK version: 19.05.0-rc1
*NIC hardware: Fortville_eagle/Fortville_spirit/Niantic
Environment: one NIC port connected to another NIC port, or one NIC port connected to IXIA
Test Setup
1. Bind the port to igb_uio or vfio
2. On DUT, setup testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1e -n 4 -- -i --rxq=4 --txq=4 --port-topology=loop
3. Set txonly forward, start testpmd
testpmd>set fwd txonly
testpmd>start
4. Dump packets on the tester NIC port or IXIA; no packets are received on PORT0.
tcpdump -i <tester_interface> -v
Best regards,
Xueqin
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pavan Nikhilesh
> Bhagavatula
> Sent: Tuesday, April 2, 2019 5:54 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> thomas@monjalon.net; arybchenko@solarflare.com; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>; alialnu@mellanox.com
> Cc: dev@dpdk.org; Pavan Nikhilesh Bhagavatula
> <pbhagavatula@marvell.com>
> Subject: [dpdk-dev] [PATCH v6 3/4] app/testpmd: move pkt prepare logic
> into a separate function
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Move the packet prepare logic into a separate function so that it can be
> reused later.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> app/test-pmd/txonly.c | 163 +++++++++++++++++++++---------------------
> 1 file changed, 83 insertions(+), 80 deletions(-)
>
> diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
> index 65171c1d1..56ca0ad24 100644
> --- a/app/test-pmd/txonly.c
> +++ b/app/test-pmd/txonly.c
> @@ -148,6 +148,80 @@ setup_pkt_udp_ip_headers(struct ipv4_hdr *ip_hdr,
> ip_hdr->hdr_checksum = (uint16_t) ip_cksum; }
>
> +static inline bool
> +pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
> + struct ether_hdr *eth_hdr, const uint16_t vlan_tci,
> + const uint16_t vlan_tci_outer, const uint64_t ol_flags) {
> + struct rte_mbuf *pkt_segs[RTE_MAX_SEGS_PER_PKT];
> + uint8_t ip_var = RTE_PER_LCORE(_ip_var);
> + struct rte_mbuf *pkt_seg;
> + uint32_t nb_segs, pkt_len;
> + uint8_t i;
> +
> + if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
> + nb_segs = random() % tx_pkt_nb_segs + 1;
> + else
> + nb_segs = tx_pkt_nb_segs;
> +
> + if (nb_segs > 1) {
> + if (rte_mempool_get_bulk(mbp, (void **)pkt_segs, nb_segs))
> + return false;
> + }
> +
> + rte_pktmbuf_reset_headroom(pkt);
> + pkt->data_len = tx_pkt_seg_lengths[0];
> + pkt->ol_flags = ol_flags;
> + pkt->vlan_tci = vlan_tci;
> + pkt->vlan_tci_outer = vlan_tci_outer;
> + pkt->l2_len = sizeof(struct ether_hdr);
> + pkt->l3_len = sizeof(struct ipv4_hdr);
> +
> + pkt_len = pkt->data_len;
> + pkt_seg = pkt;
> + for (i = 1; i < nb_segs; i++) {
> + pkt_seg->next = pkt_segs[i - 1];
> + pkt_seg = pkt_seg->next;
> + pkt_seg->data_len = tx_pkt_seg_lengths[i];
> + pkt_len += pkt_seg->data_len;
> + }
> + pkt_seg->next = NULL; /* Last segment of packet. */
> + /*
> + * Copy headers in first packet segment(s).
> + */
> + copy_buf_to_pkt(eth_hdr, sizeof(eth_hdr), pkt, 0);
> + copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
> + sizeof(struct ether_hdr));
> + if (txonly_multi_flow) {
> + struct ipv4_hdr *ip_hdr;
> + uint32_t addr;
> +
> + ip_hdr = rte_pktmbuf_mtod_offset(pkt,
> + struct ipv4_hdr *,
> + sizeof(struct ether_hdr));
> + /*
> + * Generate multiple flows by varying IP src addr. This
> + * enables packets are well distributed by RSS in
> + * receiver side if any and txonly mode can be a decent
> + * packet generator for developer's quick performance
> + * regression test.
> + */
> + addr = (IP_DST_ADDR | (ip_var++ << 8)) + rte_lcore_id();
> + ip_hdr->src_addr = rte_cpu_to_be_32(addr);
> + }
> + copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
> + sizeof(struct ether_hdr) +
> + sizeof(struct ipv4_hdr));
> + /*
> + * Complete first mbuf of packet and append it to the
> + * burst of packets to be transmitted.
> + */
> + pkt->nb_segs = nb_segs;
> + pkt->pkt_len = pkt_len;
> +
> + return true;
> +}
> +
> /*
> * Transmit a burst of multi-segments packets.
> */
> @@ -155,10 +229,8 @@ static void
> pkt_burst_transmit(struct fwd_stream *fs) {
> struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
> - struct rte_mbuf *pkt_segs[RTE_MAX_SEGS_PER_PKT];
> struct rte_port *txp;
> struct rte_mbuf *pkt;
> - struct rte_mbuf *pkt_seg;
> struct rte_mempool *mbp;
> struct ether_hdr eth_hdr;
> uint16_t nb_tx;
> @@ -166,15 +238,12 @@ pkt_burst_transmit(struct fwd_stream *fs)
> uint16_t vlan_tci, vlan_tci_outer;
> uint32_t retry;
> uint64_t ol_flags = 0;
> - uint8_t ip_var = RTE_PER_LCORE(_ip_var);
> - uint8_t i;
> uint64_t tx_offloads;
> #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
> uint64_t start_tsc;
> uint64_t end_tsc;
> uint64_t core_cycles;
> #endif
> - uint32_t nb_segs, pkt_len;
>
> #ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
> start_tsc = rte_rdtsc();
> @@ -201,85 +270,19 @@ pkt_burst_transmit(struct fwd_stream *fs)
>
> for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
> pkt = rte_mbuf_raw_alloc(mbp);
> - if (pkt == NULL) {
> - nomore_mbuf:
> - if (nb_pkt == 0)
> - return;
> + if (pkt == NULL)
> + break;
> + if (unlikely(!pkt_burst_prepare(pkt, mbp, &eth_hdr, vlan_tci,
> + vlan_tci_outer, ol_flags))) {
> + rte_pktmbuf_free(pkt);
> break;
> }
> -
> - /*
> - * Using raw alloc is good to improve performance,
> - * but some consumers may use the headroom and so
> - * decrement data_off. We need to make sure it is
> - * reset to default value.
> - */
> - rte_pktmbuf_reset_headroom(pkt);
> - pkt->data_len = tx_pkt_seg_lengths[0];
> - pkt_seg = pkt;
> -
> - if (tx_pkt_split == TX_PKT_SPLIT_RND)
> - nb_segs = random() % tx_pkt_nb_segs + 1;
> - else
> - nb_segs = tx_pkt_nb_segs;
> -
> - if (nb_segs > 1) {
> - if (rte_mempool_get_bulk(mbp, (void **)pkt_segs,
> - nb_segs)) {
> - rte_pktmbuf_free(pkt);
> - goto nomore_mbuf;
> - }
> - }
> -
> - pkt_len = pkt->data_len;
> - for (i = 1; i < nb_segs; i++) {
> - pkt_seg->next = pkt_segs[i - 1];
> - pkt_seg = pkt_seg->next;
> - pkt_seg->data_len = tx_pkt_seg_lengths[i];
> - pkt_len += pkt_seg->data_len;
> - }
> - pkt_seg->next = NULL; /* Last segment of packet. */
> -
> - /*
> - * Copy headers in first packet segment(s).
> - */
> - copy_buf_to_pkt(&eth_hdr, sizeof(eth_hdr), pkt, 0);
> - copy_buf_to_pkt(&pkt_ip_hdr, sizeof(pkt_ip_hdr), pkt,
> - sizeof(struct ether_hdr));
> - if (txonly_multi_flow) {
> - struct ipv4_hdr *ip_hdr;
> - uint32_t addr;
> -
> - ip_hdr = rte_pktmbuf_mtod_offset(pkt,
> - struct ipv4_hdr *,
> - sizeof(struct ether_hdr));
> - /*
> - * Generate multiple flows by varying IP src addr. This
> - * enables packets are well distributed by RSS in
> - * receiver side if any and txonly mode can be a decent
> - * packet generator for developer's quick performance
> - * regression test.
> - */
> - addr = (IP_DST_ADDR | (ip_var++ << 8)) + rte_lcore_id();
> - ip_hdr->src_addr = rte_cpu_to_be_32(addr);
> - }
> - copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
> - sizeof(struct ether_hdr) +
> - sizeof(struct ipv4_hdr));
> -
> - /*
> - * Complete first mbuf of packet and append it to the
> - * burst of packets to be transmitted.
> - */
> - pkt->nb_segs = nb_segs;
> - pkt->pkt_len = pkt_len;
> - pkt->ol_flags = ol_flags;
> - pkt->vlan_tci = vlan_tci;
> - pkt->vlan_tci_outer = vlan_tci_outer;
> - pkt->l2_len = sizeof(struct ether_hdr);
> - pkt->l3_len = sizeof(struct ipv4_hdr);
> pkts_burst[nb_pkt] = pkt;
> }
> +
> + if (nb_pkt == 0)
> + return;
> +
> nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst,
> nb_pkt);
> /*
> * Retry if necessary
> --
> 2.21.0
Thread overview: 68+ messages
2019-02-28 19:42 [dpdk-dev] [PATCH] app/testpmd: use mempool bulk get for txonly mode Pavan Nikhilesh Bhagavatula
2019-03-01 7:38 ` Andrew Rybchenko
2019-03-01 8:45 ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2019-03-01 13:47 ` [dpdk-dev] [PATCH v2] app/testpmd: add " Pavan Nikhilesh Bhagavatula
2019-03-19 16:48 ` Ferruh Yigit
2019-03-20 4:53 ` Pavan Nikhilesh Bhagavatula
2019-03-26 8:43 ` Ferruh Yigit
2019-03-26 11:00 ` Thomas Monjalon
2019-03-26 11:50 ` Iremonger, Bernard
2019-03-26 12:06 ` Pavan Nikhilesh Bhagavatula
2019-03-26 13:16 ` Jerin Jacob Kollanukkaran
2019-03-26 12:26 ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: optimize testpmd " Pavan Nikhilesh Bhagavatula
2019-03-26 12:27 ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add mempool bulk get for " Pavan Nikhilesh Bhagavatula
2019-03-26 13:03 ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: optimize testpmd " Pavan Nikhilesh Bhagavatula
2019-03-26 13:03 ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add mempool bulk get for " Pavan Nikhilesh Bhagavatula
2019-03-26 16:13 ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: optimize testpmd " Iremonger, Bernard
2019-03-31 13:14 ` [dpdk-dev] [PATCH v5 " Pavan Nikhilesh Bhagavatula
2019-03-31 13:14 ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add mempool bulk get for " Pavan Nikhilesh Bhagavatula
2019-04-01 20:25 ` [dpdk-dev] [PATCH v5 1/2] app/testpmd: optimize testpmd " Ferruh Yigit
2019-04-01 20:53 ` Thomas Monjalon
2019-04-02 1:03 ` Jerin Jacob Kollanukkaran
2019-04-02 7:06 ` Thomas Monjalon
2019-04-02 8:31 ` Jerin Jacob Kollanukkaran
2019-04-02 9:03 ` Ali Alnubani
2019-04-02 9:06 ` Pavan Nikhilesh Bhagavatula
2019-04-02 9:53 ` [dpdk-dev] [PATCH v6 1/4] app/testpmd: move eth header generation outside the loop Pavan Nikhilesh Bhagavatula
2019-04-02 9:53 ` [dpdk-dev] [PATCH v6 2/4] app/testpmd: use bulk ops for allocating segments Pavan Nikhilesh Bhagavatula
2019-04-02 9:53 ` [dpdk-dev] [PATCH v6 3/4] app/testpmd: move pkt prepare logic into a separate function Pavan Nikhilesh Bhagavatula
2019-04-09 9:28 ` Lin, Xueqin [this message]
2019-04-09 9:32 ` Pavan Nikhilesh Bhagavatula
2019-04-09 12:24 ` Yao, Lei A
2019-04-09 12:29 ` Ferruh Yigit
2019-04-02 9:53 ` [dpdk-dev] [PATCH v6 4/4] app/testpmd: add mempool bulk get for txonly mode Pavan Nikhilesh Bhagavatula
2019-04-02 15:21 ` [dpdk-dev] [PATCH v6 1/4] app/testpmd: move eth header generation outside the loop Raslan Darawsheh
2019-04-04 16:23 ` Ferruh Yigit
2019-04-05 17:37 ` Ferruh Yigit