From: Arvind Narayanan <webguru2688@gmail.com>
To: Andrew Bainbridge <andbain@microsoft.com>
Cc: Yongseok Koh <yskoh@mellanox.com>, users <users@dpdk.org>
Subject: Re: [dpdk-users] Issue with mlx5_rxtx.c while calling rte_eth_tx_burst() in DPDK 18.11
Date: Wed, 8 May 2019 08:03:50 -0500
Message-ID: <CAHJJQSWfU=f-aZWVDV=FmjvOPn4YUabQZfZa5wcfRjHfh_6zjA@mail.gmail.com>
In-Reply-To: <HE1PR83MB0378C66123CED709295A09D2AE320@HE1PR83MB0378.EURPRD83.prod.outlook.com>
I will try using testpmd and get back. Thanks.

Since my packets are single-segment, I tried commenting out the asserts,
as Yongseok suggested. With the asserts removed, the TX descriptors get
depleted and all TX halts completely. But if packets are sent slowly, by
reducing the traffic generator's transmission speed, there is no problem.
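
I also notice that my snippet below ignores the return value of
rte_eth_tx_burst(). Here is a minimal sketch of handling it the way
testpmd does, freeing whatever the PMD did not accept so unsent mbufs
are not leaked (send_batch() is a hypothetical helper, not my actual
code):

==================
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_branch_prediction.h>

/* Send a burst and free whatever the PMD did not accept, so
 * unsent mbufs are not leaked when the TX ring is full. */
static void
send_batch(uint16_t port, uint16_t queue,
           struct rte_mbuf **mbufs, uint16_t n)
{
        uint16_t nb_tx = rte_eth_tx_burst(port, queue, mbufs, n);

        while (unlikely(nb_tx < n))
                rte_pktmbuf_free(mbufs[nb_tx++]);
}
==================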

What does "vectorized TX bursts" mean? Does executing consecutive
tx_burst calls (where each call sends out ~64 mbufs) qualify as a
vectorized TX burst?

I tried finding information about this online but couldn't find anything
useful. I do see a bunch of runtime config parameters for vectorized TX
in mlx5.
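
From the mlx5 guide, I believe those parameters are passed as device
arguments on the EAL -w (PCI whitelist) option. For example, something
like this should disable the multi-packet send (empw) and vectorized TX
paths; the PCI address is only a placeholder for my setup, and I haven't
verified the effect yet:

==================
./myapp -w 0000:03:00.0,txq_mpw_en=0,tx_vec_en=0 -- <app args>
==================
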
Arvind
On Wed, May 8, 2019 at 4:26 AM Andrew Bainbridge <andbain@microsoft.com>
wrote:
> testpmd calls rte_eth_tx_burst() in a loop. Does it fail? I suspect not.
> If not, then you can gradually transform testpmd until it looks like your
> code that fails. The loop in question is in txonly.c.
>
> You need a command line something like this for the test:
> testpmd -- --forward-mode=txonly --stats-period 1
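>
> A fuller invocation would be something like this (core list, memory
> channels, and PCI address are placeholders; adjust them to your
> machine):
>
> testpmd -l 0-1 -n 4 -w 0000:03:00.0 -- --forward-mode=txonly --stats-period 1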
>
> -----Original Message-----
> From: users <users-bounces@dpdk.org> On Behalf Of Arvind Narayanan
> Sent: 05 May 2019 00:07
> To: Yongseok Koh <yskoh@mellanox.com>
> Cc: users <users@dpdk.org>
> Subject: Re: [dpdk-users] Issue with mlx5_rxtx.c while calling
> rte_eth_tx_burst() in DPDK 18.11
>
> It passes __rte_mbuf_sanity_check(). rte_mbuf_check() is not available
> in dpdk 18.11.
> I debugged when the assertion failed and double-checked every mbuf's
> pkt_len and data_len. All seem fine.
> Yes, in my case it's simple: all mbufs are single-segment.
>
> Is there some bound on the number of tx calls we can make consecutively
> using the mlx5 driver?
> If I make many consecutive calls (e.g. ~10 to 20 calls to
> rte_eth_tx_burst(), each sending out a burst of ~64 mbufs), I hit this
> problem; otherwise I don't.
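>
> One probe I am thinking of trying: rte_eth_tx_descriptor_status() can
> report whether a given slot in the TX ring is still in flight. A sketch,
> assuming the PMD supports it (port, queue, and offset are placeholders):
>
> ==================
> /* Check whether the descriptor `offset` entries ahead in the
>  * TX ring is still held by the NIC. */
> int status = rte_eth_tx_descriptor_status(port, queue, offset);
>
> if (status == RTE_ETH_TX_DESC_FULL) {
>         /* ring is backed up at this depth; back off before
>          * issuing more bursts */
> }
> ==================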
>
> Thoughts?
>
> Arvind
>
> On Tue, Apr 23, 2019 at 6:45 PM Yongseok Koh <yskoh@mellanox.com> wrote:
>
> >
> > On Apr 21, 2019, at 9:59 PM, Arvind Narayanan <webguru2688@gmail.com> wrote:
> > >
> > > I am running into a weird problem when using rte_eth_tx_burst() with
> > > mlx5 in dpdk 18.11, running on Ubuntu 18.04 LTS (using a Mellanox
> > > ConnectX-5 100G EN).
> > >
> > > Here is a simplified snippet.
> > >
> > > ==================
> > > #define MAX_BATCHES 64
> > > #define MAX_BURST_SIZE 64
> > >
> > > struct batch {
> > >         struct rte_mbuf *mbufs[MAX_BURST_SIZE]; /* array of packets */
> > >         int num_mbufs;                          /* number of mbufs */
> > >         int queue;                              /* outgoing tx queue */
> > >         int port;                               /* outgoing port */
> > > };
> > >
> > > struct batch *batches[MAX_BATCHES];
> > > int i;
> > > uint16_t ret;
> > >
> > > /* dequeue a number of batches */
> > > int batch_count = rte_ring_sc_dequeue_bulk(some_rte_ring,
> > >                 (void **)batches, MAX_BATCHES, NULL);
> > >
> > > /* transmit out all pkts from every batch */
> > > if (likely(batch_count > 0)) {
> > >         for (i = 0; i < batch_count; i++) {
> > >                 ret = rte_eth_tx_burst(batches[i]->port,
> > >                                 batches[i]->queue, batches[i]->mbufs,
> > >                                 batches[i]->num_mbufs);
> > >         }
> > > }
> > > ==================
> > >
> > > At rte_eth_tx_burst(), I keep getting an error saying:
> > > myapp: /home/arvind/dpdk/drivers/net/mlx5/mlx5_rxtx.c:1652: uint16_t
> > > txq_burst_empw(struct mlx5_txq_data *, struct rte_mbuf **, uint16_t):
> > > Assertion `length == DATA_LEN(buf)' failed.
> > > OR
> > > myapp: /home/arvind/dpdk/drivers/net/mlx5/mlx5_rxtx.c:1609: uint16_t
> > > txq_burst_empw(struct mlx5_txq_data *, struct rte_mbuf **, uint16_t):
> > > Assertion `length == DATA_LEN(buf)' failed.
> > >
> > > I have debugged and ensured all the mbuf counts (at least in my code)
> > > are good. All the memory references to the mbufs also look good.
> > > However, I am not sure why the Mellanox driver would complain.
> > >
> > > I have also tried to play with mlx5_rxtx.c by changing the above lines
> > > to something like assert(length == pkts_n); (pkts_n is an argument
> > > passed to the function). That didn't help.
> > >
> > > Any thoughts?
> >
> > Hi,
> >
> > Does your mbuf pass rte_mbuf_check()?
> > That complaint is regarding a mismatch between m->pkt_len and
> > m->data_len. If the mbuf is a single-segment packet (m->nb_segs == 1,
> > m->next == NULL), m->pkt_len should be the same as m->data_len.
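> >
> > As a quick check, you could run something like this over each mbuf just
> > before rte_eth_tx_burst() (a hypothetical helper, not a DPDK API):
> >
> > ==================
> > /* A single-segment mbuf must have pkt_len == data_len. */
> > static inline int
> > mbuf_len_ok(const struct rte_mbuf *m)
> > {
> >         return m->nb_segs == 1 && m->next == NULL &&
> >                m->pkt_len == m->data_len;
> > }
> > ==================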
> >
> > That assert() isn't strictly needed in txq_burst_empw() though.
> >
> >
> > Thanks,
> > Yongseok
>