DPDK patches and discussions
From: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
To: "thomas@monjalon.net" <thomas@monjalon.net>,
	Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
Cc: "shahafs@mellanox.com" <shahafs@mellanox.com>,
	"bernard.iremonger@intel.com" <bernard.iremonger@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"arybchenko@solarflare.com" <arybchenko@solarflare.com>,
	"ferruh.yigit@intel.com" <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH v5 1/2] app/testpmd: optimize testpmd txonly mode
Date: Tue, 2 Apr 2019 01:03:46 +0000	[thread overview]
Message-ID: <acd9b7164bd23224f343aed272303fce7ac3a3e4.camel@marvell.com> (raw)
In-Reply-To: <1732867.UvzobiCdsi@xps>

On Mon, 2019-04-01 at 22:53 +0200, Thomas Monjalon wrote:
> 01/04/2019 22:25, Ferruh Yigit:
> > On 3/31/2019 2:14 PM, Pavan Nikhilesh Bhagavatula wrote:
> > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > 
> > > Optimize testpmd txonly mode by:
> > > 1. Moving the per-packet Ethernet header copy above the loop.
> > > 2. Using bulk ops for allocating segments instead of having an
> > > inner loop for every segment.
> > > 
> > > Also, move the packet prepare logic into a separate function so
> > > that it
> > > can be reused later.
> > > 
> > > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > ---
> > >  v5 Changes
> > >  - Remove unnecessary change to struct rte_port *txp (movement).
> > > (Bernard)
> > > 
> > >  v4 Changes:
> > >  - Fix packet len calculation.
> > > 
> > >  v3 Changes:
> > >  - Split the patches for easier review. (Thomas)
> > >  - Remove unnecessary assignments to 0. (Bernard)
> > > 
> > >  v2 Changes:
> > >  - Use bulk ops for fetching segments. (Andrew Rybchenko)
> > >  - Fall back to rte_mbuf_raw_alloc if the bulk get fails. (Andrew
> > > Rybchenko)
> > >  - Fix mbufs not being freed when there are no more mbufs
> > > available for segments. (Andrew Rybchenko)
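
For reference, a minimal sketch of the two mechanisms described in the
quoted commit message and changelog above: the Ethernet header is built
once and copied into each packet instead of being rebuilt per packet,
and the extra segments are fetched with a single bulk mempool get,
falling back to rte_mbuf_raw_alloc when the bulk get fails. The helper
names (get_pkt_segments, copy_eth_hdr) are illustrative only, not the
actual testpmd code.

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>
#include <rte_mempool.h>

/*
 * Illustrative only: fetch nb_segs segments with one bulk mempool get;
 * fall back to per-segment raw allocation if the bulk get fails, and
 * return any partially allocated segments so nothing leaks when the
 * pool runs dry.
 */
static int
get_pkt_segments(struct rte_mempool *mbp, struct rte_mbuf **segs,
                 unsigned int nb_segs)
{
        unsigned int i;

        if (rte_mempool_get_bulk(mbp, (void **)segs, nb_segs) == 0)
                return 0;

        for (i = 0; i < nb_segs; i++) {
                segs[i] = rte_mbuf_raw_alloc(mbp);
                if (segs[i] == NULL) {
                        while (i-- > 0)
                                rte_mbuf_raw_free(segs[i]);
                        return -1;
                }
        }
        return 0;
}

/*
 * Illustrative only: copy a header prepared once per burst into a
 * packet, instead of re-initializing the header for every packet.
 */
static void
copy_eth_hdr(struct rte_mbuf *pkt, const void *eth_hdr, uint16_t hdr_len)
{
        rte_memcpy(rte_pktmbuf_mtod(pkt, void *), eth_hdr, hdr_len);
}

Note that rte_mempool_get_bulk() is all-or-nothing, which is why the
fallback loop only runs when the bulk get fails outright, and why its
partial allocations must be freed on failure.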
> > 
> > Hi Thomas, Shahafs,
> > 
> > I guess there was a performance issue on Mellanox with this patch.
> > I assume it is still valid, since this version only has some
> > cosmetic changes, but can you please confirm?
> 
> We will check it.
> 
> > And what is the next step? Can you provide Pavan with some info to
> > solve the issue, or perhaps, even better, a fix?
> 
> Looking at the first patch, there are still 3 changes merged
> together. Why not split it even further?

Splitting it further is not an issue, but we should not later end up
with a thread asking to squash the patches back together. What would be
interesting to know is whether there is any performance degradation
with the Mellanox NIC, and if so, why. Based on that, we can craft the
patch as you need.
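
For completeness, a typical way to check for such a regression is to
run testpmd in txonly mode before and after the patch and compare the
reported TX rates. The command line below is only indicative; the
cores, memory channels and packet sizes have to be adapted to the
system under test:

  ./testpmd -l 0-2 -n 4 -- --forward-mode=txonly --txpkts=64 --stats-period=1

Since the patch also changes how segments are allocated, repeating the
run with multi-segment packets (for example --txpkts=64,64) is worth
doing.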


Thread overview: 68+ messages
2019-02-28 19:42 [dpdk-dev] [PATCH] app/testpmd: use mempool bulk get for " Pavan Nikhilesh Bhagavatula
2019-03-01  7:38 ` Andrew Rybchenko
2019-03-01  8:45   ` [dpdk-dev] [EXT] " Pavan Nikhilesh Bhagavatula
2019-03-01 13:47 ` [dpdk-dev] [PATCH v2] app/testpmd: add " Pavan Nikhilesh Bhagavatula
2019-03-19 16:48   ` Ferruh Yigit
2019-03-20  4:53     ` Pavan Nikhilesh Bhagavatula
2019-03-26  8:43       ` Ferruh Yigit
2019-03-26 11:00   ` Thomas Monjalon
2019-03-26 11:50     ` Iremonger, Bernard
2019-03-26 12:06       ` Pavan Nikhilesh Bhagavatula
2019-03-26 13:16         ` Jerin Jacob Kollanukkaran
2019-03-26 12:26 ` [dpdk-dev] [PATCH v3 1/2] app/testpmd: optimize testpmd " Pavan Nikhilesh Bhagavatula
2019-03-26 12:27   ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add mempool bulk get for " Pavan Nikhilesh Bhagavatula
2019-03-26 13:03 ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: optimize testpmd " Pavan Nikhilesh Bhagavatula
2019-03-26 13:03   ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add mempool bulk get for " Pavan Nikhilesh Bhagavatula
2019-03-26 16:13   ` [dpdk-dev] [PATCH v4 1/2] app/testpmd: optimize testpmd " Iremonger, Bernard
2019-03-31 13:14 ` [dpdk-dev] [PATCH v5 " Pavan Nikhilesh Bhagavatula
2019-03-31 13:14   ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add mempool bulk get for " Pavan Nikhilesh Bhagavatula
2019-04-01 20:25   ` [dpdk-dev] [PATCH v5 1/2] app/testpmd: optimize testpmd " Ferruh Yigit
2019-04-01 20:53     ` Thomas Monjalon
2019-04-02  1:03       ` Jerin Jacob Kollanukkaran [this message]
2019-04-02  7:06         ` Thomas Monjalon
2019-04-02  8:31           ` Jerin Jacob Kollanukkaran
2019-04-02  9:03   ` Ali Alnubani
2019-04-02  9:06     ` Pavan Nikhilesh Bhagavatula
2019-04-02  9:53 ` [dpdk-dev] [PATCH v6 1/4] app/testpmd: move eth header generation outside the loop Pavan Nikhilesh Bhagavatula
2019-04-02  9:53   ` [dpdk-dev] [PATCH v6 2/4] app/testpmd: use bulk ops for allocating segments Pavan Nikhilesh Bhagavatula
2019-04-02  9:53   ` [dpdk-dev] [PATCH v6 3/4] app/testpmd: move pkt prepare logic into a separate function Pavan Nikhilesh Bhagavatula
2019-04-09  9:28     ` Lin, Xueqin
2019-04-09  9:32       ` Pavan Nikhilesh Bhagavatula
2019-04-09 12:24         ` Yao, Lei A
2019-04-09 12:29           ` Ferruh Yigit
2019-04-02  9:53   ` [dpdk-dev] [PATCH v6 4/4] app/testpmd: add mempool bulk get for txonly mode Pavan Nikhilesh Bhagavatula
2019-04-02 15:21   ` [dpdk-dev] [PATCH v6 1/4] app/testpmd: move eth header generation outside the loop Raslan Darawsheh
2019-04-04 16:23   ` Ferruh Yigit
2019-04-05 17:37   ` Ferruh Yigit
