patches for DPDK stable branches
From: Thomas Monjalon <thomas@monjalon.net>
To: Slava Ovsiienko <viacheslavo@nvidia.com>
Cc: Ferruh Yigit <ferruh.yigit@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	Raslan Darawsheh <rasland@nvidia.com>,
	Matan Azrad <matan@nvidia.com>, Ori Kam <orika@nvidia.com>,
	Alexander Kozyrev <akozyrev@nvidia.com>,
	"stable@dpdk.org" <stable@dpdk.org>,
	bluca@debian.org, kevin.traynor@redhat.com,
	Christian Ehrhardt <christian.ehrhardt@canonical.com>
Subject: Re: [dpdk-stable] [PATCH v3 1/2] net/mlx5: optimize inline mbuf freeing
Date: Thu, 28 Jan 2021 10:34:32 +0100	[thread overview]
Message-ID: <2182238.4QQNH6NEyb@thomas> (raw)
In-Reply-To: <DM6PR12MB3753A064D4C99C24EB4785B9DFBA9@DM6PR12MB3753.namprd12.prod.outlook.com>

28/01/2021 10:14, Slava Ovsiienko:
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> > On 1/22/2021 5:12 PM, Viacheslav Ovsiienko wrote:
> > > The mlx5 PMD supports packet data inlining by pushing data into the
> > > transmit descriptor. If a packet is short enough and all of its data
> > > are inlined, the mbuf is no longer needed for the send and can be freed.
> > >
> > > The mbuf free was previously performed in the innermost loop building
> > > the transmit descriptors. This patch postpones the mbuf free to the
> > > tx_burst routine exit, streamlining the loop and allowing bulk freeing
> > > of multiple mbufs in a single pool API call.
> > >
> > > Cc: stable@dpdk.org
> > >
> > 
> > Hi Slava,
> > 
> > This patch is an optimization for inline mbufs, right? It is not a fix;
> > should it be backported?
> Not critical, but nice to have this small optimization in LTS.
> 
> > 
> > cc'ed LTS maintainers.
> > 
> > I am dropping the stable tag for now in next-net; I can add it back
> > later based on the discussion result.
> 
> OK, let's consider this backporting in a dedicated way, thank you.

Consensus from techboard is to reject optimizations in LTS for now.
Some acceptance guidelines will be written soon.
Not sure this one will be considered.




Thread overview: 7+ messages
     [not found] <1608311697-31529-1-git-send-email-viacheslavo@nvidia.com>
2020-12-18 17:14 ` [dpdk-stable] [PATCH " Viacheslav Ovsiienko
     [not found] ` <1609922063-13716-1-git-send-email-viacheslavo@nvidia.com>
2021-01-06  8:34   ` [dpdk-stable] [PATCH v2 " Viacheslav Ovsiienko
     [not found] ` <1611335529-26503-1-git-send-email-viacheslavo@nvidia.com>
2021-01-22 17:12   ` [dpdk-stable] [PATCH v3 " Viacheslav Ovsiienko
2021-01-27 12:44     ` Ferruh Yigit
2021-01-27 12:48       ` [dpdk-stable] [dpdk-dev] " Ferruh Yigit
2021-01-28  9:14       ` [dpdk-stable] " Slava Ovsiienko
2021-01-28  9:34         ` Thomas Monjalon [this message]
