DPDK patches and discussions
From: Shahaf Shuler <shahafs@mellanox.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Thomas Monjalon <thomas@monjalon.net>,
	"olivier.matz@6wind.com" <olivier.matz@6wind.com>,
	"wwasko@nvidia.com" <wwasko@nvidia.com>,
	"spotluri@nvidia.com" <spotluri@nvidia.com>,
	Asaf Penso <asafp@mellanox.com>,
	Slava Ovsiienko <viacheslavo@mellanox.com>
Subject: Re: [dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline packet
Date: Tue, 22 Oct 2019 06:29:44 +0000	[thread overview]
Message-ID: <AM0PR0502MB3795BFB5C09B4425D4F5BBE6C3680@AM0PR0502MB3795.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <20191017081444.7f91b680@hermes.lan>

Thursday, October 17, 2019 6:15 PM, Stephen Hemminger:
> Subject: Re: [dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline
> packet
> 
> On Thu, 17 Oct 2019 07:27:34 +0000
> Shahaf Shuler <shahafs@mellanox.com> wrote:
> 
> > Some PMDs inline the mbuf data buffer directly into the device
> > descriptor. This is done to save the overhead of the PCI transactions
> > involved when the device DMA-reads the buffer pointer. For some
> > devices it is essential in order to reach the peak BW.
> >
> > However, there are cases where such inlining is inefficient, for
> > example when the data buffer resides in another device's memory
> > (such as a GPU or storage device). An attempt to inline such a
> > buffer will result in high PCI overhead for reading and copying the
> > data from the remote device.
> >
> > To support a mixed traffic pattern (some buffers from local DRAM,
> > some buffers from other devices) at high BW, a hint flag is
> > introduced in the mbuf.
> > The application hints the PMD whether or not it should try to inline
> > the given mbuf data buffer. The PMD should make a best effort to act
> > upon this request.
> >
> > Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> 
> This kind of optimization is hard, and pushing the problem to the application
> to decide seems like the wrong step.

See my comments to Jerin on the other thread. This optimization is for custom applications that do unique acceleration using look-aside accelerators for compute while utilizing network-device zero copy.
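For illustration only, a minimal sketch of how an application could express such a hint through the mbuf dynamic-flag mechanism (rte_mbuf_dyn.h, DPDK 19.11+). The flag name and helpers below are my assumptions, not the API proposed in this RFC:

/* Minimal sketch; the rte_mbuf_dynflag API is real, the flag name
 * "pmd_tx_noinline_hint" is hypothetical. */
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static uint64_t noinline_flag; /* bit mask assigned at runtime */

static int
register_noinline_hint(void)
{
	static const struct rte_mbuf_dynflag desc = {
		.name = "pmd_tx_noinline_hint", /* hypothetical name */
	};
	int bit = rte_mbuf_dynflag_register(&desc);

	if (bit < 0)
		return -1; /* no free bit left in ol_flags */
	noinline_flag = 1ULL << bit;
	return 0;
}

/* Hint the PMD not to inline this mbuf, e.g. because its payload
 * sits in GPU or storage-device memory. Best effort only. */
static inline void
mark_no_inline(struct rte_mbuf *m)
{
	m->ol_flags |= noinline_flag;
}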

> Can the driver just infer this already
> because some mbufs are external?

Having an mbuf with an external buffer does not necessarily mean the buffer is located on another PCI device.
Making optimizations based on such a heuristic may lead to unexpected behavior.
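To make the point concrete, here is a sketch (the function names and buffer source are my assumptions) of an external mbuf whose data is plain host DRAM, showing that RTE_MBUF_HAS_EXTBUF() alone cannot tell local from remote memory:

#include <rte_malloc.h>
#include <rte_mbuf.h>

static void
host_dram_free_cb(void *addr, void *opaque __rte_unused)
{
	rte_free(addr);
}

/* Attach an external buffer that lives in ordinary host DRAM. */
static int
attach_host_dram_extbuf(struct rte_mbuf *m, uint16_t buf_len)
{
	struct rte_mbuf_ext_shared_info *shinfo;
	void *buf = rte_malloc(NULL, buf_len, 0);

	if (buf == NULL)
		return -1;
	shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
						    host_dram_free_cb, NULL);
	if (shinfo == NULL) {
		rte_free(buf);
		return -1;
	}
	rte_pktmbuf_attach_extbuf(m, buf, rte_malloc_virt2iova(buf),
				  buf_len, shinfo);
	/* RTE_MBUF_HAS_EXTBUF(m) is now true, yet inlining this data
	 * would be cheap; the buffer is local DRAM. */
	return 0;
}

Conversely, a buffer registered from GPU memory would look identical from the mbuf's point of view, which is why an explicit hint from the application is needed.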



Thread overview: 13+ messages
2019-10-17  7:27 Shahaf Shuler
2019-10-17  8:16 ` Jerin Jacob
2019-10-17 10:59   ` Shahaf Shuler
2019-10-17 17:18     ` Jerin Jacob
2019-10-22  6:26       ` Shahaf Shuler
2019-10-22 15:17         ` Jerin Jacob
2019-10-23 11:24           ` Shahaf Shuler
2019-10-25 11:17             ` Jerin Jacob
2019-10-17 15:14 ` Stephen Hemminger
2019-10-22  6:29   ` Shahaf Shuler [this message]
2019-12-11 17:01 ` [dpdk-dev] [RFC v2] mlx5/net: " Viacheslav Ovsiienko
2019-12-27  8:59   ` Olivier Matz
2020-01-14  7:57 ` [dpdk-dev] [PATCH] net/mlx5: update Tx datapath to support no inline hint Viacheslav Ovsiienko
