DPDK patches and discussions
From: Shahaf Shuler <shahafs@mellanox.com>
To: "dev@dpdk.org" <dev@dpdk.org>,
	Thomas Monjalon <thomas@monjalon.net>,
	"olivier.matz@6wind.com" <olivier.matz@6wind.com>
Cc: "wwasko@nvidia.com" <wwasko@nvidia.com>,
	"spotluri@nvidia.com" <spotluri@nvidia.com>,
	Asaf Penso <asafp@mellanox.com>,
	Slava Ovsiienko <viacheslavo@mellanox.com>
Subject: [dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline packet
Date: Thu, 17 Oct 2019 07:27:34 +0000
Message-ID: <20191017072723.36509-1-shahafs@mellanox.com>

Some PMDs inline the mbuf data buffer directly into the device Tx
descriptor. This saves the PCIe transaction overhead incurred when the
device has to DMA-read the buffer through the provided pointer. For some
devices, inlining is essential in order to reach the peak BW.

However, there are cases where such inlining is inefficient, for example
when the data buffer resides on another device's memory (like a GPU or a
storage device). Attempting to inline such a buffer will result in high
PCIe overhead for reading and copying the data from the remote device.

To support a mixed traffic pattern (some buffers from local DRAM, some
buffers from other devices) with high BW, a hint flag is introduced in
the mbuf.
The application hints the PMD whether or not it should try to inline the
given mbuf data buffer. The PMD should make a best effort to act upon
this request.
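
For illustration, a minimal sketch of the intended usage on the
application side (gpu_mempool, port_id and queue_id are hypothetical;
only PKT_TX_DONT_INLINE_HINT comes from this patch):

    #include <rte_mbuf.h>
    #include <rte_ethdev.h>

    /* Allocate an mbuf whose data buffer lives on another device,
     * e.g. from a mempool backed by GPU memory (hypothetical). */
    struct rte_mbuf *m = rte_pktmbuf_alloc(gpu_mempool);

    /* ... fill / attach the remote data buffer ... */

    /* Ask the PMD not to inline this buffer; being only a hint, the
     * PMD may still inline if it cannot honor the request. */
    m->ol_flags |= PKT_TX_DONT_INLINE_HINT;

    /* Transmit as usual. */
    rte_eth_tx_burst(port_id, queue_id, &m, 1);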

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 lib/librte_mbuf/rte_mbuf.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 98225ec80b..5934532b7f 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -203,6 +203,15 @@ extern "C" {
 /* add new TX flags here */
 
 /**
+ * Hint to the PMD not to inline the mbuf data buffer to the device,
+ * but rather let the device use its DMA engine to fetch the data with
+ * the provided pointer.
+ *
+ * This flag is only a hint. The PMD should enforce it as best effort.
+ */
+#define PKT_TX_DONT_INLINE_HINT (1ULL << 39)
+
+/**
  * Indicate that the metadata field in the mbuf is in use.
  */
 #define PKT_TX_METADATA	(1ULL << 40)
-- 
2.12.0
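
On the PMD side, honoring the hint is best effort. A hypothetical Tx
path could branch roughly as below (a sketch only; struct txq,
max_inline_len and the txq_post_*() helpers are made up for
illustration, this is not the mlx5 implementation):

    /* Sketch of a best-effort inline decision in a PMD Tx routine. */
    static void
    txq_fill_descriptor(struct txq *txq, struct rte_mbuf *m)
    {
    	if ((m->ol_flags & PKT_TX_DONT_INLINE_HINT) ||
    	    rte_pktmbuf_data_len(m) > txq->max_inline_len) {
    		/* Post a pointer descriptor; the NIC DMA engine
    		 * fetches the data at transmit time. */
    		txq_post_pointer_desc(txq, m);
    	} else {
    		/* Copy (inline) the data into the descriptor to save
    		 * a PCIe read through the buffer pointer. */
    		txq_post_inline_desc(txq, m);
    	}
    }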



Thread overview: 13+ messages
2019-10-17  7:27 Shahaf Shuler [this message]
2019-10-17  8:16 ` Jerin Jacob
2019-10-17 10:59   ` Shahaf Shuler
2019-10-17 17:18     ` Jerin Jacob
2019-10-22  6:26       ` Shahaf Shuler
2019-10-22 15:17         ` Jerin Jacob
2019-10-23 11:24           ` Shahaf Shuler
2019-10-25 11:17             ` Jerin Jacob
2019-10-17 15:14 ` Stephen Hemminger
2019-10-22  6:29   ` Shahaf Shuler
2019-12-11 17:01 ` [dpdk-dev] [RFC v2] mlx5/net: " Viacheslav Ovsiienko
2019-12-27  8:59   ` Olivier Matz
2020-01-14  7:57 ` [dpdk-dev] [PATCH] net/mlx5: update Tx datapath to support no inline hint Viacheslav Ovsiienko
