From: Raslan Darawsheh <rasland@mellanox.com>
To: Slava Ovsiienko <viacheslavo@mellanox.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: Matan Azrad <matan@mellanox.com>, Ori Kam <orika@mellanox.com>,
Shahaf Shuler <shahafs@mellanox.com>,
"thomas@mellanox.net" <thomas@mellanox.net>,
"olivier.matz@6wind.com" <olivier.matz@6wind.com>,
"ferruh.yigit@intel.com" <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 0/2] mlx5/net: hint PMD not to inline packet
Date: Thu, 30 Jan 2020 13:52:36 +0000 [thread overview]
Message-ID: <VI1PR05MB671821FDBABF7F7C878B3272C2040@VI1PR05MB6718.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <1580300467-7716-1-git-send-email-viacheslavo@mellanox.com>
Hi,
> -----Original Message-----
> From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> Sent: Wednesday, January 29, 2020 2:21 PM
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@mellanox.com>; Raslan Darawsheh
> <rasland@mellanox.com>; Ori Kam <orika@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>; thomas@mellanox.net;
> olivier.matz@6wind.com; ferruh.yigit@intel.com
> Subject: [PATCH v2 0/2] mlx5/net: hint PMD not to inline packet
>
> Some PMDs inline the mbuf data buffer directly into the device transmit
> descriptor. This saves the PCIe transaction overhead imposed when the
> device DMA-reads the data via a buffer pointer. For some devices this is
> essential to achieve full bandwidth.
>
> However, there are cases where such inlining is inefficient, for example
> when the data buffer resides in the memory of another device (such as a
> GPU or a storage device). Attempting to inline such a buffer results in
> high PCIe overhead for reading and copying the data from the remote
> device to host memory.
>
> To support a mixed traffic pattern (some buffers from local host memory,
> some buffers from other devices) at high bandwidth, a hint flag is
> introduced in the mbuf.
>
> The application hints the PMD whether or not it should try to inline the
> given mbuf data buffer. The PMD makes a best effort to act upon this
> request.
>
> The hint flag RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME is supposed to be
> dynamic, registered by the application with rte_mbuf_dynflag_register().
> The flag is purely vendor-specific and is declared in the PMD-specific
> header rte_pmd_mlx5.h, intended for applications that explicitly target
> this PMD.
>
> To query the specific flags supported at runtime, a private routine is
> introduced:
>
> int rte_pmd_mlx5_get_dyn_flag_names(
>         uint16_t port,
>         char *names[],
>         uint16_t n)
>
> It returns the array of specific flags currently supported (for the
> present hardware and configuration).
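A minimal, hypothetical usage sketch of the query routine above (the function name and signature are taken from this cover letter; the buffer-ownership convention — the caller providing storage for the returned names — and the constants MAX_DYN_FLAGS and RTE_MBUF_DYN_NAMESIZE sizing are assumptions, not confirmed by the letter):

```c
#include <stdio.h>
#include <stdint.h>
#include <rte_mbuf_dyn.h>   /* RTE_MBUF_DYN_NAMESIZE */
#include <rte_pmd_mlx5.h>   /* rte_pmd_mlx5_get_dyn_flag_names() */

#define MAX_DYN_FLAGS 16    /* assumed upper bound for this sketch */

static void
print_mlx5_dyn_flags(uint16_t port_id)
{
	/* Assumption: the caller supplies the storage the names are
	 * copied into; names[] just points at these buffers. */
	char buf[MAX_DYN_FLAGS][RTE_MBUF_DYN_NAMESIZE];
	char *names[MAX_DYN_FLAGS];
	int i, n;

	for (i = 0; i < MAX_DYN_FLAGS; i++)
		names[i] = buf[i];
	/* Expected to return the number of supported flags,
	 * or a negative value on error. */
	n = rte_pmd_mlx5_get_dyn_flag_names(port_id, names, MAX_DYN_FLAGS);
	if (n < 0) {
		printf("port %u: query failed (%d)\n", port_id, n);
		return;
	}
	for (i = 0; i < n; i++)
		printf("port %u supports flag: %s\n", port_id, names[i]);
}
```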
>
> The "no-inline hint" feature operates as follows:
> - the application starts
> - the devices are probed and the ports are created
> - the port capabilities are queried
> - if a port supporting the feature is found, the dynamic flag
>   RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME is registered
> - the application starts the ports
> - on dev_start() the PMD checks whether the feature flag is registered
>   and enables the feature support in the datapath
> - the application may set this flag in the ol_flags field of the mbufs
>   being sent, and the PMD will handle them appropriately
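The registration and per-packet steps of the flow above could look roughly like the sketch below (rte_mbuf_dynflag_register() and RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME are from the cover letter; the helper names and the error-handling details are illustrative assumptions, not the actual implementation):

```c
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>   /* rte_mbuf_dynflag_register() */
#include <rte_pmd_mlx5.h>   /* RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME */

/* Bit mask for ol_flags, filled in at registration time. */
static uint64_t no_inline_flag;

/* Must run after probe but before dev_start(), so the PMD sees the
 * registered flag when it configures its datapath. */
static int
setup_no_inline_hint(void)
{
	const struct rte_mbuf_dynflag desc = {
		.name = RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME,
	};
	int bit = rte_mbuf_dynflag_register(&desc);

	if (bit < 0)
		return bit;     /* flag not supported / no free bit */
	no_inline_flag = 1ULL << bit;
	return 0;
}

/* Per packet: hint the PMD not to inline this buffer, e.g. when the
 * data resides on a remote device (GPU, storage). */
static void
mark_no_inline(struct rte_mbuf *m)
{
	m->ol_flags |= no_inline_flag;
}
```

Since the flag is a best-effort hint, the sketch leaves transmission unchanged when registration fails: the packets are still sent, merely without the inlining hint.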
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
>
> ---
> RFC:
> https://patches.dpdk.org/patch/61348/
>
> This patchset combines the parts of the following:
>
> v1/testpmd:
> http://patches.dpdk.org/cover/64541/
> v1/mlx5:
> http://patches.dpdk.org/patch/64622/
>
> ---
> Ori Kam (1):
> net/mlx5: add fine grain dynamic flag support
>
> Viacheslav Ovsiienko (1):
> net/mlx5: update Tx datapath to support no inline hint
>
> drivers/net/mlx5/mlx5.c | 20 ++++++
> drivers/net/mlx5/mlx5_rxtx.c | 106 +++++++++++++++++++++++++-----
> drivers/net/mlx5/mlx5_rxtx.h | 3 +
> drivers/net/mlx5/mlx5_trigger.c | 8 +++
> drivers/net/mlx5/rte_pmd_mlx5.h | 35 ++++++++++
> drivers/net/mlx5/rte_pmd_mlx5_version.map | 7 ++
> 6 files changed, 163 insertions(+), 16 deletions(-)
> create mode 100644 drivers/net/mlx5/rte_pmd_mlx5.h
>
> --
> 1.8.3.1
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh
Thread overview: 15+ messages
2020-01-13 9:29 [dpdk-dev] [PATCH 0/2] net/mlx5: add PMD dynf Ori Kam
2020-01-13 9:29 ` [dpdk-dev] [PATCH 1/2] app/testpmd: add dynamic flag support Ori Kam
2020-01-15 13:31 ` Ferruh Yigit
2020-01-16 8:07 ` Ori Kam
2020-01-13 9:29 ` [dpdk-dev] [PATCH 2/2] net/mlx5: add fine grain " Ori Kam
2020-01-15 14:01 ` Ferruh Yigit
2020-01-16 12:05 ` Ori Kam
2020-01-16 12:24 ` Ferruh Yigit
2020-01-16 12:37 ` Ori Kam
2020-01-16 12:53 ` [dpdk-dev] [PATCH v2] app/testpmd: add " Ori Kam
2020-01-16 19:59 ` Ferruh Yigit
2020-01-29 12:21 ` [dpdk-dev] [PATCH v2 0/2] mlx5/net: hint PMD not to inline packet Viacheslav Ovsiienko
2020-01-29 12:21 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: add fine grain dynamic flag support Viacheslav Ovsiienko
2020-01-29 12:21 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: update Tx datapath to support no inline hint Viacheslav Ovsiienko
2020-01-30 13:52 ` Raslan Darawsheh [this message]