From: Yongseok Koh <yskoh@mellanox.com>
To: Slava Ovsiienko <viacheslavo@mellanox.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v4 3/8] net/mlx5: update Tx datapath definitions
Date: Mon, 22 Jul 2019 05:33:08 +0000
Message-ID: <CC8BEAC6-F2C7-44DE-B4D5-777628FB9C64@mellanox.com>
In-Reply-To: <1563719100-368-4-git-send-email-viacheslavo@mellanox.com>
> On Jul 21, 2019, at 7:24 AM, Viacheslav Ovsiienko <viacheslavo@mellanox.com> wrote:
>
> This patch updates Tx datapath definitions, mostly hardware related.
> The Tx descriptor structures are redefined with required fields,
> size definitions are renamed to reflect the meanings in more
> appropriate way. This is a preparation step before introducing
> the new Tx datapath implementation.
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
Acked-by: Yongseok Koh <yskoh@mellanox.com>
> drivers/net/mlx5/mlx5_defs.h | 2 +-
> drivers/net/mlx5/mlx5_prm.h | 164 +++++++++++++++++++++++++++++++++++++++----
> 2 files changed, 152 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
> index 6861304..873a595 100644
> --- a/drivers/net/mlx5/mlx5_defs.h
> +++ b/drivers/net/mlx5/mlx5_defs.h
> @@ -58,7 +58,7 @@
> #define MLX5_MAX_XSTATS 32
>
> /* Maximum Packet headers size (L2+L3+L4) for TSO. */
> -#define MLX5_MAX_TSO_HEADER 192
> +#define MLX5_MAX_TSO_HEADER (128u + 34u)
>
> /* Threshold of buffer replenishment for vectorized Rx. */
> #define MLX5_VPMD_RXQ_RPLNSH_THRESH(n) \
> diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
> index dfd9317..97abdb2 100644
> --- a/drivers/net/mlx5/mlx5_prm.h
> +++ b/drivers/net/mlx5/mlx5_prm.h
> @@ -39,14 +39,85 @@
> /* Invalidate a CQE. */
> #define MLX5_CQE_INVALIDATE (MLX5_CQE_INVALID << 4)
>
> -/* WQE DWORD size */
> -#define MLX5_WQE_DWORD_SIZE 16
> -
> -/* WQE size */
> -#define MLX5_WQE_SIZE (4 * MLX5_WQE_DWORD_SIZE)
> +/* WQE Segment sizes in bytes. */
> +#define MLX5_WSEG_SIZE 16u
> +#define MLX5_WQE_CSEG_SIZE sizeof(struct mlx5_wqe_cseg)
> +#define MLX5_WQE_DSEG_SIZE sizeof(struct mlx5_wqe_dseg)
> +#define MLX5_WQE_ESEG_SIZE sizeof(struct mlx5_wqe_eseg)
> +
> +/* WQE/WQEBB size in bytes. */
> +#define MLX5_WQE_SIZE sizeof(struct mlx5_wqe)
> +
> +/*
> + * Max size of a WQE session.
> + * The absolute maximum is 63 (MLX5_DSEG_MAX) segments;
> + * the WQE size field in the Control Segment is 6 bits wide.
> + */
> +#define MLX5_WQE_SIZE_MAX (60 * MLX5_WSEG_SIZE)
> +
> +/*
> + * Default minimum number of Tx queues for inlining packets.
> + * If there are fewer queues than specified we assume there
> + * are not enough CPU resources (cycles) to perform inlining,
> + * PCIe throughput is not supposed to be the bottleneck,
> + * and inlining is disabled.
> + */
> +#define MLX5_INLINE_MAX_TXQS 8u
> +#define MLX5_INLINE_MAX_TXQS_BLUEFIELD 16u
> +
> +/*
> + * Default packet length threshold to be inlined with
> + * enhanced MPW. If the packet length exceeds the threshold
> + * the data are not inlined. Should be aligned to the WQEBB
> + * boundary, accounting for the title Control and Ethernet
> + * segments.
> + */
> +#define MLX5_EMPW_DEF_INLINE_LEN (3U * MLX5_WQE_SIZE + \
> + MLX5_DSEG_MIN_INLINE_SIZE - \
> + MLX5_WQE_DSEG_SIZE)
> +/*
> + * Maximal inline data length sent with enhanced MPW.
> + * Based on the maximal WQE size.
> + */
> +#define MLX5_EMPW_MAX_INLINE_LEN (MLX5_WQE_SIZE_MAX - \
> + MLX5_WQE_CSEG_SIZE - \
> + MLX5_WQE_ESEG_SIZE - \
> + MLX5_WQE_DSEG_SIZE + \
> + MLX5_DSEG_MIN_INLINE_SIZE)
> +/*
> + * Minimal number of packets to be sent with EMPW.
> + * This limits the minimal required size of the sent EMPW.
> + * If there are not enough resources to build a minimal
> + * EMPW the sending loop exits.
> + */
> +#define MLX5_EMPW_MIN_PACKETS (2 + 3 * 4)
> +#define MLX5_EMPW_MAX_PACKETS ((MLX5_WQE_SIZE_MAX - \
> + MLX5_WQE_CSEG_SIZE - \
> + MLX5_WQE_ESEG_SIZE) / \
> + MLX5_WSEG_SIZE)
> +/*
> + * Default packet length threshold to be inlined with
> + * an ordinary SEND. Inlining saves the MR key search
> + * and an extra PCIe data fetch transaction, but costs
> + * CPU cycles.
> + */
> +#define MLX5_SEND_DEF_INLINE_LEN (5U * MLX5_WQE_SIZE + \
> + MLX5_ESEG_MIN_INLINE_SIZE - \
> + MLX5_WQE_CSEG_SIZE - \
> + MLX5_WQE_ESEG_SIZE - \
> + MLX5_WQE_DSEG_SIZE)
> +/*
> + * Maximal inline data length sent with an ordinary SEND.
> + * Based on the maximal WQE size.
> + */
> +#define MLX5_SEND_MAX_INLINE_LEN (MLX5_WQE_SIZE_MAX - \
> + MLX5_WQE_CSEG_SIZE - \
> + MLX5_WQE_ESEG_SIZE - \
> + MLX5_WQE_DSEG_SIZE + \
> + MLX5_ESEG_MIN_INLINE_SIZE)
>
> -#define MLX5_OPC_MOD_ENHANCED_MPSW 0
> -#define MLX5_OPCODE_ENHANCED_MPSW 0x29
> +/* Missing in mlx5dv.h, so define it here. */
> +#define MLX5_OPCODE_ENHANCED_MPSW 0x29u
>
> /* CQE value to inform that VLAN is stripped. */
> #define MLX5_CQE_VLAN_STRIPPED (1u << 0)
> @@ -114,6 +185,12 @@
> /* Inner L3 type is IPV6. */
> #define MLX5_ETH_WQE_L3_INNER_IPV6 (1u << 0)
>
> +/* VLAN insertion flag. */
> +#define MLX5_ETH_WQE_VLAN_INSERT (1u << 31)
> +
> +/* Data inline segment flag. */
> +#define MLX5_ETH_WQE_DATA_INLINE (1u << 31)
> +
> /* Is flow mark valid. */
> #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
> #define MLX5_FLOW_MARK_IS_VALID(val) ((val) & 0xffffff00)
> @@ -130,12 +207,21 @@
> /* Default mark value used when none is provided. */
> #define MLX5_FLOW_MARK_DEFAULT 0xffffff
>
> -/* Maximum number of DS in WQE. */
> +/* Maximum number of DS in WQE. Limited by 6-bit field. */
> #define MLX5_DSEG_MAX 63
>
> /* The completion mode offset in the WQE control segment line 2. */
> #define MLX5_COMP_MODE_OFFSET 2
>
> +/* Amount of data bytes in minimal inline data segment. */
> +#define MLX5_DSEG_MIN_INLINE_SIZE 12u
> +
> +/* Amount of data bytes in minimal inline eth segment. */
> +#define MLX5_ESEG_MIN_INLINE_SIZE 18u
> +
> +/* Amount of data bytes after eth data segment. */
> +#define MLX5_ESEG_EXTRA_DATA_SIZE 32u
> +
> /* Completion mode. */
> enum mlx5_completion_mode {
> MLX5_COMP_ONLY_ERR = 0x0,
> @@ -144,11 +230,6 @@ enum mlx5_completion_mode {
> MLX5_COMP_CQE_AND_EQE = 0x3,
> };
>
> -/* Small common part of the WQE. */
> -struct mlx5_wqe {
> - uint32_t ctrl[4];
> -};
> -
> /* MPW mode. */
> enum mlx5_mpw_mode {
> MLX5_MPW_DISABLED,
> @@ -156,6 +237,63 @@ enum mlx5_mpw_mode {
> MLX5_MPW_ENHANCED, /* Enhanced Multi-Packet Send WQE, a.k.a MPWv2. */
> };
>
> +/* WQE Control segment. */
> +struct mlx5_wqe_cseg {
> + uint32_t opcode;
> + uint32_t sq_ds;
> + uint32_t flags;
> + uint32_t misc;
> +} __rte_packed __rte_aligned(MLX5_WSEG_SIZE);
> +
> +/* Header of the data segment. Minimal-size Data Segment. */
> +struct mlx5_wqe_dseg {
> + uint32_t bcount;
> + union {
> + uint8_t inline_data[MLX5_DSEG_MIN_INLINE_SIZE];
> + struct {
> + uint32_t lkey;
> + uint64_t pbuf;
> + } __rte_packed;
> + };
> +} __rte_packed;
> +
> +/* Subset of struct WQE Ethernet Segment. */
> +struct mlx5_wqe_eseg {
> + union {
> + struct {
> + uint32_t swp_offs;
> + uint8_t cs_flags;
> + uint8_t swp_flags;
> + uint16_t mss;
> + uint32_t metadata;
> + uint16_t inline_hdr_sz;
> + union {
> + uint16_t inline_data;
> + uint16_t vlan_tag;
> + };
> + } __rte_packed;
> + struct {
> + uint32_t offsets;
> + uint32_t flags;
> + uint32_t flow_metadata;
> + uint32_t inline_hdr;
> + } __rte_packed;
> + };
> +} __rte_packed;
> +
> +/* The title WQEBB, header of WQE. */
> +struct mlx5_wqe {
> + union {
> + struct mlx5_wqe_cseg cseg;
> + uint32_t ctrl[4];
> + };
> + struct mlx5_wqe_eseg eseg;
> + union {
> + struct mlx5_wqe_dseg dseg[2];
> + uint8_t data[MLX5_ESEG_EXTRA_DATA_SIZE];
> + };
> +} __rte_packed;
> +
> /* WQE for Multi-Packet RQ. */
> struct mlx5_wqe_mprq {
> struct mlx5_wqe_srq_next_seg next_seg;
> --
> 1.8.3.1
>