From: Ori Kam <orika@mellanox.com>
To: Bing Zhao <bingz@mellanox.com>, Matan Azrad <matan@mellanox.com>
Cc: Slava Ovsiienko <viacheslavo@mellanox.com>,
Raslan Darawsheh <rasland@mellanox.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix the hairpin queue capacity
Date: Tue, 18 Feb 2020 09:05:18 +0000 [thread overview]
Message-ID: <AM6PR05MB5176D618D2BEA2229A645238DB110@AM6PR05MB5176.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <1581946301-457865-1-git-send-email-bingz@mellanox.com>
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Subject: [PATCH] net/mlx5: fix the hairpin queue capacity
>
> The hairpin TX/RX queue depth and packet size used to be hardcoded,
> so the PMD could not benefit from firmware fixes or improvements.
> Moreover, 32 packets per queue is too few to guarantee good
> performance for hairpin flows.
> Adjust the hairpin queue setup parameters: the number of packets per
> queue should be the maximum supported value, and the maximum single
> packet size should cover a standard 9KB jumbo frame. There is no need
> to support the maximum possible single-packet capacity, since memory
> consumption must also be taken into account.
>
> Fixes: e79c9be91515 ("net/mlx5: support Rx hairpin queues")
> Cc: orika@mellanox.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
> ---
Acked-by: Ori Kam <orika@mellanox.com>
Thanks,
Ori
> drivers/net/mlx5/mlx5_defs.h | 4 ++++
> drivers/net/mlx5/mlx5_rxq.c | 12 ++++++++----
> drivers/net/mlx5/mlx5_txq.c | 12 ++++++++----
> 3 files changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
> index 9b392ed..19e8253 100644
> --- a/drivers/net/mlx5/mlx5_defs.h
> +++ b/drivers/net/mlx5/mlx5_defs.h
> @@ -173,6 +173,10 @@
> #define MLX5_FLOW_MREG_HNAME "MARK_COPY_TABLE"
> #define MLX5_DEFAULT_COPY_ID UINT32_MAX
>
> +/* Hairpin TX/RX queue configuration parameters. */
> +#define MLX5_HAIRPIN_QUEUE_STRIDE 6
> +#define MLX5_HAIRPIN_JUMBO_LOG_SIZE (14 + 2)
> +
> /* Definition of static_assert found in /usr/include/assert.h */
> #ifndef HAVE_STATIC_ASSERT
> #define static_assert _Static_assert
> diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
> index dc0fd82..ac9016e 100644
> --- a/drivers/net/mlx5/mlx5_rxq.c
> +++ b/drivers/net/mlx5/mlx5_rxq.c
> @@ -1268,6 +1268,7 @@
> struct mlx5_devx_create_rq_attr attr = { 0 };
> struct mlx5_rxq_obj *tmpl = NULL;
> int ret = 0;
> + uint32_t max_wq_data;
>
> MLX5_ASSERT(rxq_data);
> MLX5_ASSERT(!rxq_ctrl->obj);
> @@ -1283,11 +1284,14 @@
> tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN;
> tmpl->rxq_ctrl = rxq_ctrl;
> attr.hairpin = 1;
> - /* Workaround for hairpin startup */
> - attr.wq_attr.log_hairpin_num_packets = log2above(32);
> - /* Workaround for packets larger than 1KB */
> + max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
> + /* Set the packets number to the maximum value for performance. */
> + attr.wq_attr.log_hairpin_num_packets = max_wq_data -
> + MLX5_HAIRPIN_QUEUE_STRIDE;
> + /* Jumbo frames > 9KB should be supported. */
> attr.wq_attr.log_hairpin_data_sz =
> - priv->config.hca_attr.log_max_hairpin_wq_data_sz;
> + (max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE) ?
> + max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
> tmpl->rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &attr,
> rxq_ctrl->socket);
> if (!tmpl->rq) {
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index bc13abf..6c08bb9 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -493,6 +493,7 @@
> struct mlx5_devx_create_sq_attr attr = { 0 };
> struct mlx5_txq_obj *tmpl = NULL;
> int ret = 0;
> + uint32_t max_wq_data;
>
> MLX5_ASSERT(txq_data);
> MLX5_ASSERT(!txq_ctrl->obj);
> @@ -509,11 +510,14 @@
> tmpl->txq_ctrl = txq_ctrl;
> attr.hairpin = 1;
> attr.tis_lst_sz = 1;
> - /* Workaround for hairpin startup */
> - attr.wq_attr.log_hairpin_num_packets = log2above(32);
> - /* Workaround for packets larger than 1KB */
> + max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
> + /* Set the packets number to the maximum value for performance. */
> + attr.wq_attr.log_hairpin_num_packets = max_wq_data -
> + MLX5_HAIRPIN_QUEUE_STRIDE;
> + /* Jumbo frames > 9KB should be supported. */
> attr.wq_attr.log_hairpin_data_sz =
> - priv->config.hca_attr.log_max_hairpin_wq_data_sz;
> + (max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE) ?
> + max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
> attr.tis_num = priv->sh->tis->id;
> tmpl->sq = mlx5_devx_cmd_create_sq(priv->sh->ctx, &attr);
> if (!tmpl->sq) {
> --
> 1.8.3.1
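For reference, the sizing arithmetic the patch applies in both the RX and TX paths can be sketched as follows. This is illustrative Python, not driver code; the helper name and the example capability value are made up, and the constants mirror MLX5_HAIRPIN_QUEUE_STRIDE and MLX5_HAIRPIN_JUMBO_LOG_SIZE from the diff:

```python
# Log2 of the hairpin WQE stride, per MLX5_HAIRPIN_QUEUE_STRIDE in the patch.
HAIRPIN_QUEUE_STRIDE_LOG = 6
# Log2 cap on the queue data size, per MLX5_HAIRPIN_JUMBO_LOG_SIZE (14 + 2).
HAIRPIN_JUMBO_LOG_SIZE = 14 + 2

def hairpin_wq_attrs(log_max_wq_data_sz):
    """Compute (log_hairpin_num_packets, log_hairpin_data_sz) the way the
    patch does, from the HCA's log_max_hairpin_wq_data_sz capability."""
    # Maximize the packet count: total data size divided by the stride,
    # both expressed as log2 values.
    log_num_packets = log_max_wq_data_sz - HAIRPIN_QUEUE_STRIDE_LOG
    # Cap the data size at the jumbo-frame bound to limit memory use,
    # matching the (max_wq_data < JUMBO) ? max_wq_data : JUMBO ternary.
    log_data_sz = min(log_max_wq_data_sz, HAIRPIN_JUMBO_LOG_SIZE)
    return log_num_packets, log_data_sz

# A device reporting log_max_hairpin_wq_data_sz = 15 gets 2^9 = 512
# packets and keeps its native 2^15 data size (below the jumbo cap):
print(hairpin_wq_attrs(15))  # (9, 15)
# A device reporting 20 is capped at the jumbo bound of 16:
print(hairpin_wq_attrs(20))  # (14, 16)
```

The key change from the old code is that both attributes are now derived from the firmware-reported capability instead of the fixed log2above(32) workaround, with only the data size clamped.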
Thread overview: 5+ messages
2020-02-17 13:31 Bing Zhao
2020-02-18 9:05 ` Ori Kam [this message]
2020-02-19 8:28 ` [dpdk-dev] [PATCH v2] " Bing Zhao
2020-02-19 12:32 ` Ori Kam
2020-02-19 14:54 ` Raslan Darawsheh