From: Alexander Kozyrev <akozyrev@nvidia.com>
To: dev@dpdk.org
Cc: rasland@nvidia.com, matan@nvidia.com, viacheslavo@nvidia.com
Subject: [dpdk-dev] [PATCH v2 0/2] net/mlx5: add vectorized mprq
Date: Wed, 21 Oct 2020 20:30:28 +0000	[thread overview]
Message-ID: <20201021203030.19042-1-akozyrev@nvidia.com> (raw)
In-Reply-To: <20200719041142.14485-1-akozyrev@mellanox.com>

The vectorized Rx burst function accelerates Rx processing by using
SIMD (single instruction, multiple data) extensions for multi-buffer
packet processing. Pre-allocating multiple mbufs and filling them in
batches of four greatly improves the throughput of the Rx burst
routine.
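
For illustration only, here is a minimal standalone sketch of the
batch-of-four idea (not the mlx5 code itself; the structs and field
layout are assumptions made for the example). Four 32-bit packet
lengths occupy exactly one 128-bit register, so one vector load and
one vector store replace four scalar copies:

    #include <stdint.h>
    #include <emmintrin.h> /* SSE2 intrinsics */

    /* Hypothetical per-packet descriptor and metadata. */
    struct pkt_desc { uint32_t len; };
    struct pkt_meta { uint32_t pkt_len; };

    /* Fill metadata for a batch of four packets with a single
     * 128-bit load/store pair. */
    static inline void
    fill_four(const struct pkt_desc d[4], struct pkt_meta m[4])
    {
        __m128i lens = _mm_loadu_si128((const __m128i *)d);
        _mm_storeu_si128((__m128i *)m, lens);
    }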

MPRQ (Multi-Packet Rx Queue) currently lacks a vectorized version.
It works by posting a single large buffer (consisting of multiple
fixed-size strides) in order to receive multiple packets at once into
that buffer. A received packet is then either copied into a
user-provided mbuf, or the PMD attaches it to the mbuf as an external
buffer via a pointer into the stride.
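
A hedged sketch of that copy-or-attach decision for a single stride
is below. The threshold mirrors the mprq_max_memcpy_len devarg, but
the function and parameter names are illustrative, not the driver's
actual helpers:

    #include <rte_mbuf.h>
    #include <rte_memcpy.h>

    static void
    deliver_stride(struct rte_mbuf *mbuf, void *stride, rte_iova_t iova,
                   uint16_t len, uint32_t max_memcpy_len,
                   struct rte_mbuf_ext_shared_info *shinfo)
    {
        if (len <= max_memcpy_len) {
            /* Short packet: copy it out of the stride. */
            rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), stride, len);
        } else {
            /* Long packet: attach the stride as an external buffer;
             * shinfo refcounting keeps the MPRQ buffer alive until
             * every attached mbuf is freed. */
            rte_pktmbuf_attach_extbuf(mbuf, stride, iova, len, shinfo);
        }
        mbuf->pkt_len = len;
        mbuf->data_len = len;
    }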

It is proposed to add a vectorized MPRQ Rx routine to speed up the
MPRQ buffer handling as well. This requires pre-allocating multiple
mbufs every time all the strides of the current MPRQ buffer are
exhausted and a new buffer is switched to. The new
mlx5_rx_burst_mprq_vec() routine takes care of this, as well as of
the decision whether to copy a packet or attach it as an external
buffer. The batch processing logic is the same as in the simple
vectorized Rx routine.
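
A minimal sketch of the replenishment step, assuming a hypothetical
queue structure and batch size (the real routine operates on the
queue's own element array):

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define REPLENISH_N 64 /* assumed batch size, a multiple of four */

    struct rxq_sketch {
        struct rte_mempool *mp;
        struct rte_mbuf *elts[REPLENISH_N];
    };

    /* Refill the queue's mbuf array in one bulk allocation once all
     * strides of the current MPRQ buffer have been consumed. */
    static int
    replenish_mbufs(struct rxq_sketch *rxq)
    {
        if (rte_pktmbuf_alloc_bulk(rxq->mp, rxq->elts, REPLENISH_N) != 0)
            return -1; /* no mbufs available; retry on the next burst */
        return 0;
    }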

The new vectorized MPRQ burst function is selected automatically
whenever the mprq_en devarg is specified. If SIMD extensions are not
available on the platform, the PMD falls back to the scalar MPRQ Rx
burst function. LRO is not supported by the vectorized MPRQ version
either; with LRO enabled, the regular MPRQ routine is used as well.
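
A condensed sketch of that selection logic follows. The predicate
parameters and the scalar prototypes are stand-ins for illustration;
only mlx5_rx_burst_mprq_vec() is the routine introduced by this
series:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* Stand-in prototypes for the existing scalar burst routines. */
    uint16_t mlx5_rx_burst_mprq_vec(void *q, struct rte_mbuf **p, uint16_t n);
    uint16_t mlx5_rx_burst_mprq(void *q, struct rte_mbuf **p, uint16_t n);
    uint16_t mlx5_rx_burst(void *q, struct rte_mbuf **p, uint16_t n);

    static eth_rx_burst_t
    select_rx_burst(bool mprq_en, bool simd_ok, bool lro_en)
    {
        if (mprq_en && simd_ok && !lro_en)
            return mlx5_rx_burst_mprq_vec; /* new vectorized MPRQ */
        if (mprq_en)
            return mlx5_rx_burst_mprq;     /* scalar MPRQ fallback */
        return mlx5_rx_burst;              /* regular Rx burst */
    }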


Alexander Kozyrev (2):
  net/mlx5: refactor vectorized Rx routine
  net/mlx5: implement vectorized MPRQ burst

 drivers/net/mlx5/mlx5_devx.c             |  15 +-
 drivers/net/mlx5/mlx5_ethdev.c           |  20 +-
 drivers/net/mlx5/mlx5_rxq.c              |  96 +++---
 drivers/net/mlx5/mlx5_rxtx.c             | 237 ++++---------
 drivers/net/mlx5/mlx5_rxtx.h             | 200 ++++++++++-
 drivers/net/mlx5/mlx5_rxtx_vec.c         | 416 ++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_rxtx_vec.h         |  55 ---
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 106 ++----
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h    | 103 ++----
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h     | 121 ++-----
 10 files changed, 813 insertions(+), 556 deletions(-)

-- 
2.24.1



Thread overview: 5+ messages
2020-07-19  4:11 [dpdk-dev] [PATCH] net/mlx5: implement vectorized MPRQ burst Alexander Kozyrev
2020-10-21 20:30 ` Alexander Kozyrev [this message]
2020-10-21 20:30   ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: refactor vectorized Rx routine Alexander Kozyrev
2020-10-21 20:30   ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: implement vectorized MPRQ burst Alexander Kozyrev
2020-10-22 15:01   ` [dpdk-dev] [PATCH v2 0/2] net/mlx5: add vectorized mprq Raslan Darawsheh
