From: Ciara Loftus <ciara.loftus@intel.com>
To: dev@dpdk.org
Cc: Ciara Loftus <ciara.loftus@intel.com>
Subject: [RFC PATCH 10/14] net/intel: introduce infrastructure for Rx path selection
Date: Fri, 25 Jul 2025 12:49:15 +0000
Message-ID: <20250725124919.3564890-11-ciara.loftus@intel.com>
In-Reply-To: <20250725124919.3564890-1-ciara.loftus@intel.com>
The code for determining which Rx path to select during initialisation
has become complicated in many Intel drivers due to the number of
different paths and features available within each path. This commit
introduces common infrastructure that aims to simplify and generalise
the path selection logic.

The following information about each Rx burst function is stored and
used by the new common function to select the appropriate Rx path (a
usage sketch follows the list below):
- Rx Offloads
- SIMD bitwidth
- Flexible RXD usage
- Bulk alloc function
- Scattered function
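
Below is a minimal sketch of how a driver could consume this
infrastructure. The path table, the requested feature values and the
names my_rx_burst_scalar, my_rx_burst_avx2, example_rx_paths and
example_rx_select are invented purely for illustration and are not part
of this patch; a real driver derives the requested offloads, SIMD width
and feature mask from its own configuration and capability checks.

  #include <rte_ethdev.h>
  #include <rte_vect.h>
  #include "rx.h" /* the common header modified by this patch */

  /* Hypothetical Rx burst implementations, provided elsewhere in the driver. */
  uint16_t my_rx_burst_scalar(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
  uint16_t my_rx_burst_avx2(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);

  /* Example path table; index 0 doubles as the default (fallback) path. */
  static const struct ci_rx_burst_info example_rx_paths[] = {
          { .pkt_burst = my_rx_burst_scalar,
            .info = "Scalar",
            .features = {
                  .rx_offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
                  .simd_width = RTE_VECT_SIMD_DISABLED,
                  .other_features_mask = CI_RX_BURST_NO_FEATURES } },
          { .pkt_burst = my_rx_burst_avx2,
            .info = "Vector AVX2 Bulk Alloc",
            .features = {
                  .rx_offloads = 0,
                  .simd_width = RTE_VECT_SIMD_256,
                  .other_features_mask = CI_RX_BURST_FEATURE_BULK_ALLOC } },
  };

  static void
  example_rx_select(struct rte_eth_dev *dev)
  {
          /* Describe what this port needs from an Rx path... */
          struct ci_rx_burst_features req = {
                  .rx_offloads = (uint32_t)dev->data->dev_conf.rxmode.offloads,
                  .simd_width = rte_vect_get_max_simd_bitwidth(),
                  .other_features_mask = CI_RX_BURST_FEATURE_BULK_ALLOC,
          };

          /* ...and let the common helper pick the best match, falling back
           * to the scalar path at index 0 if nothing suitable is found.
           */
          int idx = ci_rx_burst_mode_select(example_rx_paths, req,
                          RTE_DIM(example_rx_paths), 0);

          dev->rx_pkt_burst = example_rx_paths[idx].pkt_burst;
  }

Because the helper only returns an index into the driver's own table,
the same table can also be reused when reporting the selected mode via
the info string, and paths that are unsupported at runtime can be
excluded by setting CI_RX_BURST_FEATURE_IS_DISABLED in their
other_features_mask.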
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
drivers/net/intel/common/rx.h | 116 ++++++++++++++++++++++++++++++++++
1 file changed, 116 insertions(+)
diff --git a/drivers/net/intel/common/rx.h b/drivers/net/intel/common/rx.h
index 70b597e8dc..6e9d81fecf 100644
--- a/drivers/net/intel/common/rx.h
+++ b/drivers/net/intel/common/rx.h
@@ -10,6 +10,7 @@
#include <unistd.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>
+#include <rte_vect.h>
#include "desc.h"
@@ -20,6 +21,12 @@
#define CI_VPMD_DESCS_PER_LOOP_WIDE 8
#define CI_VPMD_RX_REARM_THRESH 64
+#define CI_RX_BURST_NO_FEATURES 0
+#define CI_RX_BURST_FEATURE_SCATTERED RTE_BIT32(0)
+#define CI_RX_BURST_FEATURE_FLEX RTE_BIT32(1)
+#define CI_RX_BURST_FEATURE_BULK_ALLOC RTE_BIT32(2)
+#define CI_RX_BURST_FEATURE_IS_DISABLED RTE_BIT32(3)
+
struct ci_rx_queue;
struct ci_rx_entry {
@@ -125,6 +132,19 @@ struct ci_rx_queue {
};
};
+
+struct ci_rx_burst_features {
+ uint32_t rx_offloads;
+ enum rte_vect_max_simd simd_width;
+ uint32_t other_features_mask;
+};
+
+struct ci_rx_burst_info {
+ eth_rx_burst_t pkt_burst;
+ const char *info;
+ struct ci_rx_burst_features features;
+};
+
static inline uint16_t
ci_rx_reassemble_packets(struct rte_mbuf **rx_bufs, uint16_t nb_bufs, uint8_t *split_flags,
struct rte_mbuf **pkt_first_seg, struct rte_mbuf **pkt_last_seg,
@@ -222,4 +242,100 @@ ci_rxq_vec_capable(uint16_t nb_desc, uint16_t rx_free_thresh, uint64_t offloads)
return true;
}
+/**
+ * Select the best matching Rx burst mode function based on features
+ *
+ * @param infos
+ *   Array of information about the available Rx burst functions
+ * @param req_features
+ *   The requested features for the Rx burst mode
+ * @param num_paths
+ *   Number of entries in the infos array
+ * @param default_path
+ *   Index of the path to fall back on if no suitable path is found
+ *
+ * @return
+ *   The index of the packet burst function that best matches the requested features
+ */
+static inline int
+ci_rx_burst_mode_select(const struct ci_rx_burst_info *infos,
+ struct ci_rx_burst_features req_features,
+ int num_paths,
+ int default_path)
+{
+ int i, idx = -1;
+ const struct ci_rx_burst_features *info_features;
+ bool req_flex = req_features.other_features_mask & CI_RX_BURST_FEATURE_FLEX;
+ bool req_scattered = req_features.other_features_mask & CI_RX_BURST_FEATURE_SCATTERED;
+ bool req_bulk_alloc = req_features.other_features_mask & CI_RX_BURST_FEATURE_BULK_ALLOC;
+ bool info_flex, info_scattered, info_bulk_alloc;
+
+ for (i = 0; i < num_paths; i++) {
+ info_features = &infos[i].features;
+
+ /* Do not select a disabled rx burst function. */
+ if (info_features->other_features_mask & CI_RX_BURST_FEATURE_IS_DISABLED)
+ continue;
+
+ /* If requested, ensure the function uses the flexible descriptor. */
+ info_flex = info_features->other_features_mask & CI_RX_BURST_FEATURE_FLEX;
+ if (info_flex != req_flex)
+ continue;
+
+ /* If requested, ensure the function supports scattered RX. */
+ info_scattered = info_features->other_features_mask & CI_RX_BURST_FEATURE_SCATTERED;
+ if (info_scattered != req_scattered)
+ continue;
+
+ /* Do not use a bulk alloc function if not requested. However if it is the only
+ * feature requested, ensure it is supported in the selected function.
+ */
+ info_bulk_alloc =
+ info_features->other_features_mask & CI_RX_BURST_FEATURE_BULK_ALLOC;
+ if ((info_bulk_alloc && !req_bulk_alloc) ||
+ (req_features.other_features_mask ==
+ CI_RX_BURST_FEATURE_BULK_ALLOC &&
+ !info_bulk_alloc))
+ continue;
+
+ /* Ensure the function supports the requested RX offloads. */
+ if ((info_features->rx_offloads & req_features.rx_offloads) !=
+ req_features.rx_offloads)
+ continue;
+
+ /* Ensure the function's SIMD width is compatible with the requested width. */
+ if (info_features->simd_width > req_features.simd_width)
+ continue;
+
+ /* If this is the first valid path found, select it. */
+ if (idx == -1) {
+ idx = i;
+ continue;
+ }
+
+ /* At this point, at least one path has already been found that has met the
+ * requested criteria. Analyse the current path and select it if it is
+ * better than the previously selected one. i.e. if it has a larger SIMD width or
+ * if it has the same SIMD width but fewer offloads enabled.
+ */
+
+ if (info_features->simd_width > infos[idx].features.simd_width) {
+ idx = i;
+ continue;
+ }
+
+ /* Use the path with the least offloads that satisfies the requested offloads. */
+ if (info_features->simd_width == infos[idx].features.simd_width &&
+ (rte_popcount32(info_features->rx_offloads) <
+ rte_popcount32(infos[idx].features.rx_offloads)))
+ idx = i;
+ }
+
+ /* No path was found so use the default. */
+ if (idx == -1)
+ return default_path;
+
+ return idx;
+}
+
#endif /* _COMMON_INTEL_RX_H_ */
--
2.34.1