* [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards
@ 2022-02-01 8:49 Ivan Malov
2022-02-01 8:49 ` [PATCH 1/8] net/sfc: rework flow action RSS support Ivan Malov
` (8 more replies)
0 siblings, 9 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:49 UTC (permalink / raw)
To: dev
The first patch reworks flow action RSS support in general, on
all board types. Later patches add support for EF100-specific
features: the even spread mode (no indirection table) and the
ability to select indirection table size in the normal mode.
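The difference between the two EF100 spreading schemes can be modeled in a few lines. This is a standalone software sketch, not driver code: the function names are hypothetical, and the modulo arithmetic stands in for the hardware's actual spreading logic.

```c
#include <assert.h>

/* Normal mode: the indirection table yields a queue offset that is
 * added to the filter's base queue ID. */
static unsigned int
pick_queue_tbl(unsigned int hash, const unsigned int *tbl,
	       unsigned int tbl_size, unsigned int base_qid)
{
	return base_qid + tbl[hash % tbl_size];
}

/* Even spread mode: no indirection table at all; the context only
 * records how many queues to spread across. */
static unsigned int
pick_queue_even_spread(unsigned int hash, unsigned int nqueues,
		       unsigned int base_qid)
{
	return base_qid + (hash % nqueues);
}
```

In the normal mode the table contents give full control over the distribution (and later patches make its size selectable); the even spread mode trades that control for not having to program a table.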
Ivan Malov (8):
net/sfc: rework flow action RSS support
common/sfc_efx/base: query RSS queue span limit on Riverhead
net/sfc: use non-static queue span limit in flow action RSS
common/sfc_efx/base: revise name of RSS table entry count
common/sfc_efx/base: support selecting RSS table entry count
net/sfc: use adaptive table entry count in flow action RSS
common/sfc_efx/base: support the even spread RSS mode
net/sfc: use the even spread mode in flow action RSS
drivers/common/sfc_efx/base/ef10_impl.h | 11 +-
drivers/common/sfc_efx/base/ef10_nic.c | 49 ++-
drivers/common/sfc_efx/base/ef10_rx.c | 186 +++++++--
drivers/common/sfc_efx/base/efx.h | 43 +-
drivers/common/sfc_efx/base/efx_impl.h | 3 +-
drivers/common/sfc_efx/base/efx_mcdi.h | 11 +
drivers/common/sfc_efx/base/efx_rx.c | 81 +++-
drivers/common/sfc_efx/base/rhead_impl.h | 11 +-
drivers/common/sfc_efx/base/rhead_rx.c | 16 +-
drivers/common/sfc_efx/base/siena_nic.c | 5 +
drivers/common/sfc_efx/version.map | 1 +
drivers/net/sfc/meson.build | 1 +
drivers/net/sfc/sfc.c | 12 +-
drivers/net/sfc/sfc.h | 4 +-
drivers/net/sfc/sfc_ethdev.c | 8 +-
drivers/net/sfc/sfc_flow.c | 249 +++---------
drivers/net/sfc/sfc_flow.h | 19 +-
drivers/net/sfc/sfc_flow_rss.c | 474 +++++++++++++++++++++++
drivers/net/sfc/sfc_flow_rss.h | 86 ++++
19 files changed, 977 insertions(+), 293 deletions(-)
create mode 100644 drivers/net/sfc/sfc_flow_rss.c
create mode 100644 drivers/net/sfc/sfc_flow_rss.h
--
2.30.2
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 1/8] net/sfc: rework flow action RSS support
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
@ 2022-02-01 8:49 ` Ivan Malov
2022-02-01 8:49 ` [PATCH 2/8] common/sfc_efx/base: query RSS queue span limit on Riverhead Ivan Malov
` (7 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:49 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Andy Moreton
Currently, the driver always allocates a dedicated NIC RSS context
for every separate flow rule with action RSS, which is not optimal.
First, multiple rules with identical RSS specifications can share a
single context, since hardware filters operate this way. Second,
entries in a context's indirection table are not absolute queue IDs
but offsets relative to a filter's base queue ID.
This way, for example, queue arrays "0, 1, 2" and "3, 4, 5" in two
otherwise identical RSS specifications allow the driver to use the
same context since they both yield the same table of queue offsets.
Rework flow action RSS support in order to use these optimisations.
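The offset trick above can be sketched in isolation. This is a standalone model, not driver code: TBL_SIZE and the function names are illustrative, not driver symbols.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define TBL_SIZE 128u	/* stand-in for the real table size */

/* Convert an absolute queue array into an offset-based indirection
 * table: each entry is relative to the smallest queue ID, which
 * becomes the filter's base (dmaq) queue. */
static void
fill_offset_tbl(const unsigned int *queues, size_t nb_queues,
		unsigned int *tbl)
{
	unsigned int qid_min = queues[0];
	size_t i;

	for (i = 1; i < nb_queues; ++i) {
		if (queues[i] < qid_min)
			qid_min = queues[i];
	}

	for (i = 0; i < TBL_SIZE; ++i)
		tbl[i] = queues[i % nb_queues] - qid_min;
}

static bool
tables_match(const unsigned int *a, const unsigned int *b)
{
	size_t i;

	for (i = 0; i < TBL_SIZE; ++i) {
		if (a[i] != b[i])
			return false;
	}

	return true;
}
```

Queue arrays "0, 1, 2" and "3, 4, 5" both reduce to the offset pattern 0, 1, 2, so one context can serve both rules; "3, 5, 4" yields 0, 2, 1 and needs its own.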
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/meson.build | 1 +
drivers/net/sfc/sfc.c | 12 +-
drivers/net/sfc/sfc.h | 4 +-
drivers/net/sfc/sfc_ethdev.c | 8 +-
drivers/net/sfc/sfc_flow.c | 249 ++++----------------
drivers/net/sfc/sfc_flow.h | 19 +-
drivers/net/sfc/sfc_flow_rss.c | 409 +++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_flow_rss.h | 81 +++++++
8 files changed, 562 insertions(+), 221 deletions(-)
create mode 100644 drivers/net/sfc/sfc_flow_rss.c
create mode 100644 drivers/net/sfc/sfc_flow_rss.h
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 46d94184b8..547cb8db8c 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -90,6 +90,7 @@ sources = files(
'sfc_mae.c',
'sfc_mae_counter.c',
'sfc_flow.c',
+ 'sfc_flow_rss.c',
'sfc_flow_tunnel.c',
'sfc_dp.c',
'sfc_ef10_rx.c',
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 2cead4e045..51726d229b 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -848,7 +848,9 @@ sfc_rss_attach(struct sfc_adapter *sa)
efx_intr_fini(sa->nic);
rte_memcpy(rss->key, default_rss_key, sizeof(rss->key));
- rss->dummy_rss_context = EFX_RSS_CONTEXT_DEFAULT;
+ memset(&rss->dummy_ctx, 0, sizeof(rss->dummy_ctx));
+ rss->dummy_ctx.conf.qid_span = 1;
+ rss->dummy_ctx.dummy = true;
return 0;
@@ -970,6 +972,10 @@ sfc_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_rss_attach;
+ rc = sfc_flow_rss_attach(sa);
+ if (rc != 0)
+ goto fail_flow_rss_attach;
+
rc = sfc_filter_attach(sa);
if (rc != 0)
goto fail_filter_attach;
@@ -1033,6 +1039,9 @@ sfc_attach(struct sfc_adapter *sa)
sfc_filter_detach(sa);
fail_filter_attach:
+ sfc_flow_rss_detach(sa);
+
+fail_flow_rss_attach:
sfc_rss_detach(sa);
fail_rss_attach:
@@ -1087,6 +1096,7 @@ sfc_detach(struct sfc_adapter *sa)
sfc_mae_detach(sa);
sfc_mae_counter_rxq_detach(sa);
sfc_filter_detach(sa);
+ sfc_flow_rss_detach(sa);
sfc_rss_detach(sa);
sfc_port_detach(sa);
sfc_ev_detach(sa);
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 3337cb57e3..c075c01883 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -27,6 +27,7 @@
#include "sfc_debug.h"
#include "sfc_log.h"
#include "sfc_filter.h"
+#include "sfc_flow_rss.h"
#include "sfc_flow_tunnel.h"
#include "sfc_sriov.h"
#include "sfc_mae.h"
@@ -118,7 +119,7 @@ struct sfc_rss {
unsigned int tbl[EFX_RSS_TBL_SIZE];
uint8_t key[EFX_RSS_KEY_SIZE];
- uint32_t dummy_rss_context;
+ struct sfc_flow_rss_ctx dummy_ctx;
};
/* Adapter private data shared by primary and secondary processes */
@@ -238,6 +239,7 @@ struct sfc_adapter {
struct sfc_intr intr;
struct sfc_port port;
struct sfc_sw_stats sw_stats;
+ struct sfc_flow_rss flow_rss;
/* Registry of tunnel offload contexts */
struct sfc_flow_tunnel flow_tunnels[SFC_FT_MAX_NTUNNELS];
struct sfc_filter filter;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index d4210b63dd..abf7b8d287 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1674,15 +1674,13 @@ sfc_dev_rss_hash_update(struct rte_eth_dev *dev,
struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
unsigned int efx_hash_types;
- uint32_t contexts[] = {EFX_RSS_CONTEXT_DEFAULT, rss->dummy_rss_context};
unsigned int n_contexts;
unsigned int mode_i = 0;
unsigned int key_i = 0;
+ uint32_t contexts[2];
unsigned int i = 0;
int rc = 0;
- n_contexts = rss->dummy_rss_context == EFX_RSS_CONTEXT_DEFAULT ? 1 : 2;
-
if (sfc_sa2shared(sa)->isolated)
return -ENOTSUP;
@@ -1709,6 +1707,10 @@ sfc_dev_rss_hash_update(struct rte_eth_dev *dev,
if (rc != 0)
goto fail_rx_hf_rte_to_efx;
+ contexts[0] = EFX_RSS_CONTEXT_DEFAULT;
+ contexts[1] = rss->dummy_ctx.nic_handle;
+ n_contexts = (rss->dummy_ctx.nic_handle_refcnt == 0) ? 1 : 2;
+
for (mode_i = 0; mode_i < n_contexts; mode_i++) {
rc = efx_rx_scale_mode_set(sa->nic, contexts[mode_i],
rss->hash_alg, efx_hash_types,
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 509fde4a86..bbb40a3b38 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -22,6 +22,7 @@
#include "sfc_rx.h"
#include "sfc_filter.h"
#include "sfc_flow.h"
+#include "sfc_flow_rss.h"
#include "sfc_flow_tunnel.h"
#include "sfc_log.h"
#include "sfc_dp_rx.h"
@@ -41,11 +42,12 @@ static sfc_flow_parse_cb_t sfc_flow_parse_rte_to_filter;
static sfc_flow_parse_cb_t sfc_flow_parse_rte_to_mae;
static sfc_flow_insert_cb_t sfc_flow_filter_insert;
static sfc_flow_remove_cb_t sfc_flow_filter_remove;
+static sfc_flow_cleanup_cb_t sfc_flow_cleanup;
static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
.parse = sfc_flow_parse_rte_to_filter,
.verify = NULL,
- .cleanup = NULL,
+ .cleanup = sfc_flow_cleanup,
.insert = sfc_flow_filter_insert,
.remove = sfc_flow_filter_remove,
.query = NULL,
@@ -1429,8 +1431,14 @@ sfc_flow_parse_queue(struct sfc_adapter *sa,
spec_filter->template.efs_dmaq_id = (uint16_t)rxq->hw_index;
rxq_info = &sfc_sa2shared(sa)->rxq_info[queue->index];
- spec_filter->rss_hash_required = !!(rxq_info->rxq_flags &
- SFC_RXQ_FLAG_RSS_HASH);
+
+ if ((rxq_info->rxq_flags & SFC_RXQ_FLAG_RSS_HASH) != 0) {
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_rss *ethdev_rss = &sas->rss;
+
+ spec_filter->template.efs_flags |= EFX_FILTER_FLAG_RX_RSS;
+ spec_filter->rss_ctx = &ethdev_rss->dummy_ctx;
+ }
return 0;
}
@@ -1440,107 +1448,30 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
const struct rte_flow_action_rss *action_rss,
struct rte_flow *flow)
{
- struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
- struct sfc_rss *rss = &sas->rss;
- sfc_ethdev_qid_t ethdev_qid;
+ struct sfc_flow_spec_filter *spec_filter = &flow->spec.filter;
+ struct sfc_flow_rss_conf conf;
+ uint16_t sw_qid_min;
struct sfc_rxq *rxq;
- unsigned int rxq_hw_index_min;
- unsigned int rxq_hw_index_max;
- efx_rx_hash_type_t efx_hash_types;
- const uint8_t *rss_key;
- struct sfc_flow_spec *spec = &flow->spec;
- struct sfc_flow_spec_filter *spec_filter = &spec->filter;
- struct sfc_flow_rss *sfc_rss_conf = &spec_filter->rss_conf;
- unsigned int i;
-
- if (action_rss->queue_num == 0)
- return -EINVAL;
-
- ethdev_qid = sfc_sa2shared(sa)->ethdev_rxq_count - 1;
- rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
- rxq_hw_index_min = rxq->hw_index;
- rxq_hw_index_max = 0;
-
- for (i = 0; i < action_rss->queue_num; ++i) {
- ethdev_qid = action_rss->queue[i];
-
- if ((unsigned int)ethdev_qid >=
- sfc_sa2shared(sa)->ethdev_rxq_count)
- return -EINVAL;
-
- rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
-
- if (rxq->hw_index < rxq_hw_index_min)
- rxq_hw_index_min = rxq->hw_index;
-
- if (rxq->hw_index > rxq_hw_index_max)
- rxq_hw_index_max = rxq->hw_index;
- }
+ int rc;
- if (rxq_hw_index_max - rxq_hw_index_min + 1 > EFX_MAXRSS)
- return -EINVAL;
+ spec_filter->template.efs_flags |= EFX_FILTER_FLAG_RX_RSS;
- switch (action_rss->func) {
- case RTE_ETH_HASH_FUNCTION_DEFAULT:
- case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
- break;
- default:
- return -EINVAL;
- }
+ rc = sfc_flow_rss_parse_conf(sa, action_rss, &conf, &sw_qid_min);
+ if (rc != 0)
+ return -rc;
- if (action_rss->level)
- return -EINVAL;
+ rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, sw_qid_min);
+ spec_filter->template.efs_dmaq_id = rxq->hw_index;
- /*
- * Dummy RSS action with only one queue and no specific settings
- * for hash types and key does not require dedicated RSS context
- * and may be simplified to single queue action.
- */
- if (action_rss->queue_num == 1 && action_rss->types == 0 &&
- action_rss->key_len == 0) {
- spec_filter->template.efs_dmaq_id = rxq_hw_index_min;
+ spec_filter->rss_ctx = sfc_flow_rss_ctx_reuse(sa, &conf, sw_qid_min,
+ action_rss->queue);
+ if (spec_filter->rss_ctx != NULL)
return 0;
- }
-
- if (action_rss->types) {
- int rc;
-
- rc = sfc_rx_hf_rte_to_efx(sa, action_rss->types,
- &efx_hash_types);
- if (rc != 0)
- return -rc;
- } else {
- unsigned int i;
-
- efx_hash_types = 0;
- for (i = 0; i < rss->hf_map_nb_entries; ++i)
- efx_hash_types |= rss->hf_map[i].efx;
- }
-
- if (action_rss->key_len) {
- if (action_rss->key_len != sizeof(rss->key))
- return -EINVAL;
-
- rss_key = action_rss->key;
- } else {
- rss_key = rss->key;
- }
-
- spec_filter->rss = B_TRUE;
-
- sfc_rss_conf->rxq_hw_index_min = rxq_hw_index_min;
- sfc_rss_conf->rxq_hw_index_max = rxq_hw_index_max;
- sfc_rss_conf->rss_hash_types = efx_hash_types;
- rte_memcpy(sfc_rss_conf->rss_key, rss_key, sizeof(rss->key));
- for (i = 0; i < RTE_DIM(sfc_rss_conf->rss_tbl); ++i) {
- unsigned int nb_queues = action_rss->queue_num;
- struct sfc_rxq *rxq;
-
- ethdev_qid = action_rss->queue[i % nb_queues];
- rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
- sfc_rss_conf->rss_tbl[i] = rxq->hw_index - rxq_hw_index_min;
- }
+ rc = sfc_flow_rss_ctx_add(sa, &conf, sw_qid_min, action_rss->queue,
+ &spec_filter->rss_ctx);
+ if (rc != 0)
+ return -rc;
return 0;
}
@@ -1597,61 +1528,17 @@ static int
sfc_flow_filter_insert(struct sfc_adapter *sa,
struct rte_flow *flow)
{
- struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
- struct sfc_rss *rss = &sas->rss;
struct sfc_flow_spec_filter *spec_filter = &flow->spec.filter;
- struct sfc_flow_rss *flow_rss = &spec_filter->rss_conf;
- uint32_t efs_rss_context = EFX_RSS_CONTEXT_DEFAULT;
- boolean_t create_context;
- unsigned int i;
+ struct sfc_flow_rss_ctx *rss_ctx = spec_filter->rss_ctx;
int rc = 0;
- create_context = spec_filter->rss || (spec_filter->rss_hash_required &&
- rss->dummy_rss_context == EFX_RSS_CONTEXT_DEFAULT);
-
- if (create_context) {
- unsigned int rss_spread;
- unsigned int rss_hash_types;
- uint8_t *rss_key;
-
- if (spec_filter->rss) {
- rss_spread = flow_rss->rxq_hw_index_max -
- flow_rss->rxq_hw_index_min + 1;
- rss_hash_types = flow_rss->rss_hash_types;
- rss_key = flow_rss->rss_key;
- } else {
- /*
- * Initialize dummy RSS context parameters to have
- * valid RSS hash. Use default RSS hash function and
- * key.
- */
- rss_spread = 1;
- rss_hash_types = rss->hash_types;
- rss_key = rss->key;
- }
-
- rc = efx_rx_scale_context_alloc(sa->nic,
- EFX_RX_SCALE_EXCLUSIVE,
- rss_spread,
- &efs_rss_context);
- if (rc != 0)
- goto fail_scale_context_alloc;
-
- rc = efx_rx_scale_mode_set(sa->nic, efs_rss_context,
- rss->hash_alg,
- rss_hash_types, B_TRUE);
- if (rc != 0)
- goto fail_scale_mode_set;
+ rc = sfc_flow_rss_ctx_program(sa, rss_ctx);
+ if (rc != 0)
+ goto fail_rss_ctx_program;
- rc = efx_rx_scale_key_set(sa->nic, efs_rss_context,
- rss_key, sizeof(rss->key));
- if (rc != 0)
- goto fail_scale_key_set;
- } else {
- efs_rss_context = rss->dummy_rss_context;
- }
+ if (rss_ctx != NULL) {
+ unsigned int i;
- if (spec_filter->rss || spec_filter->rss_hash_required) {
/*
* At this point, fully elaborated filter specifications
* have been produced from the template. To make sure that
@@ -1661,10 +1548,7 @@ sfc_flow_filter_insert(struct sfc_adapter *sa,
for (i = 0; i < spec_filter->count; i++) {
efx_filter_spec_t *spec = &spec_filter->filters[i];
- spec->efs_rss_context = efs_rss_context;
- spec->efs_flags |= EFX_FILTER_FLAG_RX_RSS;
- if (spec_filter->rss)
- spec->efs_dmaq_id = flow_rss->rxq_hw_index_min;
+ spec->efs_rss_context = rss_ctx->nic_handle;
}
}
@@ -1672,42 +1556,12 @@ sfc_flow_filter_insert(struct sfc_adapter *sa,
if (rc != 0)
goto fail_filter_insert;
- if (create_context) {
- unsigned int dummy_tbl[RTE_DIM(flow_rss->rss_tbl)] = {0};
- unsigned int *tbl;
-
- tbl = spec_filter->rss ? flow_rss->rss_tbl : dummy_tbl;
-
- /*
- * Scale table is set after filter insertion because
- * the table entries are relative to the base RxQ ID
- * and the latter is submitted to the HW by means of
- * inserting a filter, so by the time of the request
- * the HW knows all the information needed to verify
- * the table entries, and the operation will succeed
- */
- rc = efx_rx_scale_tbl_set(sa->nic, efs_rss_context,
- tbl, RTE_DIM(flow_rss->rss_tbl));
- if (rc != 0)
- goto fail_scale_tbl_set;
-
- /* Remember created dummy RSS context */
- if (!spec_filter->rss)
- rss->dummy_rss_context = efs_rss_context;
- }
-
return 0;
-fail_scale_tbl_set:
- sfc_flow_spec_remove(sa, &flow->spec);
-
fail_filter_insert:
-fail_scale_key_set:
-fail_scale_mode_set:
- if (create_context)
- efx_rx_scale_context_free(sa->nic, efs_rss_context);
+ sfc_flow_rss_ctx_terminate(sa, rss_ctx);
-fail_scale_context_alloc:
+fail_rss_ctx_program:
return rc;
}
@@ -1722,18 +1576,9 @@ sfc_flow_filter_remove(struct sfc_adapter *sa,
if (rc != 0)
return rc;
- if (spec_filter->rss) {
- /*
- * All specifications for a given flow rule have the same RSS
- * context, so that RSS context value is taken from the first
- * filter specification
- */
- efx_filter_spec_t *spec = &spec_filter->filters[0];
-
- rc = efx_rx_scale_context_free(sa->nic, spec->efs_rss_context);
- }
+ sfc_flow_rss_ctx_terminate(sa, spec_filter->rss_ctx);
- return rc;
+ return 0;
}
static int
@@ -2985,8 +2830,6 @@ sfc_flow_fini(struct sfc_adapter *sa)
void
sfc_flow_stop(struct sfc_adapter *sa)
{
- struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
- struct sfc_rss *rss = &sas->rss;
struct rte_flow *flow;
SFC_ASSERT(sfc_adapter_is_locked(sa));
@@ -2994,11 +2837,6 @@ sfc_flow_stop(struct sfc_adapter *sa)
TAILQ_FOREACH(flow, &sa->flow_list, entries)
sfc_flow_remove(sa, flow, NULL);
- if (rss->dummy_rss_context != EFX_RSS_CONTEXT_DEFAULT) {
- efx_rx_scale_context_free(sa->nic, rss->dummy_rss_context);
- rss->dummy_rss_context = EFX_RSS_CONTEXT_DEFAULT;
- }
-
/*
* MAE counter service is not stopped on flow rule remove to avoid
* extra work. Make sure that it is stopped here.
@@ -3029,3 +2867,12 @@ sfc_flow_start(struct sfc_adapter *sa)
fail_bad_flow:
return rc;
}
+
+static void
+sfc_flow_cleanup(struct sfc_adapter *sa, struct rte_flow *flow)
+{
+ if (flow == NULL)
+ return;
+
+ sfc_flow_rss_ctx_del(sa, flow->spec.filter.rss_ctx);
+}
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index efdecc97ab..545e2267d4 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -15,6 +15,8 @@
#include "efx.h"
+#include "sfc_flow_rss.h"
+
#ifdef __cplusplus
extern "C" {
#endif
@@ -30,15 +32,6 @@ extern "C" {
#define SFC_BUILD_SET_OVERFLOW(_action, _set) \
RTE_BUILD_BUG_ON((_action) >= sizeof(_set) * CHAR_BIT)
-/* RSS configuration storage */
-struct sfc_flow_rss {
- unsigned int rxq_hw_index_min;
- unsigned int rxq_hw_index_max;
- unsigned int rss_hash_types;
- uint8_t rss_key[EFX_RSS_KEY_SIZE];
- unsigned int rss_tbl[EFX_RSS_TBL_SIZE];
-};
-
/* Flow engines supported by the implementation */
enum sfc_flow_spec_type {
SFC_FLOW_SPEC_FILTER = 0,
@@ -55,12 +48,8 @@ struct sfc_flow_spec_filter {
efx_filter_spec_t filters[SF_FLOW_SPEC_NB_FILTERS_MAX];
/* number of complete specifications */
unsigned int count;
- /* RSS toggle */
- boolean_t rss;
- /* RSS hash toggle */
- boolean_t rss_hash_required;
- /* RSS configuration */
- struct sfc_flow_rss rss_conf;
+ /* RSS context (or NULL) */
+ struct sfc_flow_rss_ctx *rss_ctx;
};
/* Indicates the role of a given flow in tunnel offload */
diff --git a/drivers/net/sfc/sfc_flow_rss.c b/drivers/net/sfc/sfc_flow_rss.c
new file mode 100644
index 0000000000..17876f11c1
--- /dev/null
+++ b/drivers/net/sfc/sfc_flow_rss.c
@@ -0,0 +1,409 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2022 Xilinx, Inc.
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_flow.h>
+#include <rte_tailq.h>
+
+#include "efx.h"
+
+#include "sfc.h"
+#include "sfc_debug.h"
+#include "sfc_flow_rss.h"
+#include "sfc_log.h"
+#include "sfc_rx.h"
+
+int
+sfc_flow_rss_attach(struct sfc_adapter *sa)
+{
+ struct sfc_flow_rss *flow_rss = &sa->flow_rss;
+
+ sfc_log_init(sa, "entry");
+
+ TAILQ_INIT(&flow_rss->ctx_list);
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+}
+
+void
+sfc_flow_rss_detach(struct sfc_adapter *sa)
+{
+ sfc_log_init(sa, "entry");
+
+ sfc_log_init(sa, "done");
+}
+
+int
+sfc_flow_rss_parse_conf(struct sfc_adapter *sa,
+ const struct rte_flow_action_rss *in,
+ struct sfc_flow_rss_conf *out, uint16_t *sw_qid_minp)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ const struct sfc_rss *ethdev_rss = &sas->rss;
+ uint16_t sw_qid_min;
+ uint16_t sw_qid_max;
+ const uint8_t *key;
+ unsigned int i;
+ int rc;
+
+ if (in->level) {
+ /*
+ * The caller demands that RSS hash be computed
+ * within the given encapsulation frame / level.
+ * Per flow control for that is not implemented.
+ */
+ sfc_err(sa, "flow-rss: parse: 'level' must be 0");
+ return EINVAL;
+ }
+
+ if (in->types != 0) {
+ rc = sfc_rx_hf_rte_to_efx(sa, in->types,
+ &out->efx_hash_types);
+ if (rc != 0) {
+ sfc_err(sa, "flow-rss: parse: failed to process 'types'");
+ return rc;
+ }
+ } else {
+ sfc_dbg(sa, "flow-rss: parse: 'types' is 0; proceeding with ethdev setting");
+ out->efx_hash_types = ethdev_rss->hash_types;
+ }
+
+ if (in->key_len != 0) {
+ if (in->key_len != sizeof(out->key)) {
+ sfc_err(sa, "flow-rss: parse: 'key_len' must be either %zu or 0",
+ sizeof(out->key));
+ return EINVAL;
+ }
+
+ if (in->key == NULL) {
+ sfc_err(sa, "flow-rss: parse: 'key' is NULL");
+ return EINVAL;
+ }
+
+ key = in->key;
+ } else {
+ sfc_dbg(sa, "flow-rss: parse: 'key_len' is 0; proceeding with ethdev key");
+ key = ethdev_rss->key;
+ }
+
+ rte_memcpy(out->key, key, sizeof(out->key));
+
+ switch (in->func) {
+ case RTE_ETH_HASH_FUNCTION_DEFAULT:
+ /*
+ * DEFAULT means that conformance to a specific
+ * hash algorithm is a don't care to the caller.
+ * The driver can pick the one it deems optimal.
+ */
+ break;
+ case RTE_ETH_HASH_FUNCTION_TOEPLITZ:
+ if (ethdev_rss->hash_alg != EFX_RX_HASHALG_TOEPLITZ) {
+ sfc_err(sa, "flow-rss: parse: 'func' TOEPLITZ is unavailable; use DEFAULT");
+ return EINVAL;
+ }
+ break;
+ default:
+ sfc_err(sa, "flow-rss: parse: 'func' #%d is unsupported", in->func);
+ return EINVAL;
+ }
+
+ if (in->queue_num == 0) {
+ sfc_err(sa, "flow-rss: parse: 'queue_num' is 0; MIN=1");
+ return EINVAL;
+ }
+
+ if (in->queue_num > EFX_RSS_TBL_SIZE) {
+ sfc_err(sa, "flow-rss: parse: 'queue_num' is too large; MAX=%u",
+ EFX_RSS_TBL_SIZE);
+ return EINVAL;
+ }
+
+ if (in->queue == NULL) {
+ sfc_err(sa, "flow-rss: parse: 'queue' is NULL");
+ return EINVAL;
+ }
+
+ sw_qid_min = sas->ethdev_rxq_count - 1;
+ sw_qid_max = 0;
+
+ out->nb_qid_offsets = 0;
+
+ for (i = 0; i < in->queue_num; ++i) {
+ uint16_t sw_qid = in->queue[i];
+
+ if (sw_qid >= sas->ethdev_rxq_count) {
+ sfc_err(sa, "flow-rss: parse: queue=%u does not exist",
+ sw_qid);
+ return EINVAL;
+ }
+
+ if (sw_qid < sw_qid_min)
+ sw_qid_min = sw_qid;
+
+ if (sw_qid > sw_qid_max)
+ sw_qid_max = sw_qid;
+
+ if (sw_qid != in->queue[0] + i)
+ out->nb_qid_offsets = in->queue_num;
+ }
+
+ out->qid_span = sw_qid_max - sw_qid_min + 1;
+
+ if (out->qid_span > EFX_MAXRSS) {
+ sfc_err(sa, "flow-rss: parse: queue ID span %u is too large; MAX=%u",
+ out->qid_span, EFX_MAXRSS);
+ return EINVAL;
+ }
+
+ if (sw_qid_minp != NULL)
+ *sw_qid_minp = sw_qid_min;
+
+ return 0;
+}
+
+struct sfc_flow_rss_ctx *
+sfc_flow_rss_ctx_reuse(struct sfc_adapter *sa,
+ const struct sfc_flow_rss_conf *conf,
+ uint16_t sw_qid_min, const uint16_t *sw_qids)
+{
+ struct sfc_flow_rss *flow_rss = &sa->flow_rss;
+ struct sfc_flow_rss_ctx *ctx;
+
+ SFC_ASSERT(sfc_adapter_is_locked(sa));
+
+ TAILQ_FOREACH(ctx, &flow_rss->ctx_list, entries) {
+ if (memcmp(&ctx->conf, conf, sizeof(*conf)) != 0)
+ continue;
+
+ if (conf->nb_qid_offsets != 0) {
+ bool match_confirmed = true;
+ unsigned int i;
+
+ for (i = 0; i < conf->nb_qid_offsets; ++i) {
+ uint16_t qid_offset = sw_qids[i] - sw_qid_min;
+
+ if (ctx->qid_offsets[i] != qid_offset) {
+ match_confirmed = false;
+ break;
+ }
+ }
+
+ if (!match_confirmed)
+ continue;
+ }
+
+ sfc_dbg(sa, "flow-rss: reusing ctx=%p", ctx);
+ ++(ctx->refcnt);
+ return ctx;
+ }
+
+ return NULL;
+}
+
+int
+sfc_flow_rss_ctx_add(struct sfc_adapter *sa,
+ const struct sfc_flow_rss_conf *conf, uint16_t sw_qid_min,
+ const uint16_t *sw_qids, struct sfc_flow_rss_ctx **ctxp)
+{
+ struct sfc_flow_rss *flow_rss = &sa->flow_rss;
+ struct sfc_flow_rss_ctx *ctx;
+
+ SFC_ASSERT(sfc_adapter_is_locked(sa));
+
+ ctx = rte_zmalloc("sfc_flow_rss_ctx", sizeof(*ctx), 0);
+ if (ctx == NULL)
+ return ENOMEM;
+
+ if (conf->nb_qid_offsets != 0) {
+ unsigned int i;
+
+ ctx->qid_offsets = rte_calloc("sfc_flow_rss_ctx_qid_offsets",
+ conf->nb_qid_offsets,
+ sizeof(*ctx->qid_offsets), 0);
+ if (ctx->qid_offsets == NULL) {
+ rte_free(ctx);
+ return ENOMEM;
+ }
+
+ for (i = 0; i < conf->nb_qid_offsets; ++i)
+ ctx->qid_offsets[i] = sw_qids[i] - sw_qid_min;
+ }
+
+ ctx->conf = *conf;
+ ctx->refcnt = 1;
+
+ TAILQ_INSERT_TAIL(&flow_rss->ctx_list, ctx, entries);
+
+ *ctxp = ctx;
+
+ sfc_dbg(sa, "flow-rss: added ctx=%p", ctx);
+
+ return 0;
+}
+
+void
+sfc_flow_rss_ctx_del(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
+{
+ struct sfc_flow_rss *flow_rss = &sa->flow_rss;
+
+ if (ctx == NULL)
+ return;
+
+ SFC_ASSERT(sfc_adapter_is_locked(sa));
+
+ if (ctx->dummy)
+ return;
+
+ SFC_ASSERT(ctx->refcnt != 0);
+
+ --(ctx->refcnt);
+
+ if (ctx->refcnt != 0)
+ return;
+
+ if (ctx->nic_handle_refcnt != 0) {
+ sfc_err(sa, "flow-rss: deleting ctx=%p abandons its NIC resource: handle=0x%08x, refcnt=%u",
+ ctx, ctx->nic_handle, ctx->nic_handle_refcnt);
+ }
+
+ TAILQ_REMOVE(&flow_rss->ctx_list, ctx, entries);
+ rte_free(ctx->qid_offsets);
+ rte_free(ctx);
+
+ sfc_dbg(sa, "flow-rss: deleted ctx=%p", ctx);
+}
+
+static int
+sfc_flow_rss_ctx_program_tbl(struct sfc_adapter *sa,
+ const struct sfc_flow_rss_ctx *ctx)
+{
+ const struct sfc_flow_rss_conf *conf = &ctx->conf;
+ unsigned int *tbl = sa->flow_rss.bounce_tbl;
+ unsigned int i;
+
+ SFC_ASSERT(sfc_adapter_is_locked(sa));
+
+ if (conf->nb_qid_offsets != 0) {
+ SFC_ASSERT(ctx->qid_offsets != NULL);
+
+ for (i = 0; i < EFX_RSS_TBL_SIZE; ++i)
+ tbl[i] = ctx->qid_offsets[i % conf->nb_qid_offsets];
+ } else {
+ for (i = 0; i < EFX_RSS_TBL_SIZE; ++i)
+ tbl[i] = i % conf->qid_span;
+ }
+
+ return efx_rx_scale_tbl_set(sa->nic, ctx->nic_handle,
+ tbl, EFX_RSS_TBL_SIZE);
+}
+
+int
+sfc_flow_rss_ctx_program(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
+{
+ efx_rx_scale_context_type_t ctx_type = EFX_RX_SCALE_EXCLUSIVE;
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_rss *ethdev_rss = &sas->rss;
+ struct sfc_flow_rss_conf *conf;
+ boolean_t allocation_done = B_FALSE;
+ int rc;
+
+ if (ctx == NULL)
+ return 0;
+
+ conf = &ctx->conf;
+
+ SFC_ASSERT(sfc_adapter_is_locked(sa));
+
+ if (ctx->nic_handle_refcnt == 0) {
+ rc = efx_rx_scale_context_alloc(sa->nic, ctx_type,
+ conf->qid_span,
+ &ctx->nic_handle);
+ if (rc != 0) {
+ sfc_err(sa, "flow-rss: failed to allocate NIC resource for ctx=%p: type=%d, qid_span=%u, rc=%d",
+ ctx, ctx_type, conf->qid_span, rc);
+ goto fail;
+ }
+
+ sfc_dbg(sa, "flow-rss: allocated NIC resource for ctx=%p: type=%d, qid_span=%u; handle=0x%08x",
+ ctx, ctx_type, conf->qid_span,
+ ctx->nic_handle);
+
+ ++(ctx->nic_handle_refcnt);
+ allocation_done = B_TRUE;
+ } else {
+ ++(ctx->nic_handle_refcnt);
+ return 0;
+ }
+
+ rc = efx_rx_scale_mode_set(sa->nic, ctx->nic_handle,
+ ethdev_rss->hash_alg,
+ (ctx->dummy) ? ethdev_rss->hash_types :
+ conf->efx_hash_types,
+ B_TRUE);
+ if (rc != 0) {
+ sfc_err(sa, "flow-rss: failed to configure hash for ctx=%p: efx_hash_alg=%d, efx_hash_types=0x%08x; rc=%d",
+ ctx, ethdev_rss->hash_alg,
+ (ctx->dummy) ? ethdev_rss->hash_types :
+ conf->efx_hash_types,
+ rc);
+ goto fail;
+ }
+
+ rc = efx_rx_scale_key_set(sa->nic, ctx->nic_handle,
+ (ctx->dummy) ? ethdev_rss->key : conf->key,
+ RTE_DIM(conf->key));
+ if (rc != 0) {
+ sfc_err(sa, "flow-rss: failed to set key for ctx=%p; rc=%d",
+ ctx, rc);
+ goto fail;
+ }
+
+ rc = sfc_flow_rss_ctx_program_tbl(sa, ctx);
+ if (rc != 0) {
+ sfc_err(sa, "flow-rss: failed to program table for ctx=%p; rc=%d",
+ ctx, rc);
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ if (allocation_done)
+ sfc_flow_rss_ctx_terminate(sa, ctx);
+
+ return rc;
+}
+
+void
+sfc_flow_rss_ctx_terminate(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
+{
+ if (ctx == NULL)
+ return;
+
+ SFC_ASSERT(sfc_adapter_is_locked(sa));
+
+ SFC_ASSERT(ctx->nic_handle_refcnt != 0);
+ --(ctx->nic_handle_refcnt);
+
+ if (ctx->nic_handle_refcnt == 0) {
+ int rc;
+
+ rc = efx_rx_scale_context_free(sa->nic, ctx->nic_handle);
+ if (rc != 0) {
+ sfc_err(sa, "flow-rss: failed to release NIC resource for ctx=%p: handle=0x%08x; rc=%d",
+ ctx, ctx->nic_handle, rc);
+
+ sfc_warn(sa, "flow-rss: proceeding despite the prior error");
+ }
+
+ sfc_dbg(sa, "flow-rss: released NIC resource for ctx=%p; rc=%d",
+ ctx, rc);
+ }
+}
diff --git a/drivers/net/sfc/sfc_flow_rss.h b/drivers/net/sfc/sfc_flow_rss.h
new file mode 100644
index 0000000000..cb2355ab67
--- /dev/null
+++ b/drivers/net/sfc/sfc_flow_rss.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2022 Xilinx, Inc.
+ */
+
+#ifndef _SFC_FLOW_RSS_H
+#define _SFC_FLOW_RSS_H
+
+#include <stdbool.h>
+#include <stdint.h>
+
+#include <rte_flow.h>
+#include <rte_tailq.h>
+
+#include "efx.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct sfc_flow_rss_conf {
+ uint8_t key[EFX_RSS_KEY_SIZE];
+ efx_rx_hash_type_t efx_hash_types;
+ unsigned int nb_qid_offsets;
+ unsigned int qid_span;
+};
+
+struct sfc_flow_rss_ctx {
+ TAILQ_ENTRY(sfc_flow_rss_ctx) entries;
+
+ unsigned int refcnt;
+ bool dummy;
+
+ unsigned int nic_handle_refcnt;
+ uint32_t nic_handle;
+
+ struct sfc_flow_rss_conf conf;
+
+ uint16_t *qid_offsets;
+};
+
+TAILQ_HEAD(sfc_flow_rss_ctx_list, sfc_flow_rss_ctx);
+
+struct sfc_flow_rss {
+ unsigned int bounce_tbl[EFX_RSS_TBL_SIZE];
+
+ struct sfc_flow_rss_ctx_list ctx_list;
+};
+
+struct sfc_adapter;
+
+int sfc_flow_rss_attach(struct sfc_adapter *sa);
+
+void sfc_flow_rss_detach(struct sfc_adapter *sa);
+
+int sfc_flow_rss_parse_conf(struct sfc_adapter *sa,
+ const struct rte_flow_action_rss *in,
+ struct sfc_flow_rss_conf *out,
+ uint16_t *sw_qid_minp);
+
+struct sfc_flow_rss_ctx *sfc_flow_rss_ctx_reuse(struct sfc_adapter *sa,
+ const struct sfc_flow_rss_conf *conf,
+ uint16_t sw_qid_min, const uint16_t *sw_qids);
+
+int sfc_flow_rss_ctx_add(struct sfc_adapter *sa,
+ const struct sfc_flow_rss_conf *conf,
+ uint16_t sw_qid_min, const uint16_t *sw_qids,
+ struct sfc_flow_rss_ctx **ctxp);
+
+void sfc_flow_rss_ctx_del(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx);
+
+int sfc_flow_rss_ctx_program(struct sfc_adapter *sa,
+ struct sfc_flow_rss_ctx *ctx);
+
+void sfc_flow_rss_ctx_terminate(struct sfc_adapter *sa,
+ struct sfc_flow_rss_ctx *ctx);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_FLOW_RSS_H */
--
2.30.2
* [PATCH 2/8] common/sfc_efx/base: query RSS queue span limit on Riverhead
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
2022-02-01 8:49 ` [PATCH 1/8] net/sfc: rework flow action RSS support Ivan Malov
@ 2022-02-01 8:49 ` Ivan Malov
2022-02-01 8:49 ` [PATCH 3/8] net/sfc: use non-static queue span limit in flow action RSS Ivan Malov
` (6 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:49 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Andy Moreton
On Riverhead boards, clients can query the limit on how many
queues an RSS context may address. Put the capability to use.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/ef10_nic.c | 14 ++++++++++++--
drivers/common/sfc_efx/base/ef10_rx.c | 3 ++-
drivers/common/sfc_efx/base/efx.h | 7 +++++++
drivers/common/sfc_efx/base/siena_nic.c | 2 ++
4 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_nic.c b/drivers/common/sfc_efx/base/ef10_nic.c
index 355d274470..d9f7c0f362 100644
--- a/drivers/common/sfc_efx/base/ef10_nic.c
+++ b/drivers/common/sfc_efx/base/ef10_nic.c
@@ -1051,14 +1051,14 @@ ef10_get_datapath_caps(
efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
efx_mcdi_req_t req;
EFX_MCDI_DECLARE_BUF(payload, MC_CMD_GET_CAPABILITIES_IN_LEN,
- MC_CMD_GET_CAPABILITIES_V7_OUT_LEN);
+ MC_CMD_GET_CAPABILITIES_V9_OUT_LEN);
efx_rc_t rc;
req.emr_cmd = MC_CMD_GET_CAPABILITIES;
req.emr_in_buf = payload;
req.emr_in_length = MC_CMD_GET_CAPABILITIES_IN_LEN;
req.emr_out_buf = payload;
- req.emr_out_length = MC_CMD_GET_CAPABILITIES_V7_OUT_LEN;
+ req.emr_out_length = MC_CMD_GET_CAPABILITIES_V9_OUT_LEN;
efx_mcdi_execute_quiet(enp, &req);
@@ -1466,6 +1466,16 @@ ef10_get_datapath_caps(
encp->enc_mae_admin = B_FALSE;
#endif /* EFSYS_OPT_MAE */
+#if EFSYS_OPT_RX_SCALE
+ if (req.emr_out_length_used >= MC_CMD_GET_CAPABILITIES_V9_OUT_LEN) {
+ encp->enc_rx_scale_indirection_max_nqueues =
+ MCDI_OUT_DWORD(req,
+ GET_CAPABILITIES_V9_OUT_RSS_MAX_INDIRECTION_QUEUES);
+ } else {
+ encp->enc_rx_scale_indirection_max_nqueues = EFX_MAXRSS;
+ }
+#endif /* EFSYS_OPT_RX_SCALE */
+
#undef CAP_FLAGS1
#undef CAP_FLAGS2
#undef CAP_FLAGS3
diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index a658e0dba2..3b041b962e 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -18,6 +18,7 @@ efx_mcdi_rss_context_alloc(
__in uint32_t num_queues,
__out uint32_t *rss_contextp)
{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
efx_mcdi_req_t req;
EFX_MCDI_DECLARE_BUF(payload, MC_CMD_RSS_CONTEXT_ALLOC_IN_LEN,
MC_CMD_RSS_CONTEXT_ALLOC_OUT_LEN);
@@ -25,7 +26,7 @@ efx_mcdi_rss_context_alloc(
uint32_t context_type;
efx_rc_t rc;
- if (num_queues > EFX_MAXRSS) {
+ if (num_queues > encp->enc_rx_scale_indirection_max_nqueues) {
rc = EINVAL;
goto fail1;
}
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 96769935c0..f875487b89 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -1495,6 +1495,13 @@ typedef struct efx_nic_cfg_s {
uint32_t enc_rx_buf_align_start;
uint32_t enc_rx_buf_align_end;
#if EFSYS_OPT_RX_SCALE
+ /*
+ * The limit on how many queues an RSS indirection table can address.
+ *
+ * Indirection table entries are offsets relative to a base queue ID.
+ * This means that the maximum offset has to be less than this value.
+ */
+ uint32_t enc_rx_scale_indirection_max_nqueues;
uint32_t enc_rx_scale_max_exclusive_contexts;
/*
* Mask of supported hash algorithms.
diff --git a/drivers/common/sfc_efx/base/siena_nic.c b/drivers/common/sfc_efx/base/siena_nic.c
index e42599131a..5f6d298d3f 100644
--- a/drivers/common/sfc_efx/base/siena_nic.c
+++ b/drivers/common/sfc_efx/base/siena_nic.c
@@ -119,6 +119,8 @@ siena_board_cfg(
encp->enc_rx_push_align = 1;
#if EFSYS_OPT_RX_SCALE
+ encp->enc_rx_scale_indirection_max_nqueues = EFX_MAXRSS;
+
/* There is one RSS context per function */
encp->enc_rx_scale_max_exclusive_contexts = 1;
--
2.30.2
* [PATCH 3/8] net/sfc: use non-static queue span limit in flow action RSS
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
2022-02-01 8:49 ` [PATCH 1/8] net/sfc: rework flow action RSS support Ivan Malov
2022-02-01 8:49 ` [PATCH 2/8] common/sfc_efx/base: query RSS queue span limit on Riverhead Ivan Malov
@ 2022-02-01 8:49 ` Ivan Malov
2022-02-01 8:49 ` [PATCH 4/8] common/sfc_efx/base: revise name of RSS table entry count Ivan Malov
` (5 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:49 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Andy Moreton
On EF10 boards, the limit on how many queues an RSS context
can address is 64. On EF100 boards, this parameter may vary.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_flow_rss.c | 8 ++++++--
drivers/net/sfc/sfc_flow_rss.h | 2 ++
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow_rss.c b/drivers/net/sfc/sfc_flow_rss.c
index 17876f11c1..1c94333b62 100644
--- a/drivers/net/sfc/sfc_flow_rss.c
+++ b/drivers/net/sfc/sfc_flow_rss.c
@@ -21,10 +21,13 @@
int
sfc_flow_rss_attach(struct sfc_adapter *sa)
{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
struct sfc_flow_rss *flow_rss = &sa->flow_rss;
sfc_log_init(sa, "entry");
+ flow_rss->qid_span_max = encp->enc_rx_scale_indirection_max_nqueues;
+
TAILQ_INIT(&flow_rss->ctx_list);
sfc_log_init(sa, "done");
@@ -46,6 +49,7 @@ sfc_flow_rss_parse_conf(struct sfc_adapter *sa,
struct sfc_flow_rss_conf *out, uint16_t *sw_qid_minp)
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ const struct sfc_flow_rss *flow_rss = &sa->flow_rss;
const struct sfc_rss *ethdev_rss = &sas->rss;
uint16_t sw_qid_min;
uint16_t sw_qid_max;
@@ -156,9 +160,9 @@ sfc_flow_rss_parse_conf(struct sfc_adapter *sa,
out->qid_span = sw_qid_max - sw_qid_min + 1;
- if (out->qid_span > EFX_MAXRSS) {
+ if (out->qid_span > flow_rss->qid_span_max) {
sfc_err(sa, "flow-rss: parse: queue ID span %u is too large; MAX=%u",
- out->qid_span, EFX_MAXRSS);
+ out->qid_span, flow_rss->qid_span_max);
return EINVAL;
}
diff --git a/drivers/net/sfc/sfc_flow_rss.h b/drivers/net/sfc/sfc_flow_rss.h
index cb2355ab67..e9f798a8f3 100644
--- a/drivers/net/sfc/sfc_flow_rss.h
+++ b/drivers/net/sfc/sfc_flow_rss.h
@@ -42,6 +42,8 @@ struct sfc_flow_rss_ctx {
TAILQ_HEAD(sfc_flow_rss_ctx_list, sfc_flow_rss_ctx);
struct sfc_flow_rss {
+ unsigned int qid_span_max;
+
unsigned int bounce_tbl[EFX_RSS_TBL_SIZE];
struct sfc_flow_rss_ctx_list ctx_list;
--
2.30.2
* [PATCH 4/8] common/sfc_efx/base: revise name of RSS table entry count
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
` (2 preceding siblings ...)
2022-02-01 8:49 ` [PATCH 3/8] net/sfc: use non-static queue span limit in flow action RSS Ivan Malov
@ 2022-02-01 8:49 ` Ivan Malov
2022-02-01 8:49 ` [PATCH 5/8] common/sfc_efx/base: support selecting " Ivan Malov
` (4 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:49 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Andy Moreton
In the existing code, "n" is hardly a clear name for the
table entry count. Use a clearer name, "nentries", to help
future maintainers of the code.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/ef10_impl.h | 10 +++---
drivers/common/sfc_efx/base/ef10_rx.c | 25 +++++++--------
drivers/common/sfc_efx/base/efx.h | 10 +++---
drivers/common/sfc_efx/base/efx_rx.c | 39 ++++++++++++------------
drivers/common/sfc_efx/base/rhead_impl.h | 10 +++---
drivers/common/sfc_efx/base/rhead_rx.c | 12 ++++----
6 files changed, 54 insertions(+), 52 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_impl.h b/drivers/common/sfc_efx/base/ef10_impl.h
index d48f238479..597dd24909 100644
--- a/drivers/common/sfc_efx/base/ef10_impl.h
+++ b/drivers/common/sfc_efx/base/ef10_impl.h
@@ -1163,12 +1163,12 @@ ef10_rx_scale_key_set(
__in size_t n);
LIBEFX_INTERNAL
-extern __checkReturn efx_rc_t
+extern __checkReturn efx_rc_t
ef10_rx_scale_tbl_set(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n);
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries);
LIBEFX_INTERNAL
extern __checkReturn uint32_t
diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index 3b041b962e..5008139a3f 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -292,12 +292,12 @@ efx_mcdi_rss_context_set_key(
#endif /* EFSYS_OPT_RX_SCALE */
#if EFSYS_OPT_RX_SCALE
-static efx_rc_t
+static efx_rc_t
efx_mcdi_rss_context_set_table(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n)
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries)
{
efx_mcdi_req_t req;
EFX_MCDI_DECLARE_BUF(payload, MC_CMD_RSS_CONTEXT_SET_TABLE_IN_LEN,
@@ -325,7 +325,8 @@ efx_mcdi_rss_context_set_table(
for (i = 0;
i < MC_CMD_RSS_CONTEXT_SET_TABLE_IN_INDIRECTION_TABLE_LEN;
i++) {
- req_table[i] = (n > 0) ? (uint8_t)table[i % n] : 0;
+ req_table[i] = (nentries > 0) ?
+ (uint8_t)table[i % nentries] : 0;
}
efx_mcdi_execute(enp, &req);
@@ -514,12 +515,12 @@ ef10_rx_scale_key_set(
#endif /* EFSYS_OPT_RX_SCALE */
#if EFSYS_OPT_RX_SCALE
- __checkReturn efx_rc_t
+ __checkReturn efx_rc_t
ef10_rx_scale_tbl_set(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n)
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries)
{
efx_rc_t rc;
@@ -533,7 +534,7 @@ ef10_rx_scale_tbl_set(
}
if ((rc = efx_mcdi_rss_context_set_table(enp,
- rss_context, table, n)) != 0)
+ rss_context, table, nentries)) != 0)
goto fail2;
return (0);
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index f875487b89..a35e29ebcf 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2903,12 +2903,12 @@ efx_rx_scale_mode_set(
__in boolean_t insert);
LIBEFX_API
-extern __checkReturn efx_rc_t
+extern __checkReturn efx_rc_t
efx_rx_scale_tbl_set(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n);
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries);
LIBEFX_API
extern __checkReturn efx_rc_t
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index 7e63363be7..eb3f736f63 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -41,12 +41,12 @@ siena_rx_scale_key_set(
__in_ecount(n) uint8_t *key,
__in size_t n);
-static __checkReturn efx_rc_t
+static __checkReturn efx_rc_t
siena_rx_scale_tbl_set(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n);
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries);
static __checkReturn uint32_t
siena_rx_prefix_hash(
@@ -690,12 +690,12 @@ efx_rx_scale_key_set(
#endif /* EFSYS_OPT_RX_SCALE */
#if EFSYS_OPT_RX_SCALE
- __checkReturn efx_rc_t
+ __checkReturn efx_rc_t
efx_rx_scale_tbl_set(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n)
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries)
{
const efx_rx_ops_t *erxop = enp->en_erxop;
efx_rc_t rc;
@@ -703,7 +703,8 @@ efx_rx_scale_tbl_set(
EFSYS_ASSERT3U(enp->en_magic, ==, EFX_NIC_MAGIC);
EFSYS_ASSERT3U(enp->en_mod_flags, &, EFX_MOD_RX);
- if ((rc = erxop->erxo_scale_tbl_set(enp, rss_context, table, n)) != 0)
+ if ((rc = erxop->erxo_scale_tbl_set(enp, rss_context, table,
+ nentries)) != 0)
goto fail1;
return (0);
@@ -1419,12 +1420,12 @@ siena_rx_scale_key_set(
#endif
#if EFSYS_OPT_RX_SCALE
-static __checkReturn efx_rc_t
+static __checkReturn efx_rc_t
siena_rx_scale_tbl_set(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n)
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries)
{
efx_oword_t oword;
int index;
@@ -1438,7 +1439,7 @@ siena_rx_scale_tbl_set(
goto fail1;
}
- if (n > FR_BZ_RX_INDIRECTION_TBL_ROWS) {
+ if (nentries > FR_BZ_RX_INDIRECTION_TBL_ROWS) {
rc = EINVAL;
goto fail2;
}
@@ -1447,7 +1448,7 @@ siena_rx_scale_tbl_set(
uint32_t byte;
/* Calculate the entry to place in the table */
- byte = (n > 0) ? (uint32_t)table[index % n] : 0;
+ byte = (nentries > 0) ? (uint32_t)table[index % nentries] : 0;
EFSYS_PROBE2(table, int, index, uint32_t, byte);
@@ -1462,7 +1463,7 @@ siena_rx_scale_tbl_set(
uint32_t byte;
/* Determine if we're starting a new batch */
- byte = (n > 0) ? (uint32_t)table[index % n] : 0;
+ byte = (nentries > 0) ? (uint32_t)table[index % nentries] : 0;
/* Read the table */
EFX_BAR_TBL_READO(enp, FR_BZ_RX_INDIRECTION_TBL,
diff --git a/drivers/common/sfc_efx/base/rhead_impl.h b/drivers/common/sfc_efx/base/rhead_impl.h
index dd38ded775..e0d95ba2aa 100644
--- a/drivers/common/sfc_efx/base/rhead_impl.h
+++ b/drivers/common/sfc_efx/base/rhead_impl.h
@@ -287,12 +287,12 @@ rhead_rx_scale_key_set(
__in size_t n);
LIBEFX_INTERNAL
-extern __checkReturn efx_rc_t
+extern __checkReturn efx_rc_t
rhead_rx_scale_tbl_set(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n);
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries);
LIBEFX_INTERNAL
extern __checkReturn uint32_t
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index 7b9a4af9da..d28f936ab7 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -162,16 +162,16 @@ rhead_rx_scale_key_set(
return (rc);
}
- __checkReturn efx_rc_t
+ __checkReturn efx_rc_t
rhead_rx_scale_tbl_set(
- __in efx_nic_t *enp,
- __in uint32_t rss_context,
- __in_ecount(n) unsigned int *table,
- __in size_t n)
+ __in efx_nic_t *enp,
+ __in uint32_t rss_context,
+ __in_ecount(nentries) unsigned int *table,
+ __in size_t nentries)
{
efx_rc_t rc;
- rc = ef10_rx_scale_tbl_set(enp, rss_context, table, n);
+ rc = ef10_rx_scale_tbl_set(enp, rss_context, table, nentries);
if (rc != 0)
goto fail1;
--
2.30.2
* [PATCH 5/8] common/sfc_efx/base: support selecting RSS table entry count
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
` (3 preceding siblings ...)
2022-02-01 8:49 ` [PATCH 4/8] common/sfc_efx/base: revise name of RSS table entry count Ivan Malov
@ 2022-02-01 8:49 ` Ivan Malov
2022-02-02 11:51 ` Ray Kinsella
2022-02-01 8:50 ` [PATCH 6/8] net/sfc: use adaptive table entry count in flow action RSS Ivan Malov
` (3 subsequent siblings)
8 siblings, 1 reply; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:49 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Andy Moreton, Ray Kinsella
On Riverhead boards, the client can control how many entries
to have in the indirection table of an exclusive RSS context.
Provide the new parameter to clients and indicate its bounds.
Extend the API for writing the table to provide this flexibility.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/ef10_impl.h | 1 +
drivers/common/sfc_efx/base/ef10_nic.c | 13 +++
drivers/common/sfc_efx/base/ef10_rx.c | 136 +++++++++++++++++++++--
drivers/common/sfc_efx/base/efx.h | 18 +++
drivers/common/sfc_efx/base/efx_impl.h | 3 +-
drivers/common/sfc_efx/base/efx_mcdi.h | 11 ++
drivers/common/sfc_efx/base/efx_rx.c | 38 ++++++-
drivers/common/sfc_efx/base/rhead_impl.h | 1 +
drivers/common/sfc_efx/base/rhead_rx.c | 4 +-
drivers/common/sfc_efx/version.map | 1 +
10 files changed, 212 insertions(+), 14 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_impl.h b/drivers/common/sfc_efx/base/ef10_impl.h
index 597dd24909..342a9a2006 100644
--- a/drivers/common/sfc_efx/base/ef10_impl.h
+++ b/drivers/common/sfc_efx/base/ef10_impl.h
@@ -1137,6 +1137,7 @@ ef10_rx_scale_context_alloc(
__in efx_nic_t *enp,
__in efx_rx_scale_context_type_t type,
__in uint32_t num_queues,
+ __in uint32_t table_nentries,
__out uint32_t *rss_contextp);
LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/base/ef10_nic.c b/drivers/common/sfc_efx/base/ef10_nic.c
index d9f7c0f362..cca31bc725 100644
--- a/drivers/common/sfc_efx/base/ef10_nic.c
+++ b/drivers/common/sfc_efx/base/ef10_nic.c
@@ -1409,6 +1409,11 @@ ef10_get_datapath_caps(
*/
encp->enc_rx_scale_l4_hash_supported = B_TRUE;
}
+
+ if (CAP_FLAGS3(req, RSS_SELECTABLE_TABLE_SIZE))
+ encp->enc_rx_scale_tbl_entry_count_is_selectable = B_TRUE;
+ else
+ encp->enc_rx_scale_tbl_entry_count_is_selectable = B_FALSE;
#endif /* EFSYS_OPT_RX_SCALE */
/* Check if the firmware supports "FLAG" and "MARK" filter actions */
@@ -1471,8 +1476,16 @@ ef10_get_datapath_caps(
encp->enc_rx_scale_indirection_max_nqueues =
MCDI_OUT_DWORD(req,
GET_CAPABILITIES_V9_OUT_RSS_MAX_INDIRECTION_QUEUES);
+ encp->enc_rx_scale_tbl_min_nentries =
+ MCDI_OUT_DWORD(req,
+ GET_CAPABILITIES_V9_OUT_RSS_MIN_INDIRECTION_TABLE_SIZE);
+ encp->enc_rx_scale_tbl_max_nentries =
+ MCDI_OUT_DWORD(req,
+ GET_CAPABILITIES_V9_OUT_RSS_MAX_INDIRECTION_TABLE_SIZE);
} else {
encp->enc_rx_scale_indirection_max_nqueues = EFX_MAXRSS;
+ encp->enc_rx_scale_tbl_min_nentries = EFX_RSS_TBL_SIZE;
+ encp->enc_rx_scale_tbl_max_nentries = EFX_RSS_TBL_SIZE;
}
#endif /* EFSYS_OPT_RX_SCALE */
diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index 5008139a3f..78af7300a0 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -16,11 +16,12 @@ efx_mcdi_rss_context_alloc(
__in efx_nic_t *enp,
__in efx_rx_scale_context_type_t type,
__in uint32_t num_queues,
+ __in uint32_t table_nentries,
__out uint32_t *rss_contextp)
{
const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
efx_mcdi_req_t req;
- EFX_MCDI_DECLARE_BUF(payload, MC_CMD_RSS_CONTEXT_ALLOC_IN_LEN,
+ EFX_MCDI_DECLARE_BUF(payload, MC_CMD_RSS_CONTEXT_ALLOC_V2_IN_LEN,
MC_CMD_RSS_CONTEXT_ALLOC_OUT_LEN);
uint32_t rss_context;
uint32_t context_type;
@@ -31,6 +32,13 @@ efx_mcdi_rss_context_alloc(
goto fail1;
}
+ if (table_nentries < encp->enc_rx_scale_tbl_min_nentries ||
+ table_nentries > encp->enc_rx_scale_tbl_max_nentries ||
+ !ISP2(table_nentries)) {
+ rc = EINVAL;
+ goto fail2;
+ }
+
switch (type) {
case EFX_RX_SCALE_EXCLUSIVE:
context_type = MC_CMD_RSS_CONTEXT_ALLOC_IN_TYPE_EXCLUSIVE;
@@ -40,12 +48,15 @@ efx_mcdi_rss_context_alloc(
break;
default:
rc = EINVAL;
- goto fail2;
+ goto fail3;
}
req.emr_cmd = MC_CMD_RSS_CONTEXT_ALLOC;
req.emr_in_buf = payload;
- req.emr_in_length = MC_CMD_RSS_CONTEXT_ALLOC_IN_LEN;
+ req.emr_in_length =
+ (encp->enc_rx_scale_tbl_entry_count_is_selectable != B_FALSE) ?
+ MC_CMD_RSS_CONTEXT_ALLOC_V2_IN_LEN :
+ MC_CMD_RSS_CONTEXT_ALLOC_IN_LEN;
req.emr_out_buf = payload;
req.emr_out_length = MC_CMD_RSS_CONTEXT_ALLOC_OUT_LEN;
@@ -61,28 +72,36 @@ efx_mcdi_rss_context_alloc(
*/
MCDI_IN_SET_DWORD(req, RSS_CONTEXT_ALLOC_IN_NUM_QUEUES, num_queues);
+ if (encp->enc_rx_scale_tbl_entry_count_is_selectable != B_FALSE) {
+ MCDI_IN_SET_DWORD(req,
+ RSS_CONTEXT_ALLOC_V2_IN_INDIRECTION_TABLE_SIZE,
+ table_nentries);
+ }
+
efx_mcdi_execute(enp, &req);
if (req.emr_rc != 0) {
rc = req.emr_rc;
- goto fail3;
+ goto fail4;
}
if (req.emr_out_length_used < MC_CMD_RSS_CONTEXT_ALLOC_OUT_LEN) {
rc = EMSGSIZE;
- goto fail4;
+ goto fail5;
}
rss_context = MCDI_OUT_DWORD(req, RSS_CONTEXT_ALLOC_OUT_RSS_CONTEXT_ID);
if (rss_context == EF10_RSS_CONTEXT_INVALID) {
rc = ENOENT;
- goto fail5;
+ goto fail6;
}
*rss_contextp = rss_context;
return (0);
+fail6:
+ EFSYS_PROBE(fail6);
fail5:
EFSYS_PROBE(fail5);
fail4:
@@ -347,6 +366,76 @@ efx_mcdi_rss_context_set_table(
}
#endif /* EFSYS_OPT_RX_SCALE */
+#if EFSYS_OPT_RX_SCALE
+static __checkReturn efx_rc_t
+efx_mcdi_rss_context_write_table(
+ __in efx_nic_t *enp,
+ __in uint32_t context,
+ __in unsigned int start_idx,
+ __in_ecount(nentries) unsigned int *table,
+ __in unsigned int nentries)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_RSS_CONTEXT_WRITE_TABLE_IN_LENMAX_MCDI2,
+ MC_CMD_RSS_CONTEXT_WRITE_TABLE_OUT_LEN);
+ unsigned int i;
+ int rc;
+
+ if (nentries >
+ MC_CMD_RSS_CONTEXT_WRITE_TABLE_IN_ENTRIES_MAXNUM_MCDI2) {
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ if (start_idx + nentries >
+ encp->enc_rx_scale_tbl_max_nentries) {
+ rc = EINVAL;
+ goto fail2;
+ }
+
+ req.emr_cmd = MC_CMD_RSS_CONTEXT_WRITE_TABLE;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_RSS_CONTEXT_WRITE_TABLE_IN_LEN(nentries);
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_RSS_CONTEXT_WRITE_TABLE_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req,
+ RSS_CONTEXT_WRITE_TABLE_IN_RSS_CONTEXT_ID, context);
+
+ for (i = 0; i < nentries; ++i) {
+ if (table[i] >= encp->enc_rx_scale_indirection_max_nqueues) {
+ rc = EINVAL;
+ goto fail3;
+ }
+
+ MCDI_IN_POPULATE_INDEXED_DWORD_2(req,
+ RSS_CONTEXT_WRITE_TABLE_IN_ENTRIES, i,
+ RSS_CONTEXT_WRITE_TABLE_ENTRY_INDEX, start_idx + i,
+ RSS_CONTEXT_WRITE_TABLE_ENTRY_VALUE, table[i]);
+ }
+
+ efx_mcdi_execute(enp, &req);
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail4;
+ }
+
+ return (0);
+
+fail4:
+ EFSYS_PROBE(fail4);
+fail3:
+ EFSYS_PROBE(fail3);
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+#endif /* EFSYS_OPT_RX_SCALE */
+
__checkReturn efx_rc_t
ef10_rx_init(
@@ -355,7 +444,7 @@ ef10_rx_init(
#if EFSYS_OPT_RX_SCALE
if (efx_mcdi_rss_context_alloc(enp, EFX_RX_SCALE_EXCLUSIVE, EFX_MAXRSS,
- &enp->en_rss_context) == 0) {
+ EFX_RSS_TBL_SIZE, &enp->en_rss_context) == 0) {
/*
* Allocated an exclusive RSS context, which allows both the
* indirection table and key to be modified.
@@ -398,11 +487,13 @@ ef10_rx_scale_context_alloc(
__in efx_nic_t *enp,
__in efx_rx_scale_context_type_t type,
__in uint32_t num_queues,
+ __in uint32_t table_nentries,
__out uint32_t *rss_contextp)
{
efx_rc_t rc;
- rc = efx_mcdi_rss_context_alloc(enp, type, num_queues, rss_contextp);
+ rc = efx_mcdi_rss_context_alloc(enp, type, num_queues, table_nentries,
+ rss_contextp);
if (rc != 0)
goto fail1;
@@ -522,6 +613,7 @@ ef10_rx_scale_tbl_set(
__in_ecount(nentries) unsigned int *table,
__in size_t nentries)
{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
efx_rc_t rc;
@@ -533,12 +625,34 @@ ef10_rx_scale_tbl_set(
rss_context = enp->en_rss_context;
}
- if ((rc = efx_mcdi_rss_context_set_table(enp,
- rss_context, table, nentries)) != 0)
- goto fail2;
+ if (encp->enc_rx_scale_tbl_entry_count_is_selectable != B_FALSE) {
+ uint32_t index, remain, batch;
+
+ batch = MC_CMD_RSS_CONTEXT_WRITE_TABLE_IN_ENTRIES_MAXNUM_MCDI2;
+ index = 0;
+
+ for (remain = nentries; remain > 0; remain -= batch) {
+ if (batch > remain)
+ batch = remain;
+
+ rc = efx_mcdi_rss_context_write_table(enp, rss_context,
+ index, &table[index], batch);
+ if (rc != 0)
+ goto fail2;
+
+ index += batch;
+ }
+ } else {
+ rc = efx_mcdi_rss_context_set_table(enp, rss_context, table,
+ nentries);
+ if (rc != 0)
+ goto fail3;
+ }
return (0);
+fail3:
+ EFSYS_PROBE(fail3);
fail2:
EFSYS_PROBE(fail2);
fail1:
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index a35e29ebcf..4523829eb2 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -1502,6 +1502,10 @@ typedef struct efx_nic_cfg_s {
* This means that the maximum offset has to be less than this value.
*/
uint32_t enc_rx_scale_indirection_max_nqueues;
+ /* Minimum number of entries an RSS indirection table can contain. */
+ uint32_t enc_rx_scale_tbl_min_nentries;
+ /* Maximum number of entries an RSS indirection table can contain. */
+ uint32_t enc_rx_scale_tbl_max_nentries;
uint32_t enc_rx_scale_max_exclusive_contexts;
/*
* Mask of supported hash algorithms.
@@ -1514,6 +1518,11 @@ typedef struct efx_nic_cfg_s {
*/
boolean_t enc_rx_scale_l4_hash_supported;
boolean_t enc_rx_scale_additional_modes_supported;
+ /*
+ * Indicates whether the user can decide how many entries to
+ * have in the indirection table of an exclusive RSS context.
+ */
+ boolean_t enc_rx_scale_tbl_entry_count_is_selectable;
#endif /* EFSYS_OPT_RX_SCALE */
#if EFSYS_OPT_LOOPBACK
efx_qword_t enc_loopback_types[EFX_LINK_NMODES];
@@ -2887,6 +2896,15 @@ efx_rx_scale_context_alloc(
__in uint32_t num_queues,
__out uint32_t *rss_contextp);
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_rx_scale_context_alloc_v2(
+ __in efx_nic_t *enp,
+ __in efx_rx_scale_context_type_t type,
+ __in uint32_t num_queues,
+ __in uint32_t table_nentries,
+ __out uint32_t *rss_contextp);
+
LIBEFX_API
extern __checkReturn efx_rc_t
efx_rx_scale_context_free(
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index e2802e6672..7dfe30b695 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -173,7 +173,8 @@ typedef struct efx_rx_ops_s {
#if EFSYS_OPT_RX_SCALE
efx_rc_t (*erxo_scale_context_alloc)(efx_nic_t *,
efx_rx_scale_context_type_t,
- uint32_t, uint32_t *);
+ uint32_t, uint32_t,
+ uint32_t *);
efx_rc_t (*erxo_scale_context_free)(efx_nic_t *, uint32_t);
efx_rc_t (*erxo_scale_mode_set)(efx_nic_t *, uint32_t,
efx_rx_hash_alg_t,
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.h b/drivers/common/sfc_efx/base/efx_mcdi.h
index c91ea41911..14a3833567 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_mcdi.h
@@ -315,6 +315,10 @@ efx_mcdi_set_nic_addr_regions(
#define MCDI_IN2(_emr, _type, _ofst) \
MCDI_IN(_emr, _type, MC_CMD_ ## _ofst ## _OFST)
+#define MCDI_INDEXED_IN2(_emr, _type, _ofst, _idx) \
+ MCDI_IN(_emr, _type, MC_CMD_ ## _ofst ## _OFST + \
+ _idx * MC_CMD_ ## _ofst ## _LEN)
+
#define MCDI_IN_SET_BYTE(_emr, _ofst, _value) \
EFX_POPULATE_BYTE_1(*MCDI_IN2(_emr, efx_byte_t, _ofst), \
EFX_BYTE_0, _value)
@@ -356,6 +360,13 @@ efx_mcdi_set_nic_addr_regions(
MC_CMD_ ## _field1, _value1, \
MC_CMD_ ## _field2, _value2)
+#define MCDI_IN_POPULATE_INDEXED_DWORD_2(_emr, _ofst, _idx, \
+ _field1, _value1, _field2, _value2) \
+ EFX_POPULATE_DWORD_2( \
+ *MCDI_INDEXED_IN2(_emr, efx_dword_t, _ofst, _idx), \
+ MC_CMD_ ## _field1, _value1, \
+ MC_CMD_ ## _field2, _value2)
+
#define MCDI_IN_POPULATE_DWORD_3(_emr, _ofst, _field1, _value1, \
_field2, _value2, _field3, _value3) \
EFX_POPULATE_DWORD_3(*MCDI_IN2(_emr, efx_dword_t, _ofst), \
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index eb3f736f63..d10b990259 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -514,8 +514,44 @@ efx_rx_scale_context_alloc(
rc = ENOTSUP;
goto fail1;
}
+
+ if ((rc = erxop->erxo_scale_context_alloc(enp, type, num_queues,
+ EFX_RSS_TBL_SIZE, rss_contextp)) != 0) {
+ goto fail2;
+ }
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+#endif /* EFSYS_OPT_RX_SCALE */
+
+#if EFSYS_OPT_RX_SCALE
+ __checkReturn efx_rc_t
+efx_rx_scale_context_alloc_v2(
+ __in efx_nic_t *enp,
+ __in efx_rx_scale_context_type_t type,
+ __in uint32_t num_queues,
+ __in uint32_t table_nentries,
+ __out uint32_t *rss_contextp)
+{
+ const efx_rx_ops_t *erxop = enp->en_erxop;
+ efx_rc_t rc;
+
+ EFSYS_ASSERT3U(enp->en_magic, ==, EFX_NIC_MAGIC);
+ EFSYS_ASSERT3U(enp->en_mod_flags, &, EFX_MOD_RX);
+
+ if (erxop->erxo_scale_context_alloc == NULL) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
if ((rc = erxop->erxo_scale_context_alloc(enp, type,
- num_queues, rss_contextp)) != 0) {
+ num_queues, table_nentries, rss_contextp)) != 0) {
goto fail2;
}
diff --git a/drivers/common/sfc_efx/base/rhead_impl.h b/drivers/common/sfc_efx/base/rhead_impl.h
index e0d95ba2aa..fb0ecca79d 100644
--- a/drivers/common/sfc_efx/base/rhead_impl.h
+++ b/drivers/common/sfc_efx/base/rhead_impl.h
@@ -261,6 +261,7 @@ rhead_rx_scale_context_alloc(
__in efx_nic_t *enp,
__in efx_rx_scale_context_type_t type,
__in uint32_t num_queues,
+ __in uint32_t table_nentries,
__out uint32_t *rss_contextp);
LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index d28f936ab7..d0ac5c02f8 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -88,11 +88,13 @@ rhead_rx_scale_context_alloc(
__in efx_nic_t *enp,
__in efx_rx_scale_context_type_t type,
__in uint32_t num_queues,
+ __in uint32_t table_nentries,
__out uint32_t *rss_contextp)
{
efx_rc_t rc;
- rc = ef10_rx_scale_context_alloc(enp, type, num_queues, rss_contextp);
+ rc = ef10_rx_scale_context_alloc(enp, type, num_queues, table_nentries,
+ rss_contextp);
if (rc != 0)
goto fail1;
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 97dd943ec4..9510897b83 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -216,6 +216,7 @@ INTERNAL {
efx_rx_qpost;
efx_rx_qpush;
efx_rx_scale_context_alloc;
+ efx_rx_scale_context_alloc_v2;
efx_rx_scale_context_free;
efx_rx_scale_default_support_get;
efx_rx_scale_hash_flags_get;
--
2.30.2
* [PATCH 6/8] net/sfc: use adaptive table entry count in flow action RSS
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
` (4 preceding siblings ...)
2022-02-01 8:49 ` [PATCH 5/8] common/sfc_efx/base: support selecting " Ivan Malov
@ 2022-02-01 8:50 ` Ivan Malov
2022-02-01 8:50 ` [PATCH 7/8] common/sfc_efx/base: support the even spread RSS mode Ivan Malov
` (2 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:50 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Andy Moreton
Currently, every RSS context uses 128 indirection entries in
the hardware. That is not always optimal because the entries
come from a pool shared among all PCI functions of the board,
while the format of action RSS allows fewer queue IDs to be
passed. With EF100 boards, it is possible to decide how many
entries to allocate for the indirection table of a context.
Make use of that in order to optimise resource usage in RSS
scenarios.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_flow_rss.c | 72 +++++++++++++++++++++++++++-------
drivers/net/sfc/sfc_flow_rss.h | 4 +-
2 files changed, 60 insertions(+), 16 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow_rss.c b/drivers/net/sfc/sfc_flow_rss.c
index 1c94333b62..4bf3002164 100644
--- a/drivers/net/sfc/sfc_flow_rss.c
+++ b/drivers/net/sfc/sfc_flow_rss.c
@@ -23,23 +23,45 @@ sfc_flow_rss_attach(struct sfc_adapter *sa)
{
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
struct sfc_flow_rss *flow_rss = &sa->flow_rss;
+ int rc;
sfc_log_init(sa, "entry");
flow_rss->qid_span_max = encp->enc_rx_scale_indirection_max_nqueues;
+ flow_rss->nb_tbl_entries_min = encp->enc_rx_scale_tbl_min_nentries;
+ flow_rss->nb_tbl_entries_max = encp->enc_rx_scale_tbl_max_nentries;
+
+ sfc_log_init(sa, "allocate the bounce buffer for indirection entries");
+ flow_rss->bounce_tbl = rte_calloc("sfc_flow_rss_bounce_tbl",
+ flow_rss->nb_tbl_entries_max,
+ sizeof(*flow_rss->bounce_tbl), 0);
+ if (flow_rss->bounce_tbl == NULL) {
+ rc = ENOMEM;
+ goto fail;
+ }
TAILQ_INIT(&flow_rss->ctx_list);
sfc_log_init(sa, "done");
return 0;
+
+fail:
+ sfc_log_init(sa, "failed %d", rc);
+
+ return rc;
}
void
sfc_flow_rss_detach(struct sfc_adapter *sa)
{
+ struct sfc_flow_rss *flow_rss = &sa->flow_rss;
+
sfc_log_init(sa, "entry");
+ sfc_log_init(sa, "free the bounce buffer for indirection entries");
+ rte_free(flow_rss->bounce_tbl);
+
sfc_log_init(sa, "done");
}
@@ -123,9 +145,9 @@ sfc_flow_rss_parse_conf(struct sfc_adapter *sa,
return EINVAL;
}
- if (in->queue_num > EFX_RSS_TBL_SIZE) {
+ if (in->queue_num > flow_rss->nb_tbl_entries_max) {
sfc_err(sa, "flow-rss: parse: 'queue_num' is too large; MAX=%u",
- EFX_RSS_TBL_SIZE);
+ flow_rss->nb_tbl_entries_max);
return EINVAL;
}
@@ -286,6 +308,7 @@ sfc_flow_rss_ctx_del(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
static int
sfc_flow_rss_ctx_program_tbl(struct sfc_adapter *sa,
+ unsigned int nb_tbl_entries,
const struct sfc_flow_rss_ctx *ctx)
{
const struct sfc_flow_rss_conf *conf = &ctx->conf;
@@ -297,15 +320,15 @@ sfc_flow_rss_ctx_program_tbl(struct sfc_adapter *sa,
if (conf->nb_qid_offsets != 0) {
SFC_ASSERT(ctx->qid_offsets != NULL);
- for (i = 0; i < EFX_RSS_TBL_SIZE; ++i)
+ for (i = 0; i < nb_tbl_entries; ++i)
tbl[i] = ctx->qid_offsets[i % conf->nb_qid_offsets];
} else {
- for (i = 0; i < EFX_RSS_TBL_SIZE; ++i)
+ for (i = 0; i < nb_tbl_entries; ++i)
tbl[i] = i % conf->qid_span;
}
return efx_rx_scale_tbl_set(sa->nic, ctx->nic_handle,
- tbl, EFX_RSS_TBL_SIZE);
+ tbl, nb_tbl_entries);
}
int
@@ -313,9 +336,12 @@ sfc_flow_rss_ctx_program(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
{
efx_rx_scale_context_type_t ctx_type = EFX_RX_SCALE_EXCLUSIVE;
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ const struct sfc_flow_rss *flow_rss = &sa->flow_rss;
struct sfc_rss *ethdev_rss = &sas->rss;
struct sfc_flow_rss_conf *conf;
bool allocation_done = B_FALSE;
+ unsigned int nb_qid_offsets;
+ unsigned int nb_tbl_entries;
int rc;
if (ctx == NULL)
@@ -325,18 +351,34 @@ sfc_flow_rss_ctx_program(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
SFC_ASSERT(sfc_adapter_is_locked(sa));
+ if (conf->nb_qid_offsets != 0)
+ nb_qid_offsets = conf->nb_qid_offsets;
+ else
+ nb_qid_offsets = conf->qid_span;
+
+ if (!RTE_IS_POWER_OF_2(nb_qid_offsets)) {
+ /*
+ * Most likely, it pays to enlarge the indirection
+ * table to facilitate better distribution quality.
+ */
+ nb_qid_offsets = flow_rss->nb_tbl_entries_max;
+ }
+
+ nb_tbl_entries = RTE_MAX(flow_rss->nb_tbl_entries_min, nb_qid_offsets);
+
if (ctx->nic_handle_refcnt == 0) {
- rc = efx_rx_scale_context_alloc(sa->nic, ctx_type,
- conf->qid_span,
- &ctx->nic_handle);
+ rc = efx_rx_scale_context_alloc_v2(sa->nic, ctx_type,
+ conf->qid_span,
+ nb_tbl_entries,
+ &ctx->nic_handle);
if (rc != 0) {
- sfc_err(sa, "flow-rss: failed to allocate NIC resource for ctx=%p: type=%d, qid_span=%u, rc=%d",
- ctx, ctx_type, conf->qid_span, rc);
+ sfc_err(sa, "flow-rss: failed to allocate NIC resource for ctx=%p: type=%d, qid_span=%u, nb_tbl_entries=%u; rc=%d",
+ ctx, ctx_type, conf->qid_span, nb_tbl_entries, rc);
goto fail;
}
- sfc_dbg(sa, "flow-rss: allocated NIC resource for ctx=%p: type=%d, qid_span=%u; handle=0x%08x",
- ctx, ctx_type, conf->qid_span,
+ sfc_dbg(sa, "flow-rss: allocated NIC resource for ctx=%p: type=%d, qid_span=%u, nb_tbl_entries=%u; handle=0x%08x",
+ ctx, ctx_type, conf->qid_span, nb_tbl_entries,
ctx->nic_handle);
++(ctx->nic_handle_refcnt);
@@ -369,10 +411,10 @@ sfc_flow_rss_ctx_program(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
goto fail;
}
- rc = sfc_flow_rss_ctx_program_tbl(sa, ctx);
+ rc = sfc_flow_rss_ctx_program_tbl(sa, nb_tbl_entries, ctx);
if (rc != 0) {
- sfc_err(sa, "flow-rss: failed to program table for ctx=%p; rc=%d",
- ctx, rc);
+ sfc_err(sa, "flow-rss: failed to program table for ctx=%p: nb_tbl_entries=%u; rc=%d",
+ ctx, nb_tbl_entries, rc);
goto fail;
}
diff --git a/drivers/net/sfc/sfc_flow_rss.h b/drivers/net/sfc/sfc_flow_rss.h
index e9f798a8f3..3341d06cf4 100644
--- a/drivers/net/sfc/sfc_flow_rss.h
+++ b/drivers/net/sfc/sfc_flow_rss.h
@@ -42,9 +42,11 @@ struct sfc_flow_rss_ctx {
TAILQ_HEAD(sfc_flow_rss_ctx_list, sfc_flow_rss_ctx);
struct sfc_flow_rss {
+ unsigned int nb_tbl_entries_min;
+ unsigned int nb_tbl_entries_max;
unsigned int qid_span_max;
- unsigned int bounce_tbl[EFX_RSS_TBL_SIZE];
+ unsigned int *bounce_tbl; /* MAX */
struct sfc_flow_rss_ctx_list ctx_list;
};
--
2.30.2
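The adaptive table sizing in the hunks above boils down to a small amount of arithmetic. A minimal standalone sketch (function and parameter names are hypothetical; the min/max bounds stand in for the firmware-reported capability fields):

```c
#include <assert.h>

/* True if v is a power of two (mirrors RTE_IS_POWER_OF_2). */
static int
is_power_of_2(unsigned int v)
{
	return v != 0 && (v & (v - 1)) == 0;
}

/*
 * Hypothetical helper: pick the indirection table entry count.
 * If the effective queue count is not a power of two, enlarge the
 * table to the maximum to improve distribution quality; otherwise
 * honour the firmware-reported minimum.
 */
static unsigned int
choose_nb_tbl_entries(unsigned int nb_qid_offsets, unsigned int qid_span,
		      unsigned int nb_min, unsigned int nb_max)
{
	unsigned int nb = (nb_qid_offsets != 0) ? nb_qid_offsets : qid_span;

	if (!is_power_of_2(nb))
		nb = nb_max;

	return (nb > nb_min) ? nb : nb_min;
}
```

For example, with bounds 64..512: three queues yield a 512-entry table (3 is not a power of two), while four queues yield the 64-entry minimum.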
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 7/8] common/sfc_efx/base: support the even spread RSS mode
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
` (5 preceding siblings ...)
2022-02-01 8:50 ` [PATCH 6/8] net/sfc: use adaptive table entry count in flow action RSS Ivan Malov
@ 2022-02-01 8:50 ` Ivan Malov
2022-02-01 8:50 ` [PATCH 8/8] net/sfc: use the even spread mode in flow action RSS Ivan Malov
2022-02-02 17:41 ` [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ferruh Yigit
8 siblings, 0 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:50 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Andy Moreton
Riverhead boards support spreading traffic across the
specified number of queues without using an indirection
table. This mode is provided by a dedicated RSS context type.
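Conceptually, an even spread context lets the NIC derive the destination queue from the hash directly instead of via a table lookup. A toy model of the difference (illustrative only; the actual firmware distribution function is not specified here):

```c
#include <assert.h>
#include <stdint.h>

/* Table-based RSS: the hash indexes an indirection table. */
static unsigned int
rss_tbl_queue(uint32_t hash, const unsigned int *tbl, unsigned int tbl_size)
{
	return tbl[hash % tbl_size];
}

/*
 * Even spread: the queue is derived from the hash and the queue
 * count alone, so no indirection table entries are consumed.
 */
static unsigned int
rss_even_spread_queue(uint32_t hash, unsigned int num_queues)
{
	return hash % num_queues;
}
```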
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/ef10_nic.c | 22 +++++++++++++
drivers/common/sfc_efx/base/ef10_rx.c | 42 ++++++++++++++++++-------
drivers/common/sfc_efx/base/efx.h | 8 ++++-
drivers/common/sfc_efx/base/efx_rx.c | 6 +++-
drivers/common/sfc_efx/base/siena_nic.c | 3 ++
5 files changed, 67 insertions(+), 14 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_nic.c b/drivers/common/sfc_efx/base/ef10_nic.c
index cca31bc725..aa667309ab 100644
--- a/drivers/common/sfc_efx/base/ef10_nic.c
+++ b/drivers/common/sfc_efx/base/ef10_nic.c
@@ -1482,10 +1482,32 @@ ef10_get_datapath_caps(
encp->enc_rx_scale_tbl_max_nentries =
MCDI_OUT_DWORD(req,
GET_CAPABILITIES_V9_OUT_RSS_MAX_INDIRECTION_TABLE_SIZE);
+
+ if (CAP_FLAGS3(req, RSS_EVEN_SPREADING)) {
+#define RSS_MAX_EVEN_SPREADING_QUEUES \
+ GET_CAPABILITIES_V9_OUT_RSS_MAX_EVEN_SPREADING_QUEUES
+ /*
+ * The even spreading mode distributes traffic across
+ * the specified number of queues without the need to
+ * allocate precious indirection entry pool resources.
+ */
+ encp->enc_rx_scale_even_spread_max_nqueues =
+ MCDI_OUT_DWORD(req, RSS_MAX_EVEN_SPREADING_QUEUES);
+#undef RSS_MAX_EVEN_SPREADING_QUEUES
+ } else {
+ /* There is no support for the even spread contexts. */
+ encp->enc_rx_scale_even_spread_max_nqueues = 0;
+ }
} else {
encp->enc_rx_scale_indirection_max_nqueues = EFX_MAXRSS;
encp->enc_rx_scale_tbl_min_nentries = EFX_RSS_TBL_SIZE;
encp->enc_rx_scale_tbl_max_nentries = EFX_RSS_TBL_SIZE;
+
+ /*
+ * Assume that there is no support
+ * for the even spread contexts.
+ */
+ encp->enc_rx_scale_even_spread_max_nqueues = 0;
}
#endif /* EFSYS_OPT_RX_SCALE */
diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index 78af7300a0..afc9cf025f 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -23,30 +23,45 @@ efx_mcdi_rss_context_alloc(
efx_mcdi_req_t req;
EFX_MCDI_DECLARE_BUF(payload, MC_CMD_RSS_CONTEXT_ALLOC_V2_IN_LEN,
MC_CMD_RSS_CONTEXT_ALLOC_OUT_LEN);
+ uint32_t table_nentries_min;
+ uint32_t table_nentries_max;
+ uint32_t num_queues_max;
uint32_t rss_context;
uint32_t context_type;
efx_rc_t rc;
- if (num_queues > encp->enc_rx_scale_indirection_max_nqueues) {
- rc = EINVAL;
- goto fail1;
- }
-
- if (table_nentries < encp->enc_rx_scale_tbl_min_nentries ||
- table_nentries > encp->enc_rx_scale_tbl_max_nentries ||
- !ISP2(table_nentries)) {
- rc = EINVAL;
- goto fail2;
- }
-
switch (type) {
case EFX_RX_SCALE_EXCLUSIVE:
context_type = MC_CMD_RSS_CONTEXT_ALLOC_IN_TYPE_EXCLUSIVE;
+ num_queues_max = encp->enc_rx_scale_indirection_max_nqueues;
+ table_nentries_min = encp->enc_rx_scale_tbl_min_nentries;
+ table_nentries_max = encp->enc_rx_scale_tbl_max_nentries;
break;
case EFX_RX_SCALE_SHARED:
context_type = MC_CMD_RSS_CONTEXT_ALLOC_IN_TYPE_SHARED;
+ num_queues_max = encp->enc_rx_scale_indirection_max_nqueues;
+ table_nentries_min = encp->enc_rx_scale_tbl_min_nentries;
+ table_nentries_max = encp->enc_rx_scale_tbl_max_nentries;
+ break;
+ case EFX_RX_SCALE_EVEN_SPREAD:
+ context_type = MC_CMD_RSS_CONTEXT_ALLOC_IN_TYPE_EVEN_SPREADING;
+ num_queues_max = encp->enc_rx_scale_even_spread_max_nqueues;
+ table_nentries_min = 0;
+ table_nentries_max = 0;
break;
default:
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ if (num_queues == 0 || num_queues > num_queues_max) {
+ rc = EINVAL;
+ goto fail2;
+ }
+
+ if (table_nentries < table_nentries_min ||
+ table_nentries > table_nentries_max ||
+ (table_nentries != 0 && !ISP2(table_nentries))) {
rc = EINVAL;
goto fail3;
}
@@ -69,6 +84,9 @@ efx_mcdi_rss_context_alloc(
* indirection table offsets.
* For shared contexts, the provided context will spread traffic over
* NUM_QUEUES many queues.
+ * For the even spread contexts, the provided context will spread
+ * traffic over NUM_QUEUES many queues, but that will not involve
+ * the use of precious indirection table resources in the adapter.
*/
MCDI_IN_SET_DWORD(req, RSS_CONTEXT_ALLOC_IN_NUM_QUEUES, num_queues);
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 4523829eb2..854527e0fd 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -1495,6 +1495,11 @@ typedef struct efx_nic_cfg_s {
uint32_t enc_rx_buf_align_start;
uint32_t enc_rx_buf_align_end;
#if EFSYS_OPT_RX_SCALE
+ /*
+ * The limit on how many queues an RSS context in the even spread
+ * mode can span. When this mode is not supported, the value is 0.
+ */
+ uint32_t enc_rx_scale_even_spread_max_nqueues;
/*
* The limit on how many queues an RSS indirection table can address.
*
@@ -2784,7 +2789,8 @@ typedef enum efx_rx_hash_support_e {
typedef enum efx_rx_scale_context_type_e {
EFX_RX_SCALE_UNAVAILABLE = 0, /* No RX scale context */
EFX_RX_SCALE_EXCLUSIVE, /* Writable key/indirection table */
- EFX_RX_SCALE_SHARED /* Read-only key/indirection table */
+ EFX_RX_SCALE_SHARED, /* Read-only key/indirection table */
+ EFX_RX_SCALE_EVEN_SPREAD, /* No indirection table, writable key */
} efx_rx_scale_context_type_t;
/*
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index d10b990259..45dc5d6c6d 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -504,6 +504,7 @@ efx_rx_scale_context_alloc(
__in uint32_t num_queues,
__out uint32_t *rss_contextp)
{
+ uint32_t table_nentries = EFX_RSS_TBL_SIZE;
const efx_rx_ops_t *erxop = enp->en_erxop;
efx_rc_t rc;
@@ -515,8 +516,11 @@ efx_rx_scale_context_alloc(
goto fail1;
}
+ if (type == EFX_RX_SCALE_EVEN_SPREAD)
+ table_nentries = 0;
+
if ((rc = erxop->erxo_scale_context_alloc(enp, type, num_queues,
- EFX_RSS_TBL_SIZE, rss_contextp)) != 0) {
+ table_nentries, rss_contextp)) != 0) {
goto fail2;
}
diff --git a/drivers/common/sfc_efx/base/siena_nic.c b/drivers/common/sfc_efx/base/siena_nic.c
index 5f6d298d3f..939551dbf5 100644
--- a/drivers/common/sfc_efx/base/siena_nic.c
+++ b/drivers/common/sfc_efx/base/siena_nic.c
@@ -121,6 +121,9 @@ siena_board_cfg(
#if EFSYS_OPT_RX_SCALE
encp->enc_rx_scale_indirection_max_nqueues = EFX_MAXRSS;
+ /* There is no support for the even spread contexts. */
+ encp->enc_rx_scale_even_spread_max_nqueues = 0;
+
/* There is one RSS context per function */
encp->enc_rx_scale_max_exclusive_contexts = 1;
--
2.30.2
* [PATCH 8/8] net/sfc: use the even spread mode in flow action RSS
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
` (6 preceding siblings ...)
2022-02-01 8:50 ` [PATCH 7/8] common/sfc_efx/base: support the even spread RSS mode Ivan Malov
@ 2022-02-01 8:50 ` Ivan Malov
2022-02-02 17:41 ` [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ferruh Yigit
8 siblings, 0 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-01 8:50 UTC (permalink / raw)
To: dev; +Cc: Andrew Rybchenko, Andy Moreton
If the user provides contiguous ascending queue IDs,
use the even spread mode to avoid wasting resources
which are needed to serve indirection table entries.
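In the driver, a contiguous ascending queue array is the case where no explicit queue ID offsets need to be stored (conf->nb_qid_offsets == 0 in the hunk below). A hypothetical standalone check for that property might look like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True if the queue IDs form a contiguous ascending run, e.g. 3,4,5. */
static bool
queues_contiguous_ascending(const uint16_t *queues, unsigned int nb_queues)
{
	unsigned int i;

	for (i = 1; i < nb_queues; ++i) {
		if (queues[i] != (uint16_t)(queues[i - 1] + 1))
			return false;
	}

	return nb_queues != 0;
}
```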
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_flow_rss.c | 19 +++++++++++++++++++
drivers/net/sfc/sfc_flow_rss.h |  1 +
2 files changed, 20 insertions(+)
diff --git a/drivers/net/sfc/sfc_flow_rss.c b/drivers/net/sfc/sfc_flow_rss.c
index 4bf3002164..e28c943335 100644
--- a/drivers/net/sfc/sfc_flow_rss.c
+++ b/drivers/net/sfc/sfc_flow_rss.c
@@ -140,6 +140,8 @@ sfc_flow_rss_parse_conf(struct sfc_adapter *sa,
return EINVAL;
}
+ out->rte_hash_function = in->func;
+
if (in->queue_num == 0) {
sfc_err(sa, "flow-rss: parse: 'queue_num' is 0; MIN=1");
return EINVAL;
@@ -317,6 +319,9 @@ sfc_flow_rss_ctx_program_tbl(struct sfc_adapter *sa,
SFC_ASSERT(sfc_adapter_is_locked(sa));
+ if (nb_tbl_entries == 0)
+ return 0;
+
if (conf->nb_qid_offsets != 0) {
SFC_ASSERT(ctx->qid_offsets != NULL);
@@ -336,6 +341,7 @@ sfc_flow_rss_ctx_program(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
{
efx_rx_scale_context_type_t ctx_type = EFX_RX_SCALE_EXCLUSIVE;
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
const struct sfc_flow_rss *flow_rss = &sa->flow_rss;
struct sfc_rss *ethdev_rss = &sas->rss;
struct sfc_flow_rss_conf *conf;
@@ -366,6 +372,19 @@ sfc_flow_rss_ctx_program(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
nb_tbl_entries = RTE_MAX(flow_rss->nb_tbl_entries_min, nb_qid_offsets);
+ if (conf->rte_hash_function == RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ conf->nb_qid_offsets == 0 &&
+ conf->qid_span <= encp->enc_rx_scale_even_spread_max_nqueues) {
+ /*
+ * Conformance to a specific hash algorithm is a don't care to
+ * the user. The queue array is contiguous and ascending. That
+ * means that the even spread context may be requested here in
+ * order to avoid wasting precious indirection table resources.
+ */
+ ctx_type = EFX_RX_SCALE_EVEN_SPREAD;
+ nb_tbl_entries = 0;
+ }
+
if (ctx->nic_handle_refcnt == 0) {
rc = efx_rx_scale_context_alloc_v2(sa->nic, ctx_type,
conf->qid_span,
--git a/drivers/net/sfc/sfc_flow_rss.h b/drivers/net/sfc/sfc_flow_rss.h
index 3341d06cf4..2ed81dc190 100644
--- a/drivers/net/sfc/sfc_flow_rss.h
+++ b/drivers/net/sfc/sfc_flow_rss.h
@@ -20,6 +20,7 @@ extern "C" {
struct sfc_flow_rss_conf {
uint8_t key[EFX_RSS_KEY_SIZE];
+ enum rte_eth_hash_function rte_hash_function;
efx_rx_hash_type_t efx_hash_types;
unsigned int nb_qid_offsets;
unsigned int qid_span;
--
2.30.2
* Re: [PATCH 5/8] common/sfc_efx/base: support selecting RSS table entry count
2022-02-01 8:49 ` [PATCH 5/8] common/sfc_efx/base: support selecting " Ivan Malov
@ 2022-02-02 11:51 ` Ray Kinsella
2022-02-02 12:24 ` Ivan Malov
0 siblings, 1 reply; 12+ messages in thread
From: Ray Kinsella @ 2022-02-02 11:51 UTC (permalink / raw)
To: Ivan Malov
Cc: dev, Andrew Rybchenko, Andy Moreton, Thomas Monjalon,
ferruh.yigit, David Marchand
Ivan Malov <ivan.malov@oktetlabs.ru> writes:
> On Riverhead boards, the client can control how many entries
> to have in the indirection table of an exclusive RSS context.
>
> Provide the new parameter to clients and indicate its bounds.
> Extend the API for writing the table to have the flexibility.
>
> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Reviewed-by: Andy Moreton <amoreton@xilinx.com>
> ---
> drivers/common/sfc_efx/base/ef10_impl.h | 1 +
> drivers/common/sfc_efx/base/ef10_nic.c | 13 +++
> drivers/common/sfc_efx/base/ef10_rx.c | 136 +++++++++++++++++++++--
> drivers/common/sfc_efx/base/efx.h | 18 +++
> drivers/common/sfc_efx/base/efx_impl.h | 3 +-
> drivers/common/sfc_efx/base/efx_mcdi.h | 11 ++
> drivers/common/sfc_efx/base/efx_rx.c | 38 ++++++-
> drivers/common/sfc_efx/base/rhead_impl.h | 1 +
> drivers/common/sfc_efx/base/rhead_rx.c | 4 +-
> drivers/common/sfc_efx/version.map | 1 +
> 10 files changed, 212 insertions(+), 14 deletions(-)
>
>
> diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
> index 97dd943ec4..9510897b83 100644
> --- a/drivers/common/sfc_efx/version.map
> +++ b/drivers/common/sfc_efx/version.map
> @@ -216,6 +216,7 @@ INTERNAL {
> efx_rx_qpost;
> efx_rx_qpush;
> efx_rx_scale_context_alloc;
> + efx_rx_scale_context_alloc_v2;
> efx_rx_scale_context_free;
> efx_rx_scale_default_support_get;
> efx_rx_scale_hash_flags_get;
So this is internal, so ordinarily I have little enough to do or say about
it. In this case, I do have to ask: is the _v2 version of the function
avoidable?
--
Regards, Ray K
* Re: [PATCH 5/8] common/sfc_efx/base: support selecting RSS table entry count
2022-02-02 11:51 ` Ray Kinsella
@ 2022-02-02 12:24 ` Ivan Malov
0 siblings, 0 replies; 12+ messages in thread
From: Ivan Malov @ 2022-02-02 12:24 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, Andrew Rybchenko, Andy Moreton, Thomas Monjalon,
ferruh.yigit, David Marchand
Hi,
On Wed, 2 Feb 2022, Ray Kinsella wrote:
>
> Ivan Malov <ivan.malov@oktetlabs.ru> writes:
>
>> On Riverhead boards, the client can control how many entries
>> to have in the indirection table of an exclusive RSS context.
>>
>> Provide the new parameter to clients and indicate its bounds.
>> Extend the API for writing the table to have the flexibility.
>>
>> Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
>> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Reviewed-by: Andy Moreton <amoreton@xilinx.com>
>> ---
>> drivers/common/sfc_efx/base/ef10_impl.h | 1 +
>> drivers/common/sfc_efx/base/ef10_nic.c | 13 +++
>> drivers/common/sfc_efx/base/ef10_rx.c | 136 +++++++++++++++++++++--
>> drivers/common/sfc_efx/base/efx.h | 18 +++
>> drivers/common/sfc_efx/base/efx_impl.h | 3 +-
>> drivers/common/sfc_efx/base/efx_mcdi.h | 11 ++
>> drivers/common/sfc_efx/base/efx_rx.c | 38 ++++++-
>> drivers/common/sfc_efx/base/rhead_impl.h | 1 +
>> drivers/common/sfc_efx/base/rhead_rx.c | 4 +-
>> drivers/common/sfc_efx/version.map | 1 +
>> 10 files changed, 212 insertions(+), 14 deletions(-)
>>
>>
>> diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
>> index 97dd943ec4..9510897b83 100644
>> --- a/drivers/common/sfc_efx/version.map
>> +++ b/drivers/common/sfc_efx/version.map
>> @@ -216,6 +216,7 @@ INTERNAL {
>> efx_rx_qpost;
>> efx_rx_qpush;
>> efx_rx_scale_context_alloc;
>> + efx_rx_scale_context_alloc_v2;
>> efx_rx_scale_context_free;
>> efx_rx_scale_default_support_get;
>> efx_rx_scale_hash_flags_get;
>
> So this is internal, so ordinarily I have little enough to do or say about
> it. In this case, I do have to ask: is the _v2 version of the function
> avoidable?
The PMD in question is not the only driver based on libefx. Extending
efx_rx_scale_context_alloc() to add an extra argument would require
that the other libefx-based drivers be updated accordingly. That
is not always reasonable / robust. Hence the v2 method.
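The pattern being described can be sketched as follows (signatures heavily simplified; these are stand-ins, not the real libefx prototypes). The legacy entry point keeps its signature, so other libefx consumers build unchanged, and merely forwards the historical default table size to the _v2 variant:

```c
#include <assert.h>
#include <stdint.h>

#define TBL_NENTRIES_DEFAULT 128	/* historical EFX_RSS_TBL_SIZE */

static unsigned int last_table_nentries;	/* test instrumentation */

/* New entry point: callers that care can pick the table size. */
static int
rss_context_alloc_v2(unsigned int num_queues, unsigned int table_nentries,
		     uint32_t *handlep)
{
	(void)num_queues;
	last_table_nentries = table_nentries;
	*handlep = 1;
	return 0;
}

/* Legacy entry point: unchanged signature, historical behaviour. */
static int
rss_context_alloc(unsigned int num_queues, uint32_t *handlep)
{
	return rss_context_alloc_v2(num_queues, TBL_NENTRIES_DEFAULT, handlep);
}
```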
Thank you.
--
Ivan M
* Re: [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
` (7 preceding siblings ...)
2022-02-01 8:50 ` [PATCH 8/8] net/sfc: use the even spread mode in flow action RSS Ivan Malov
@ 2022-02-02 17:41 ` Ferruh Yigit
8 siblings, 0 replies; 12+ messages in thread
From: Ferruh Yigit @ 2022-02-02 17:41 UTC (permalink / raw)
To: Ivan Malov, dev
On 2/1/2022 8:49 AM, Ivan Malov wrote:
> The first patch reworks flow action RSS support in general, on
> all board types. Later patches add support for EF100-specific
> features: the even spread mode (no indirection table) and the
> ability to select indirection table size in the normal mode.
>
> Ivan Malov (8):
> net/sfc: rework flow action RSS support
> common/sfc_efx/base: query RSS queue span limit on Riverhead
> net/sfc: use non-static queue span limit in flow action RSS
> common/sfc_efx/base: revise name of RSS table entry count
> common/sfc_efx/base: support selecting RSS table entry count
> net/sfc: use adaptive table entry count in flow action RSS
> common/sfc_efx/base: support the even spread RSS mode
> net/sfc: use the even spread mode in flow action RSS
>
Series applied to dpdk-next-net/main, thanks.
end of thread, other threads:[~2022-02-02 17:41 UTC | newest]
Thread overview: 12+ messages
-- links below jump to the message on this page --
2022-02-01 8:49 [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ivan Malov
2022-02-01 8:49 ` [PATCH 1/8] net/sfc: rework flow action RSS support Ivan Malov
2022-02-01 8:49 ` [PATCH 2/8] common/sfc_efx/base: query RSS queue span limit on Riverhead Ivan Malov
2022-02-01 8:49 ` [PATCH 3/8] net/sfc: use non-static queue span limit in flow action RSS Ivan Malov
2022-02-01 8:49 ` [PATCH 4/8] common/sfc_efx/base: revise name of RSS table entry count Ivan Malov
2022-02-01 8:49 ` [PATCH 5/8] common/sfc_efx/base: support selecting " Ivan Malov
2022-02-02 11:51 ` Ray Kinsella
2022-02-02 12:24 ` Ivan Malov
2022-02-01 8:50 ` [PATCH 6/8] net/sfc: use adaptive table entry count in flow action RSS Ivan Malov
2022-02-01 8:50 ` [PATCH 7/8] common/sfc_efx/base: support the even spread RSS mode Ivan Malov
2022-02-01 8:50 ` [PATCH 8/8] net/sfc: use the even spread mode in flow action RSS Ivan Malov
2022-02-02 17:41 ` [PATCH 0/8] net/sfc: improve flow action RSS support on EF100 boards Ferruh Yigit