* [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify
From: Gregory Etelson @ 2023-01-18 12:55 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland
Add indirect quota flow action.
Add match on quota flow item.
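
Below is a rough application-side sketch (illustration only, not part of the
series; RTE_FLOW_ACTION_TYPE_QUOTA, struct rte_flow_action_quota and the
related item/query types come from the companion ethdev quota series and are
assumed here):

    /* Create an indirect quota action on flow queue 0: 4KB counted as L3 bytes. */
    uint16_t port_id = 0; /* example port */
    const struct rte_flow_action_quota quota_conf = {
            .mode = RTE_FLOW_QUOTA_MODE_L3,
            .quota = 4096,
    };
    const struct rte_flow_action action = {
            .type = RTE_FLOW_ACTION_TYPE_QUOTA,
            .conf = &quota_conf,
    };
    const struct rte_flow_indir_action_conf indir_conf = { .ingress = 1 };
    const struct rte_flow_op_attr op_attr = { .postpone = 0 };
    struct rte_flow_error error;
    struct rte_flow_action_handle *handle;

    handle = rte_flow_async_action_handle_create(port_id, 0, &op_attr,
                                                 &indir_conf, &action,
                                                 NULL, &error);

The handle is then referenced from template (HWS) flow rules; the remaining
quota state can be matched with the RTE_FLOW_ITEM_TYPE_QUOTA pattern item
(PASS/BLOCK) or read back with the query/query_update operations added in
patch 4.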
Gregory Etelson (5):
net/mlx5: update query fields in async job structure
net/mlx5: remove code duplication
common/mlx5: update MTR ASO definitions
net/mlx5: add indirect QUOTA create/query/modify
mlx5dr: Definer, translate RTE quota item
drivers/common/mlx5/mlx5_prm.h | 4 +
drivers/net/mlx5/hws/mlx5dr_definer.c | 61 +++
drivers/net/mlx5/meson.build | 1 +
drivers/net/mlx5/mlx5.h | 88 +++-
drivers/net/mlx5/mlx5_flow.c | 62 +++
drivers/net/mlx5/mlx5_flow.h | 20 +-
drivers/net/mlx5/mlx5_flow_aso.c | 10 +-
drivers/net/mlx5/mlx5_flow_hw.c | 527 +++++++++++++------
drivers/net/mlx5/mlx5_flow_quota.c | 726 ++++++++++++++++++++++++++
9 files changed, 1318 insertions(+), 181 deletions(-)
create mode 100644 drivers/net/mlx5/mlx5_flow_quota.c
--
2.34.1
* [PATCH 1/5] net/mlx5: update query fields in async job structure
From: Gregory Etelson @ 2023-01-18 12:55 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
The query fields defined in `mlx5_hw_q_job` were specific to the CT action type.
The patch generalizes them so `mlx5_hw_q_job` can carry the output of other
query types as well.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
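Not part of the patch - a condensed view of how the renamed fields are used
for the existing CT query, taken from the hunks below:

    /* On submit: remember where the application wants the result ... */
    job->query.user = data;
    /* ... and where the ASO WQE will place the raw hardware reply. */
    job->query.hw = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
    /* On completion: translate the hardware data into the user structure. */
    mlx5_aso_ct_obj_analyze(job->query.user, job->query.hw);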
drivers/net/mlx5/mlx5.h | 10 +++++-----
drivers/net/mlx5/mlx5_flow_aso.c | 2 +-
drivers/net/mlx5/mlx5_flow_hw.c | 6 +++---
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 16b33e1548..eaf2ad69fb 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -366,11 +366,11 @@ struct mlx5_hw_q_job {
struct rte_flow_item *items;
union {
struct {
- /* Pointer to ct query user memory. */
- struct rte_flow_action_conntrack *profile;
- /* Pointer to ct ASO query out memory. */
- void *out_data;
- } __rte_packed;
+ /* User memory for query output */
+ void *user;
+ /* Data extracted from hardware */
+ void *hw;
+ } __rte_packed query;
struct rte_flow_item_ethdev port_spec;
struct rte_flow_item_tag tag_spec;
} __rte_packed;
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 29bd7ce9e8..0eb91c570f 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -1389,7 +1389,7 @@ mlx5_aso_ct_sq_query_single(struct mlx5_dev_ctx_shared *sh,
struct mlx5_hw_q_job *job = (struct mlx5_hw_q_job *)user_data;
sq->elts[wqe_idx].ct = user_data;
- job->out_data = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
+ job->query.hw = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
} else {
sq->elts[wqe_idx].query_data = data;
sq->elts[wqe_idx].ct = ct;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 20c71ff7f0..df5883f340 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2730,8 +2730,8 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
idx = MLX5_ACTION_CTX_CT_GET_IDX
((uint32_t)(uintptr_t)job->action);
aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
- mlx5_aso_ct_obj_analyze(job->profile,
- job->out_data);
+ mlx5_aso_ct_obj_analyze(job->query.user,
+ job->query.hw);
aso_ct->state = ASO_CONNTRACK_READY;
}
}
@@ -8179,7 +8179,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_CT:
aso = true;
if (job)
- job->profile = (struct rte_flow_action_conntrack *)data;
+ job->query.user = data;
ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
job, push, error);
break;
--
2.34.1
* [PATCH 2/5] net/mlx5: remove code duplication
From: Gregory Etelson @ 2023-01-18 12:55 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
Replace duplicated code in the indirect action handlers with dedicated helper functions.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
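Not part of the patch - the common pattern that the new helpers factor out of
the indirect action create/update/destroy/query handlers now looks roughly
like this:

    struct mlx5_hw_q_job *job = NULL;
    bool push = flow_hw_action_push(attr); /* !attr->postpone, or true for sync */
    bool aso = false;
    int ret = 0;

    if (attr) { /* asynchronous (queued) variant */
            job = flow_hw_action_job_init(priv, queue, handle, user_data,
                                          NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
                                          error);
            if (!job)
                    return -rte_errno;
    }
    /* ... type-specific action work sets ret and aso ... */
    if (job)
            flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
    return ret;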
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_flow_hw.c | 182 ++++++++++++++++----------------
2 files changed, 95 insertions(+), 93 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index eaf2ad69fb..7c6bc91ddf 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -344,11 +344,11 @@ struct mlx5_lb_ctx {
};
/* HW steering queue job descriptor type. */
-enum {
+enum mlx5_hw_job_type {
MLX5_HW_Q_JOB_TYPE_CREATE, /* Flow create job type. */
MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
- MLX5_HW_Q_JOB_TYPE_UPDATE,
- MLX5_HW_Q_JOB_TYPE_QUERY,
+ MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
+ MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
};
#define MLX5_HW_MAX_ITEMS (16)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index df5883f340..04d0612ee1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7532,6 +7532,67 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
return 0;
}
+static __rte_always_inline bool
+flow_hw_action_push(const struct rte_flow_op_attr *attr)
+{
+ return attr ? !attr->postpone : true;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
+{
+ return priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+}
+
+static __rte_always_inline void
+flow_hw_job_put(struct mlx5_priv *priv, uint32_t queue)
+{
+ priv->hw_q[queue].job_idx++;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ void *user_data, void *query_data,
+ enum mlx5_hw_job_type type,
+ struct rte_flow_error *error)
+{
+ struct mlx5_hw_q_job *job;
+
+ MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
+ if (unlikely(!priv->hw_q[queue].job_idx)) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+ "Action destroy failed due to queue full.");
+ return NULL;
+ }
+ job = flow_hw_job_get(priv, queue);
+ job->type = type;
+ job->action = handle;
+ job->user_data = user_data;
+ job->query.user = query_data;
+ return job;
+}
+
+static __rte_always_inline void
+flow_hw_action_finalize(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_hw_q_job *job,
+ bool push, bool aso, bool status)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ if (likely(status)) {
+ if (push)
+ __flow_hw_push_action(dev, queue);
+ if (!aso)
+ rte_ring_enqueue(push ?
+ priv->hw_q[queue].indir_cq :
+ priv->hw_q[queue].indir_iq,
+ job);
+ } else {
+ flow_hw_job_put(priv, queue);
+ }
+}
+
/**
* Create shared action.
*
@@ -7569,21 +7630,15 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
cnt_id_t cnt_id;
uint32_t mtr_id;
uint32_t age_idx;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx)) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Flow queue full.");
+ job = flow_hw_action_job_init(priv, queue, NULL, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
+ error);
+ if (!job)
return NULL;
- }
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
- job->user_data = user_data;
- push = !attr->postpone;
}
switch (action->type) {
case RTE_FLOW_ACTION_TYPE_AGE:
@@ -7646,17 +7701,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
break;
}
if (job) {
- if (!handle) {
- priv->hw_q[queue].job_idx++;
- return NULL;
- }
job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return handle;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
+ flow_hw_action_finalize(dev, queue, job, push, aso,
+ handle != NULL);
}
return handle;
}
@@ -7704,19 +7751,15 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
int ret = 0;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action update failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_UPDATE;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -7779,19 +7822,8 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return 0;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return ret;
}
@@ -7830,20 +7862,16 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
struct mlx5_hw_q_job *job = NULL;
struct mlx5_aso_mtr *aso_mtr;
struct mlx5_flow_meter_info *fm;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
int ret = 0;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action destroy failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -7906,19 +7934,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return ret;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return ret;
}
@@ -8155,19 +8172,15 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
int ret;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action destroy failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_QUERY;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ data, MLX5_HW_Q_JOB_TYPE_QUERY,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8190,19 +8203,8 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return ret;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return 0;
}
--
2.34.1
* [PATCH 3/5] common/mlx5: update MTR ASO definitions
From: Gregory Etelson @ 2023-01-18 12:55 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
Update MTR ASO definitions for the QUOTA flow action.
The quota flow action requires the WQE READ capability and access to
the meter token fields.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
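Not part of the patch - for context, this is how patch 4 uses the new READ
flag when building an ACCESS_ASO WQE that reads the meter (token) objects
back into a registered buffer:

    wqe->aso_cseg.lkey = rte_cpu_to_be_32(qctx->mr.lkey);
    wqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(rd_addr >> 32));
    wqe->aso_cseg.va_l_r = rte_cpu_to_be_32(((uint32_t)rd_addr) |
                                            MLX5_ASO_CSEG_READ_ENABLE);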
drivers/common/mlx5/mlx5_prm.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 3790dc84b8..c25eb6b8c3 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3814,6 +3814,8 @@ enum mlx5_aso_op {
ASO_OPER_LOGICAL_OR = 0x1,
};
+#define MLX5_ASO_CSEG_READ_ENABLE 1
+
/* ASO WQE CTRL segment. */
struct mlx5_aso_cseg {
uint32_t va_h;
@@ -3828,6 +3830,8 @@ struct mlx5_aso_cseg {
uint64_t data_mask;
} __rte_packed;
+#define MLX5_MTR_MAX_TOKEN_VALUE INT32_MAX
+
/* A meter data segment - 2 per ASO WQE. */
struct mlx5_aso_mtr_dseg {
uint32_t v_bo_sc_bbog_mm;
--
2.34.1
* [PATCH 4/5] net/mlx5: add indirect QUOTA create/query/modify
From: Gregory Etelson @ 2023-01-18 12:55 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
Implement HWS functions for indirect QUOTA creation, modification and
query.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
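Not part of the patch - an application-side sketch of the new query and
update-and-query paths (the generic rte_flow_*query_update() wrappers are
assumed from the ethdev query_update series; port_id, handle and op_attr are
those obtained when the action was created). For quota, the combined
operation must use RTE_FLOW_QU_QUERY_FIRST, i.e. the returned value reflects
the tokens before the update is applied:

    struct rte_flow_query_quota qry = { 0 };
    const struct rte_flow_update_quota upd = {
            .op = RTE_FLOW_UPDATE_QUOTA_ADD,
            .quota = 4096,
    };
    struct rte_flow_error error;

    /* Synchronous flavor. */
    rte_flow_action_handle_query_update(port_id, handle, &upd, &qry,
                                        RTE_FLOW_QU_QUERY_FIRST, &error);

    /* Asynchronous flavor on flow queue 0; qry is filled once the
     * completion is drained with rte_flow_pull().
     */
    rte_flow_async_action_handle_query_update(port_id, 0, &op_attr, handle,
                                              &upd, &qry,
                                              RTE_FLOW_QU_QUERY_FIRST,
                                              NULL, &error);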
drivers/net/mlx5/meson.build | 1 +
drivers/net/mlx5/mlx5.h | 72 +++
drivers/net/mlx5/mlx5_flow.c | 62 +++
drivers/net/mlx5/mlx5_flow.h | 20 +-
drivers/net/mlx5/mlx5_flow_aso.c | 8 +-
drivers/net/mlx5/mlx5_flow_hw.c | 343 +++++++++++---
drivers/net/mlx5/mlx5_flow_quota.c | 726 +++++++++++++++++++++++++++++
7 files changed, 1151 insertions(+), 81 deletions(-)
create mode 100644 drivers/net/mlx5/mlx5_flow_quota.c
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index abd507bd88..323c381d2b 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -23,6 +23,7 @@ sources = files(
'mlx5_flow_dv.c',
'mlx5_flow_aso.c',
'mlx5_flow_flex.c',
+ 'mlx5_flow_quota.c',
'mlx5_mac.c',
'mlx5_rss.c',
'mlx5_rx.c',
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7c6bc91ddf..c18dffeab5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -46,6 +46,14 @@
#define MLX5_HW_INV_QUEUE UINT32_MAX
+/*
+ * The default ipool threshold value indicates which per_core_cache
+ * value to set.
+ */
+#define MLX5_HW_IPOOL_SIZE_THRESHOLD (1 << 19)
+/* The default min local cache size. */
+#define MLX5_HW_IPOOL_CACHE_MIN (1 << 9)
+
/*
* Number of modification commands.
* The maximal actions amount in FW is some constant, and it is 16 in the
@@ -349,6 +357,7 @@ enum mlx5_hw_job_type {
MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
+ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
};
#define MLX5_HW_MAX_ITEMS (16)
@@ -590,6 +599,7 @@ struct mlx5_aso_sq_elem {
char *query_data;
};
void *user_data;
+ struct mlx5_quota *quota_obj;
};
};
@@ -1645,6 +1655,33 @@ struct mlx5_hw_ctrl_flow {
struct mlx5_flow_hw_ctrl_rx;
+enum mlx5_quota_state {
+ MLX5_QUOTA_STATE_FREE, /* quota not in use */
+ MLX5_QUOTA_STATE_READY, /* quota is ready */
+ MLX5_QUOTA_STATE_WAIT /* quota waits WR completion */
+};
+
+struct mlx5_quota {
+ uint8_t state; /* object state */
+ uint8_t mode; /* metering mode */
+ /**
+ * Keep track of application update types.
+ * PMD does not allow 2 consecutive ADD updates.
+ */
+ enum rte_flow_update_quota_op last_update;
+};
+
+/* Bulk management structure for flow quota. */
+struct mlx5_quota_ctx {
+ uint32_t nb_quotas; /* Total number of quota objects */
+ struct mlx5dr_action *dr_action; /* HWS action */
+ struct mlx5_devx_obj *devx_obj; /* DEVX ranged object. */
+ struct mlx5_pmd_mr mr; /* MR for READ from MTR ASO */
+ struct mlx5_aso_mtr_dseg **read_buf; /* Buffers for READ */
+ struct mlx5_aso_sq *sq; /* SQs for sync/async ACCESS_ASO WRs */
+ struct mlx5_indexed_pool *quota_ipool; /* Manage quota objects */
+};
+
struct mlx5_priv {
struct rte_eth_dev_data *dev_data; /* Pointer to device data. */
struct mlx5_dev_ctx_shared *sh; /* Shared device context. */
@@ -1734,6 +1771,7 @@ struct mlx5_priv {
struct mlx5_flow_meter_policy *mtr_policy_arr; /* Policy array. */
struct mlx5_l3t_tbl *mtr_idx_tbl; /* Meter index lookup table. */
struct mlx5_mtr_bulk mtr_bulk; /* Meter index mapping for HWS */
+ struct mlx5_quota_ctx quota_ctx; /* Quota index mapping for HWS */
uint8_t skip_default_rss_reta; /* Skip configuration of default reta. */
uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. */
struct mlx5_mp_id mp_id; /* ID of a multi-process process */
@@ -2227,6 +2265,15 @@ int mlx5_aso_ct_queue_init(struct mlx5_dev_ctx_shared *sh,
uint32_t nb_queues);
int mlx5_aso_ct_queue_uninit(struct mlx5_dev_ctx_shared *sh,
struct mlx5_aso_ct_pools_mng *ct_mng);
+int
+mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
+ void *uar, uint16_t log_desc_n);
+void
+mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq);
/* mlx5_flow_flex.c */
@@ -2257,4 +2304,29 @@ struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx,
void *ctx);
void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
struct mlx5_list_entry *entry);
+
+int
+mlx5_flow_quota_destroy(struct rte_eth_dev *dev);
+int
+mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas);
+struct rte_flow_action_handle *
+mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_quota *conf,
+ struct mlx5_hw_q_job *job, bool push,
+ struct rte_flow_error *error);
+void
+mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_hw_q_job *job);
+int
+mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue,
+ struct rte_flow_action_handle *handle,
+ const struct rte_flow_action *update,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error);
+int mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error);
#endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index f5e2831480..768c4c4ae6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1075,6 +1075,20 @@ mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
void *data,
void *user_data,
struct rte_flow_error *error);
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error);
+static int
+mlx5_flow_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error);
static const struct rte_flow_ops mlx5_flow_ops = {
.validate = mlx5_flow_validate,
@@ -1090,6 +1104,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.action_handle_destroy = mlx5_action_handle_destroy,
.action_handle_update = mlx5_action_handle_update,
.action_handle_query = mlx5_action_handle_query,
+ .action_handle_query_update = mlx5_action_handle_query_update,
.tunnel_decap_set = mlx5_flow_tunnel_decap_set,
.tunnel_match = mlx5_flow_tunnel_match,
.tunnel_action_decap_release = mlx5_flow_tunnel_action_release,
@@ -1112,6 +1127,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.push = mlx5_flow_push,
.async_action_handle_create = mlx5_flow_async_action_handle_create,
.async_action_handle_update = mlx5_flow_async_action_handle_update,
+ .async_action_handle_query_update =
+ mlx5_flow_async_action_handle_query_update,
.async_action_handle_query = mlx5_flow_async_action_handle_query,
.async_action_handle_destroy = mlx5_flow_async_action_handle_destroy,
};
@@ -9031,6 +9048,27 @@ mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
update, user_data, error);
}
+static int
+mlx5_flow_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error)
+{
+ const struct mlx5_flow_driver_ops *fops =
+ flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+ if (!fops || !fops->async_action_query_update)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "async query_update not supported");
+ return fops->async_action_query_update
+ (dev, queue_id, op_attr, action_handle,
+ update, query, qu_mode, user_data, error);
+}
+
/**
* Query shared action.
*
@@ -10163,6 +10201,30 @@ mlx5_action_handle_query(struct rte_eth_dev *dev,
return flow_drv_action_query(dev, handle, data, fops, error);
}
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_attr attr = { .transfer = 0 };
+ enum mlx5_flow_drv_type drv_type = flow_get_drv_type(dev, &attr);
+ const struct mlx5_flow_driver_ops *fops;
+
+ if (drv_type == MLX5_FLOW_TYPE_MIN || drv_type == MLX5_FLOW_TYPE_MAX)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "invalid driver type");
+ fops = flow_get_drv_ops(drv_type);
+ if (!fops || !fops->action_query_update)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no query_update handler");
+ return fops->action_query_update(dev, handle, update,
+ query, qu_mode, error);
+}
+
/**
* Destroy all indirect actions (shared RSS).
*
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e376dcae93..9235af960d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -70,6 +70,7 @@ enum {
MLX5_INDIRECT_ACTION_TYPE_COUNT,
MLX5_INDIRECT_ACTION_TYPE_CT,
MLX5_INDIRECT_ACTION_TYPE_METER_MARK,
+ MLX5_INDIRECT_ACTION_TYPE_QUOTA,
};
/* Now, the maximal ports will be supported is 16, action number is 32M. */
@@ -218,6 +219,8 @@ enum mlx5_feature_name {
/* Meter color item */
#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
+#define MLX5_FLOW_ITEM_QUOTA (UINT64_C(1) << 45)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
@@ -303,6 +306,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_SEND_TO_KERNEL (1ull << 42)
#define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43)
#define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
+#define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
#define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1699,6 +1703,12 @@ typedef int (*mlx5_flow_action_query_t)
const struct rte_flow_action_handle *action,
void *data,
struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_query_update_t)
+ (struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *data,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error);
typedef int (*mlx5_flow_sync_domain_t)
(struct rte_eth_dev *dev,
uint32_t domains,
@@ -1845,7 +1855,13 @@ typedef int (*mlx5_flow_async_action_handle_update_t)
const void *update,
void *user_data,
struct rte_flow_error *error);
-
+typedef int (*mlx5_flow_async_action_handle_query_update_t)
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *data,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error);
typedef int (*mlx5_flow_async_action_handle_query_t)
(struct rte_eth_dev *dev,
uint32_t queue,
@@ -1896,6 +1912,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_action_destroy_t action_destroy;
mlx5_flow_action_update_t action_update;
mlx5_flow_action_query_t action_query;
+ mlx5_flow_action_query_update_t action_query_update;
mlx5_flow_sync_domain_t sync_domain;
mlx5_flow_discover_priorities_t discover_priorities;
mlx5_flow_item_create_t item_create;
@@ -1917,6 +1934,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_push_t push;
mlx5_flow_async_action_handle_create_t async_action_create;
mlx5_flow_async_action_handle_update_t async_action_update;
+ mlx5_flow_async_action_handle_query_update_t async_action_query_update;
mlx5_flow_async_action_handle_query_t async_action_query;
mlx5_flow_async_action_handle_destroy_t async_action_destroy;
};
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 0eb91c570f..3c08da0614 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -74,7 +74,7 @@ mlx5_aso_reg_mr(struct mlx5_common_device *cdev, size_t length,
* @param[in] sq
* ASO SQ to destroy.
*/
-static void
+void
mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)
{
mlx5_devx_sq_destroy(&sq->sq_obj);
@@ -148,7 +148,7 @@ mlx5_aso_age_init_sq(struct mlx5_aso_sq *sq)
* @param[in] sq
* ASO SQ to initialize.
*/
-static void
+void
mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq)
{
volatile struct mlx5_aso_wqe *restrict wqe;
@@ -219,7 +219,7 @@ mlx5_aso_ct_init_sq(struct mlx5_aso_sq *sq)
* @return
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static int
+int
mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
void *uar, uint16_t log_desc_n)
{
@@ -504,7 +504,7 @@ mlx5_aso_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe)
* @param[in] sq
* ASO SQ to use.
*/
-static void
+void
mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq)
{
struct mlx5_aso_cq *cq = &sq->cq;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 04d0612ee1..5815310ba6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -68,6 +68,9 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
struct mlx5_action_construct_data *act_data,
const struct mlx5_hw_actions *hw_acts,
const struct rte_flow_action *action);
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+ struct mlx5dr_rule_action *rule_act, uint32_t qid);
static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
@@ -791,6 +794,9 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev,
action_src, action_dst, idx))
return -1;
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv, &acts->rule_acts[action_dst], idx);
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
break;
@@ -1834,6 +1840,16 @@ flow_hw_shared_action_get(struct rte_eth_dev *dev,
return -1;
}
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+ struct mlx5dr_rule_action *rule_act, uint32_t qid)
+{
+ rule_act->action = priv->quota_ctx.dr_action;
+ rule_act->aso_meter.offset = qid - 1;
+ rule_act->aso_meter.init_color =
+ MLX5DR_ACTION_ASO_METER_COLOR_GREEN;
+}
+
/**
* Construct shared indirect action.
*
@@ -1957,6 +1973,9 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
(enum mlx5dr_action_aso_meter_color)
rte_col_2_mlx5_col(aso_mtr->init_color);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv, rule_act, idx);
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
break;
@@ -2263,6 +2282,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
rule_acts[act_data->action_dst].action =
priv->hw_vport[port_action->port_id];
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv,
+ rule_acts + act_data->action_dst,
+ act_data->shared_meter.id);
+ break;
case RTE_FLOW_ACTION_TYPE_METER:
meter = action->conf;
mtr_id = meter->mtr_id;
@@ -2702,11 +2726,18 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
if (ret_comp < n_res && priv->hws_ctpool)
ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
&res[ret_comp], n_res - ret_comp);
+ if (ret_comp < n_res && priv->quota_ctx.sq)
+ ret_comp += mlx5_aso_pull_completion(&priv->quota_ctx.sq[queue],
+ &res[ret_comp],
+ n_res - ret_comp);
for (i = 0; i < ret_comp; i++) {
job = (struct mlx5_hw_q_job *)res[i].user_data;
/* Restore user data. */
res[i].user_data = job->user_data;
- if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+ if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) ==
+ MLX5_INDIRECT_ACTION_TYPE_QUOTA) {
+ mlx5_quota_async_completion(dev, queue, job);
+ } else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
@@ -3687,6 +3718,10 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
return ret;
*action_flags |= MLX5_FLOW_ACTION_INDIRECT_AGE;
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ /* TODO: add proper quota verification */
+ *action_flags |= MLX5_FLOW_ACTION_QUOTA;
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type: %d", type);
return rte_flow_error_set(error, ENOTSUP,
@@ -3724,19 +3759,17 @@ flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
}
static inline uint16_t
-flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
- const struct rte_flow_action masks[],
- const struct rte_flow_action *mf_action,
- const struct rte_flow_action *mf_mask,
- struct rte_flow_action *new_actions,
- struct rte_flow_action *new_masks,
- uint64_t flags, uint32_t act_num)
+flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
+ struct rte_flow_action masks[],
+ const struct rte_flow_action *mf_actions,
+ const struct rte_flow_action *mf_masks,
+ uint64_t flags, uint32_t act_num,
+ uint32_t mf_num)
{
uint32_t i, tail;
MLX5_ASSERT(actions && masks);
- MLX5_ASSERT(new_actions && new_masks);
- MLX5_ASSERT(mf_action && mf_mask);
+ MLX5_ASSERT(mf_num > 0);
if (flags & MLX5_FLOW_ACTION_MODIFY_FIELD) {
/*
* Application action template already has Modify Field.
@@ -3787,12 +3820,10 @@ flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
i = 0;
insert:
tail = act_num - i; /* num action to move */
- memcpy(new_actions, actions, sizeof(actions[0]) * i);
- new_actions[i] = *mf_action;
- memcpy(new_actions + i + 1, actions + i, sizeof(actions[0]) * tail);
- memcpy(new_masks, masks, sizeof(masks[0]) * i);
- new_masks[i] = *mf_mask;
- memcpy(new_masks + i + 1, masks + i, sizeof(masks[0]) * tail);
+ memmove(actions + i + mf_num, actions + i, sizeof(actions[0]) * tail);
+ memcpy(actions + i, mf_actions, sizeof(actions[0]) * mf_num);
+ memmove(masks + i + mf_num, masks + i, sizeof(masks[0]) * tail);
+ memcpy(masks + i, mf_masks, sizeof(masks[0]) * mf_num);
return i;
}
@@ -4102,6 +4133,7 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT;
*curr_off = *curr_off + 1;
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
case RTE_FLOW_ACTION_TYPE_METER_MARK:
at->actions_off[action_src] = *curr_off;
action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_METER;
@@ -4331,6 +4363,96 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
&modify_action);
}
+static __rte_always_inline void
+flow_hw_actions_template_replace_container(const
+ struct rte_flow_action *actions,
+ const
+ struct rte_flow_action *masks,
+ struct rte_flow_action *new_actions,
+ struct rte_flow_action *new_masks,
+ struct rte_flow_action **ra,
+ struct rte_flow_action **rm,
+ uint32_t act_num)
+{
+ memcpy(new_actions, actions, sizeof(actions[0]) * act_num);
+ memcpy(new_masks, masks, sizeof(masks[0]) * act_num);
+ *ra = (void *)(uintptr_t)new_actions;
+ *rm = (void *)(uintptr_t)new_masks;
+}
+
+#define RX_META_COPY_ACTION ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field){ \
+ .operation = RTE_FLOW_MODIFY_SET, \
+ .dst = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = REG_B, \
+ }, \
+ .src = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = REG_C_1, \
+ }, \
+ .width = 32, \
+ } \
+})
+
+#define RX_META_COPY_MASK ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field){ \
+ .operation = RTE_FLOW_MODIFY_SET, \
+ .dst = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .src = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .width = UINT32_MAX, \
+ } \
+})
+
+#define QUOTA_COLOR_INC_ACTION ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field) { \
+ .operation = RTE_FLOW_MODIFY_ADD, \
+ .dst = { \
+ .field = RTE_FLOW_FIELD_METER_COLOR, \
+ .level = 0, .offset = 0 \
+ }, \
+ .src = { \
+ .field = RTE_FLOW_FIELD_VALUE, \
+ .level = 1, \
+ .offset = 0, \
+ }, \
+ .width = 2 \
+ } \
+})
+
+#define QUOTA_COLOR_INC_MASK ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field) { \
+ .operation = RTE_FLOW_MODIFY_ADD, \
+ .dst = { \
+ .field = RTE_FLOW_FIELD_METER_COLOR, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .src = { \
+ .field = RTE_FLOW_FIELD_VALUE, \
+ .level = 3, \
+ .offset = 0 \
+ }, \
+ .width = UINT32_MAX \
+ } \
+})
+
/**
* Create flow action template.
*
@@ -4369,40 +4491,9 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
int set_vlan_vid_ix = -1;
struct rte_flow_action_modify_field set_vlan_vid_spec = {0, };
struct rte_flow_action_modify_field set_vlan_vid_mask = {0, };
- const struct rte_flow_action_modify_field rx_mreg = {
- .operation = RTE_FLOW_MODIFY_SET,
- .dst = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = REG_B,
- },
- .src = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = REG_C_1,
- },
- .width = 32,
- };
- const struct rte_flow_action_modify_field rx_mreg_mask = {
- .operation = RTE_FLOW_MODIFY_SET,
- .dst = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
- .offset = UINT32_MAX,
- },
- .src = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
- .offset = UINT32_MAX,
- },
- .width = UINT32_MAX,
- };
- const struct rte_flow_action rx_cpy = {
- .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
- .conf = &rx_mreg,
- };
- const struct rte_flow_action rx_cpy_mask = {
- .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
- .conf = &rx_mreg_mask,
- };
+ struct rte_flow_action mf_actions[MLX5_HW_MAX_ACTS];
+ struct rte_flow_action mf_masks[MLX5_HW_MAX_ACTS];
+ uint32_t expand_mf_num = 0;
if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks,
&action_flags, error))
@@ -4432,44 +4523,57 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ACTION, NULL, "Too many actions");
return NULL;
}
+ if (set_vlan_vid_ix != -1) {
+ /* If temporary action buffer was not used, copy template actions to it */
+ if (ra == actions)
+ flow_hw_actions_template_replace_container(actions,
+ masks,
+ tmp_action,
+ tmp_mask,
+ &ra, &rm,
+ act_num);
+ flow_hw_set_vlan_vid(dev, ra, rm,
+ &set_vlan_vid_spec, &set_vlan_vid_mask,
+ set_vlan_vid_ix);
+ action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
+ }
+ if (action_flags & MLX5_FLOW_ACTION_QUOTA) {
+ mf_actions[expand_mf_num] = QUOTA_COLOR_INC_ACTION;
+ mf_masks[expand_mf_num] = QUOTA_COLOR_INC_MASK;
+ expand_mf_num++;
+ }
if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
priv->sh->config.dv_esw_en &&
(action_flags & (MLX5_FLOW_ACTION_QUEUE | MLX5_FLOW_ACTION_RSS))) {
/* Insert META copy */
- if (act_num + 1 > MLX5_HW_MAX_ACTS) {
+ mf_actions[expand_mf_num] = RX_META_COPY_ACTION;
+ mf_masks[expand_mf_num] = RX_META_COPY_MASK;
+ expand_mf_num++;
+ }
+ if (expand_mf_num) {
+ if (act_num + expand_mf_num > MLX5_HW_MAX_ACTS) {
rte_flow_error_set(error, E2BIG,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL, "cannot expand: too many actions");
return NULL;
}
+ if (ra == actions)
+ flow_hw_actions_template_replace_container(actions,
+ masks,
+ tmp_action,
+ tmp_mask,
+ &ra, &rm,
+ act_num);
/* Application should make sure only one Q/RSS exist in one rule. */
- pos = flow_hw_template_expand_modify_field(actions, masks,
- &rx_cpy,
- &rx_cpy_mask,
- tmp_action, tmp_mask,
+ pos = flow_hw_template_expand_modify_field(ra, rm,
+ mf_actions,
+ mf_masks,
action_flags,
- act_num);
- ra = tmp_action;
- rm = tmp_mask;
- act_num++;
+ act_num,
+ expand_mf_num);
+ act_num += expand_mf_num;
action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
}
- if (set_vlan_vid_ix != -1) {
- /* If temporary action buffer was not used, copy template actions to it */
- if (ra == actions && rm == masks) {
- for (i = 0; i < act_num; ++i) {
- tmp_action[i] = actions[i];
- tmp_mask[i] = masks[i];
- if (actions[i].type == RTE_FLOW_ACTION_TYPE_END)
- break;
- }
- ra = tmp_action;
- rm = tmp_mask;
- }
- flow_hw_set_vlan_vid(dev, ra, rm,
- &set_vlan_vid_spec, &set_vlan_vid_mask,
- set_vlan_vid_ix);
- }
act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error);
if (act_len <= 0)
return NULL;
@@ -4732,6 +4836,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_ICMP:
case RTE_FLOW_ITEM_TYPE_ICMP6:
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+ case RTE_FLOW_ITEM_TYPE_QUOTA:
break;
case RTE_FLOW_ITEM_TYPE_INTEGRITY:
/*
@@ -6932,6 +7037,12 @@ flow_hw_configure(struct rte_eth_dev *dev,
"Failed to set up Rx control flow templates");
goto err;
}
+ /* Initialize quotas */
+ if (port_attr->nb_quotas) {
+ ret = mlx5_flow_quota_init(dev, port_attr->nb_quotas);
+ if (ret)
+ goto err;
+ }
/* Initialize meter library*/
if (port_attr->nb_meters)
if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 1, 1, nb_q_updated))
@@ -7031,6 +7142,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
priv->hws_cpool = NULL;
}
+ mlx5_flow_quota_destroy(dev);
flow_hw_free_vport_actions(priv);
for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
if (priv->hw_drop[i])
@@ -7124,6 +7236,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
flow_hw_ct_mng_destroy(dev, priv->ct_mng);
priv->ct_mng = NULL;
}
+ mlx5_flow_quota_destroy(dev);
for (i = 0; i < priv->nb_queue; i++) {
rte_ring_free(priv->hw_q[i].indir_iq);
rte_ring_free(priv->hw_q[i].indir_cq);
@@ -7524,6 +7637,8 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
return flow_hw_validate_action_meter_mark(dev, action, error);
case RTE_FLOW_ACTION_TYPE_RSS:
return flow_dv_action_validate(dev, conf, action, error);
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ return 0;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -7695,6 +7810,11 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
case RTE_FLOW_ACTION_TYPE_RSS:
handle = flow_dv_action_create(dev, conf, action, error);
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ aso = true;
+ handle = mlx5_quota_alloc(dev, queue, action->conf,
+ job, push, error);
+ break;
default:
rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
NULL, "action type not supported");
@@ -7815,6 +7935,11 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_RSS:
ret = flow_dv_action_update(dev, handle, update, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ aso = true;
+ ret = mlx5_quota_query_update(dev, queue, handle, update, NULL,
+ job, push, error);
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -7927,6 +8052,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_RSS:
ret = flow_dv_action_destroy(dev, handle, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -8196,6 +8323,11 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
job, push, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ aso = true;
+ ret = mlx5_quota_query(dev, queue, handle, data,
+ job, push, error);
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -8205,7 +8337,51 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
}
if (job)
flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
- return 0;
+ return ret;
+}
+
+static int
+flow_hw_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_op_attr *attr,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ bool push = flow_hw_action_push(attr);
+ bool aso = false;
+ struct mlx5_hw_q_job *job = NULL;
+ int ret = 0;
+
+ if (attr) {
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ query,
+ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY,
+ error);
+ if (!job)
+ return -rte_errno;
+ }
+ switch (MLX5_INDIRECT_ACTION_TYPE_GET(handle)) {
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ if (qu_mode != RTE_FLOW_QU_QUERY_FIRST) {
+ ret = rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL, "quota action must query before update");
+ break;
+ }
+ aso = true;
+ ret = mlx5_quota_query_update(dev, queue, handle,
+ update, query, job, push, error);
+ break;
+ default:
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, "update and query not supported");
+ }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
+ return ret;
}
static int
@@ -8217,6 +8393,19 @@ flow_hw_action_query(struct rte_eth_dev *dev,
handle, data, NULL, error);
}
+static int
+flow_hw_action_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error)
+{
+ return flow_hw_async_action_handle_query_update(dev, MLX5_HW_INV_QUEUE,
+ NULL, handle, update,
+ query, qu_mode, NULL,
+ error);
+}
+
/**
* Get aged-out flows of a given port on the given HWS flow queue.
*
@@ -8329,12 +8518,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
.async_action_create = flow_hw_action_handle_create,
.async_action_destroy = flow_hw_action_handle_destroy,
.async_action_update = flow_hw_action_handle_update,
+ .async_action_query_update = flow_hw_async_action_handle_query_update,
.async_action_query = flow_hw_action_handle_query,
.action_validate = flow_hw_action_validate,
.action_create = flow_hw_action_create,
.action_destroy = flow_hw_action_destroy,
.action_update = flow_hw_action_update,
.action_query = flow_hw_action_query,
+ .action_query_update = flow_hw_action_query_update,
.query = flow_hw_query,
.get_aged_flows = flow_hw_get_aged_flows,
.get_q_aged_flows = flow_hw_get_q_aged_flows,
diff --git a/drivers/net/mlx5/mlx5_flow_quota.c b/drivers/net/mlx5/mlx5_flow_quota.c
new file mode 100644
index 0000000000..0639620848
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_flow_quota.c
@@ -0,0 +1,726 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Nvidia Inc. All rights reserved.
+ */
+#include <stddef.h>
+#include <rte_eal_paging.h>
+
+#include <mlx5_prm.h>
+
+#include "mlx5.h"
+#include "mlx5_malloc.h"
+#include "mlx5_flow.h"
+
+typedef void (*quota_wqe_cmd_t)(volatile struct mlx5_aso_wqe *restrict,
+ struct mlx5_quota_ctx *, uint32_t, uint32_t,
+ void *);
+
+#define MLX5_ASO_MTR1_INIT_MASK 0xffffffffULL
+#define MLX5_ASO_MTR0_INIT_MASK ((MLX5_ASO_MTR1_INIT_MASK) << 32)
+
+static __rte_always_inline bool
+is_aso_mtr1_obj(uint32_t qix)
+{
+ return (qix & 1) != 0;
+}
+
+static __rte_always_inline bool
+is_quota_sync_queue(const struct mlx5_priv *priv, uint32_t queue)
+{
+ return queue >= priv->nb_queue - 1;
+}
+
+static __rte_always_inline uint32_t
+quota_sync_queue(const struct mlx5_priv *priv)
+{
+ return priv->nb_queue - 1;
+}
+
+static __rte_always_inline uint32_t
+mlx5_quota_wqe_read_offset(uint32_t qix, uint32_t sq_index)
+{
+ return 2 * sq_index + (qix & 1);
+}
+
+static int32_t
+mlx5_quota_fetch_tokens(const struct mlx5_aso_mtr_dseg *rd_buf)
+{
+ int c_tok = (int)rte_be_to_cpu_32(rd_buf->c_tokens);
+ int e_tok = (int)rte_be_to_cpu_32(rd_buf->e_tokens);
+ int result;
+
+ DRV_LOG(DEBUG, "c_tokens %d e_tokens %d\n",
+ rte_be_to_cpu_32(rd_buf->c_tokens),
+ rte_be_to_cpu_32(rd_buf->e_tokens));
+ /* Query after SET ignores negative E tokens */
+ if (c_tok >= 0 && e_tok < 0)
+ result = c_tok;
+ /**
+ * If number of tokens in Meter bucket is zero or above,
+ * Meter hardware will use that bucket and can set number of tokens to
+ * negative value.
+ * Quota can discard negative C tokens in query report.
+ * That is a known hardware limitation.
+ * Use case example:
+ *
+ * C E Result
+ * 250 250 500
+ * 50 250 300
+ * -150 250 100
+ * -150 50 50 *
+ * -150 -150 -300
+ *
+ */
+ else if (c_tok < 0 && e_tok >= 0 && (c_tok + e_tok) < 0)
+ result = e_tok;
+ else
+ result = c_tok + e_tok;
+
+ return result;
+}
+
+static void
+mlx5_quota_query_update_async_cmpl(struct mlx5_hw_q_job *job)
+{
+ struct rte_flow_query_quota *query = job->query.user;
+
+ query->quota = mlx5_quota_fetch_tokens(job->query.hw);
+}
+
+void
+mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_hw_q_job *job)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t qix = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
+ struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, qix);
+
+ RTE_SET_USED(queue);
+ qobj->state = MLX5_QUOTA_STATE_READY;
+ switch (job->type) {
+ case MLX5_HW_Q_JOB_TYPE_CREATE:
+ break;
+ case MLX5_HW_Q_JOB_TYPE_QUERY:
+ case MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY:
+ mlx5_quota_query_update_async_cmpl(job);
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_set_aso_read(volatile struct mlx5_aso_wqe *restrict wqe,
+ struct mlx5_quota_ctx *qctx, uint32_t queue)
+{
+ struct mlx5_aso_sq *sq = qctx->sq + queue;
+ uint32_t sq_mask = (1 << sq->log_desc_n) - 1;
+ uint32_t sq_head = sq->head & sq_mask;
+ uintptr_t rd_addr = (uintptr_t)(qctx->read_buf[queue] + 2 * sq_head);
+
+ wqe->aso_cseg.lkey = rte_cpu_to_be_32(qctx->mr.lkey);
+ wqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(rd_addr >> 32));
+ wqe->aso_cseg.va_l_r = rte_cpu_to_be_32(((uint32_t)rd_addr) |
+ MLX5_ASO_CSEG_READ_ENABLE);
+}
+
+#define MLX5_ASO_MTR1_ADD_MASK 0x00000F00ULL
+#define MLX5_ASO_MTR1_SET_MASK 0x000F0F00ULL
+#define MLX5_ASO_MTR0_ADD_MASK ((MLX5_ASO_MTR1_ADD_MASK) << 32)
+#define MLX5_ASO_MTR0_SET_MASK ((MLX5_ASO_MTR1_SET_MASK) << 32)
+
+static __rte_always_inline void
+mlx5_quota_wqe_set_mtr_tokens(volatile struct mlx5_aso_wqe *restrict wqe,
+ uint32_t qix, void *arg)
+{
+ volatile struct mlx5_aso_mtr_dseg *mtr_dseg;
+ const struct rte_flow_update_quota *conf = arg;
+ bool set_op = (conf->op == RTE_FLOW_UPDATE_QUOTA_SET);
+
+ if (is_aso_mtr1_obj(qix)) {
+ wqe->aso_cseg.data_mask = set_op ?
+ RTE_BE64(MLX5_ASO_MTR1_SET_MASK) :
+ RTE_BE64(MLX5_ASO_MTR1_ADD_MASK);
+ mtr_dseg = wqe->aso_dseg.mtrs + 1;
+ } else {
+ wqe->aso_cseg.data_mask = set_op ?
+ RTE_BE64(MLX5_ASO_MTR0_SET_MASK) :
+ RTE_BE64(MLX5_ASO_MTR0_ADD_MASK);
+ mtr_dseg = wqe->aso_dseg.mtrs;
+ }
+ if (set_op) {
+ /* prevent using E tokens when C tokens exhausted */
+ mtr_dseg->e_tokens = -1;
+ mtr_dseg->c_tokens = rte_cpu_to_be_32(conf->quota);
+ } else {
+ mtr_dseg->e_tokens = rte_cpu_to_be_32(conf->quota);
+ }
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_query(volatile struct mlx5_aso_wqe *restrict wqe,
+ struct mlx5_quota_ctx *qctx, __rte_unused uint32_t qix,
+ uint32_t queue, __rte_unused void *arg)
+{
+ mlx5_quota_wqe_set_aso_read(wqe, qctx, queue);
+ wqe->aso_cseg.data_mask = 0ull; /* clear MTR ASO data modification */
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_update(volatile struct mlx5_aso_wqe *restrict wqe,
+ __rte_unused struct mlx5_quota_ctx *qctx, uint32_t qix,
+ __rte_unused uint32_t queue, void *arg)
+{
+ mlx5_quota_wqe_set_mtr_tokens(wqe, qix, arg);
+ wqe->aso_cseg.va_l_r = 0; /* clear READ flag */
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_query_update(volatile struct mlx5_aso_wqe *restrict wqe,
+ struct mlx5_quota_ctx *qctx, uint32_t qix,
+ uint32_t queue, void *arg)
+{
+ mlx5_quota_wqe_set_aso_read(wqe, qctx, queue);
+ mlx5_quota_wqe_set_mtr_tokens(wqe, qix, arg);
+}
+
+static __rte_always_inline void
+mlx5_quota_set_init_wqe(volatile struct mlx5_aso_wqe *restrict wqe,
+ struct mlx5_quota_ctx *qctx, uint32_t qix,
+ __rte_unused uint32_t queue, void *arg)
+{
+ volatile struct mlx5_aso_mtr_dseg *mtr_dseg;
+ const struct rte_flow_action_quota *conf = arg;
+ const struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, qix + 1);
+
+ if (is_aso_mtr1_obj(qix)) {
+ wqe->aso_cseg.data_mask =
+ rte_cpu_to_be_64(MLX5_ASO_MTR1_INIT_MASK);
+ mtr_dseg = wqe->aso_dseg.mtrs + 1;
+ } else {
+ wqe->aso_cseg.data_mask =
+ rte_cpu_to_be_64(MLX5_ASO_MTR0_INIT_MASK);
+ mtr_dseg = wqe->aso_dseg.mtrs;
+ }
+ mtr_dseg->e_tokens = -1;
+ mtr_dseg->c_tokens = rte_cpu_to_be_32(conf->quota);
+ mtr_dseg->v_bo_sc_bbog_mm |= rte_cpu_to_be_32
+ (qobj->mode << ASO_DSEG_MTR_MODE);
+}
+
+static __rte_always_inline void
+mlx5_quota_cmd_completed_status(struct mlx5_aso_sq *sq, uint16_t n)
+{
+ uint16_t i, mask = (1 << sq->log_desc_n) - 1;
+
+ for (i = 0; i < n; i++) {
+ uint8_t state = MLX5_QUOTA_STATE_WAIT;
+ struct mlx5_quota *quota_obj =
+ sq->elts[(sq->tail + i) & mask].quota_obj;
+
+ __atomic_compare_exchange_n(&quota_obj->state, &state,
+ MLX5_QUOTA_STATE_READY, false,
+ __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ }
+}
+
+static void
+mlx5_quota_cmd_completion_handle(struct mlx5_aso_sq *sq)
+{
+ struct mlx5_aso_cq *cq = &sq->cq;
+ volatile struct mlx5_cqe *restrict cqe;
+ const unsigned int cq_size = 1 << cq->log_desc_n;
+ const unsigned int mask = cq_size - 1;
+ uint32_t idx;
+ uint32_t next_idx = cq->cq_ci & mask;
+ uint16_t max;
+ uint16_t n = 0;
+ int ret;
+
+ MLX5_ASSERT(rte_spinlock_is_locked(&sq->sqsl));
+ max = (uint16_t)(sq->head - sq->tail);
+ if (unlikely(!max))
+ return;
+ do {
+ idx = next_idx;
+ next_idx = (cq->cq_ci + 1) & mask;
+ rte_prefetch0(&cq->cq_obj.cqes[next_idx]);
+ cqe = &cq->cq_obj.cqes[idx];
+ ret = check_cqe(cqe, cq_size, cq->cq_ci);
+ /*
+ * Be sure owner read is done before any other cookie field or
+ * opaque field.
+ */
+ rte_io_rmb();
+ if (ret != MLX5_CQE_STATUS_SW_OWN) {
+ if (likely(ret == MLX5_CQE_STATUS_HW_OWN))
+ break;
+ mlx5_aso_cqe_err_handle(sq);
+ } else {
+ n++;
+ }
+ cq->cq_ci++;
+ } while (1);
+ if (likely(n)) {
+ mlx5_quota_cmd_completed_status(sq, n);
+ sq->tail += n;
+ rte_io_wmb();
+ cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+ }
+}
+
+static int
+mlx5_quota_cmd_wait_cmpl(struct mlx5_aso_sq *sq, struct mlx5_quota *quota_obj)
+{
+ uint32_t poll_cqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES;
+
+ do {
+ rte_spinlock_lock(&sq->sqsl);
+ mlx5_quota_cmd_completion_handle(sq);
+ rte_spinlock_unlock(&sq->sqsl);
+ if (__atomic_load_n(&quota_obj->state, __ATOMIC_RELAXED) ==
+ MLX5_QUOTA_STATE_READY)
+ return 0;
+ } while (poll_cqe_times -= MLX5_ASO_WQE_CQE_RESPONSE_DELAY);
+ DRV_LOG(ERR, "QUOTA: failed to poll command CQ");
+ return -1;
+}
+
+static int
+mlx5_quota_cmd_wqe(struct rte_eth_dev *dev, struct mlx5_quota *quota_obj,
+ quota_wqe_cmd_t wqe_cmd, uint32_t qix, uint32_t queue,
+ struct mlx5_hw_q_job *job, bool push, void *arg)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ struct mlx5_aso_sq *sq = qctx->sq + queue;
+ uint32_t head, sq_mask = (1 << sq->log_desc_n) - 1;
+ bool sync_queue = is_quota_sync_queue(priv, queue);
+ volatile struct mlx5_aso_wqe *restrict wqe;
+ int ret = 0;
+
+ if (sync_queue)
+ rte_spinlock_lock(&sq->sqsl);
+ head = sq->head & sq_mask;
+ wqe = &sq->sq_obj.aso_wqes[head];
+ wqe_cmd(wqe, qctx, qix, queue, arg);
+ wqe->general_cseg.misc = rte_cpu_to_be_32(qctx->devx_obj->id + (qix >> 1));
+ wqe->general_cseg.opcode = rte_cpu_to_be_32
+ (ASO_OPC_MOD_POLICER << WQE_CSEG_OPC_MOD_OFFSET |
+ sq->pi << WQE_CSEG_WQE_INDEX_OFFSET | MLX5_OPCODE_ACCESS_ASO);
+ sq->head++;
+ sq->pi += 2; /* Each WQE contains 2 WQEBB */
+ if (push) {
+ mlx5_doorbell_ring(&sh->tx_uar.bf_db, *(volatile uint64_t *)wqe,
+ sq->pi, &sq->sq_obj.db_rec[MLX5_SND_DBR],
+ !sh->tx_uar.dbnc);
+ sq->db_pi = sq->pi;
+ }
+ sq->db = wqe;
+ job->query.hw = qctx->read_buf[queue] +
+ mlx5_quota_wqe_read_offset(qix, head);
+ sq->elts[head].quota_obj = sync_queue ?
+ quota_obj : (typeof(quota_obj))job;
+ if (sync_queue) {
+ rte_spinlock_unlock(&sq->sqsl);
+ ret = mlx5_quota_cmd_wait_cmpl(sq, quota_obj);
+ }
+ return ret;
+}
+
+static void
+mlx5_quota_destroy_sq(struct mlx5_priv *priv)
+{
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t i, nb_queues = priv->nb_queue;
+
+ if (!qctx->sq)
+ return;
+ for (i = 0; i < nb_queues; i++)
+ mlx5_aso_destroy_sq(qctx->sq + i);
+ mlx5_free(qctx->sq);
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_init_common(struct mlx5_aso_sq *sq,
+ volatile struct mlx5_aso_wqe *restrict wqe)
+{
+#define ASO_MTR_DW0 RTE_BE32(1 << ASO_DSEG_VALID_OFFSET | \
+ MLX5_FLOW_COLOR_GREEN << ASO_DSEG_SC_OFFSET)
+
+ memset((void *)(uintptr_t)wqe, 0, sizeof(*wqe));
+ wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |
+ (sizeof(*wqe) >> 4));
+ wqe->aso_cseg.operand_masks = RTE_BE32
+ (0u | (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) |
+ (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_1_OPER_OFFSET) |
+ (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_0_OPER_OFFSET) |
+ (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET));
+ wqe->general_cseg.flags = RTE_BE32
+ (MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET);
+ wqe->aso_dseg.mtrs[0].v_bo_sc_bbog_mm = ASO_MTR_DW0;
+ /**
+ * ASO Meter tokens auto-update must be disabled in quota action.
+ * Tokens auto-update is disabled when the Meter *IR values are set to
+ * ((0x1u << 16) | (0x1Eu << 24)), not to 0x00.
+ */
+ wqe->aso_dseg.mtrs[0].cbs_cir = RTE_BE32((0x1u << 16) | (0x1Eu << 24));
+ wqe->aso_dseg.mtrs[0].ebs_eir = RTE_BE32((0x1u << 16) | (0x1Eu << 24));
+ wqe->aso_dseg.mtrs[1].v_bo_sc_bbog_mm = ASO_MTR_DW0;
+ wqe->aso_dseg.mtrs[1].cbs_cir = RTE_BE32((0x1u << 16) | (0x1Eu << 24));
+ wqe->aso_dseg.mtrs[1].ebs_eir = RTE_BE32((0x1u << 16) | (0x1Eu << 24));
+#undef ASO_MTR_DW0
+}
+
+static void
+mlx5_quota_init_sq(struct mlx5_aso_sq *sq)
+{
+ uint32_t i, size = 1 << sq->log_desc_n;
+
+ for (i = 0; i < size; i++)
+ mlx5_quota_wqe_init_common(sq, sq->sq_obj.aso_wqes + i);
+}
+
+static int
+mlx5_quota_alloc_sq(struct mlx5_priv *priv)
+{
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t i, nb_queues = priv->nb_queue;
+
+ qctx->sq = mlx5_malloc(MLX5_MEM_ZERO,
+ sizeof(qctx->sq[0]) * nb_queues,
+ 0, SOCKET_ID_ANY);
+ if (!qctx->sq) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate SQ pool");
+ return -ENOMEM;
+ }
+ for (i = 0; i < nb_queues; i++) {
+ int ret = mlx5_aso_sq_create
+ (sh->cdev, qctx->sq + i, sh->tx_uar.obj,
+ rte_log2_u32(priv->hw_q[i].size));
+ if (ret) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate SQ[%u]", i);
+ return -ENOMEM;
+ }
+ mlx5_quota_init_sq(qctx->sq + i);
+ }
+ return 0;
+}
+
+static void
+mlx5_quota_destroy_read_buf(struct mlx5_priv *priv)
+{
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+
+ if (qctx->mr.lkey) {
+ void *addr = qctx->mr.addr;
+ sh->cdev->mr_scache.dereg_mr_cb(&qctx->mr);
+ mlx5_free(addr);
+ }
+ if (qctx->read_buf)
+ mlx5_free(qctx->read_buf);
+}
+
+static int
+mlx5_quota_alloc_read_buf(struct mlx5_priv *priv)
+{
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t i, nb_queues = priv->nb_queue;
+ uint32_t sq_size_sum;
+ size_t page_size = rte_mem_page_size();
+ struct mlx5_aso_mtr_dseg *buf;
+ size_t rd_buf_size;
+ int ret;
+
+ for (i = 0, sq_size_sum = 0; i < nb_queues; i++)
+ sq_size_sum += priv->hw_q[i].size;
+ /* ACCESS MTR ASO WQE reads 2 MTR objects */
+ rd_buf_size = 2 * sq_size_sum * sizeof(buf[0]);
+ buf = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, rd_buf_size,
+ page_size, SOCKET_ID_ANY);
+ if (!buf) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate MTR ASO READ buffer [1]");
+ return -ENOMEM;
+ }
+ ret = sh->cdev->mr_scache.reg_mr_cb(sh->cdev->pd, buf,
+ rd_buf_size, &qctx->mr);
+ if (ret) {
+ DRV_LOG(DEBUG, "QUOTA: failed to register MTR ASO READ MR");
+ return -errno;
+ }
+ qctx->read_buf = mlx5_malloc(MLX5_MEM_ZERO,
+ sizeof(qctx->read_buf[0]) * nb_queues,
+ 0, SOCKET_ID_ANY);
+ if (!qctx->read_buf) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate MTR ASO READ buffer [2]");
+ return -ENOMEM;
+ }
+ for (i = 0; i < nb_queues; i++) {
+ qctx->read_buf[i] = buf;
+ buf += 2 * priv->hw_q[i].size;
+ }
+ return 0;
+}
+
+static __rte_always_inline int
+mlx5_quota_check_ready(struct mlx5_quota *qobj, struct rte_flow_error *error)
+{
+ uint8_t state = MLX5_QUOTA_STATE_READY;
+ bool verdict = __atomic_compare_exchange_n
+ (&qobj->state, &state, MLX5_QUOTA_STATE_WAIT, false,
+ __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+
+ if (!verdict)
+ return rte_flow_error_set(error, EBUSY,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "action is busy");
+ return 0;
+}
+
+int
+mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t work_queue = !is_quota_sync_queue(priv, queue) ?
+ queue : quota_sync_queue(priv);
+ uint32_t id = MLX5_INDIRECT_ACTION_IDX_GET(handle);
+ uint32_t qix = id - 1;
+ struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, id);
+ struct mlx5_hw_q_job sync_job;
+ int ret;
+
+ if (!qobj)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "invalid query handle");
+ ret = mlx5_quota_check_ready(qobj, error);
+ if (ret)
+ return ret;
+ ret = mlx5_quota_cmd_wqe(dev, qobj, mlx5_quota_wqe_query, qix, work_queue,
+ async_job ? async_job : &sync_job, push, NULL);
+ if (ret) {
+ __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_READY,
+ __ATOMIC_RELAXED);
+ return rte_flow_error_set(error, EAGAIN,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "try again");
+ }
+ if (is_quota_sync_queue(priv, queue))
+ query->quota = mlx5_quota_fetch_tokens(sync_job.query.hw);
+ return 0;
+}
+
+int
+mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue,
+ struct rte_flow_action_handle *handle,
+ const struct rte_flow_action *update,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ const struct rte_flow_update_quota *conf = update->conf;
+ uint32_t work_queue = !is_quota_sync_queue(priv, queue) ?
+ queue : quota_sync_queue(priv);
+ uint32_t id = MLX5_INDIRECT_ACTION_IDX_GET(handle);
+ uint32_t qix = id - 1;
+ struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, id);
+ struct mlx5_hw_q_job sync_job;
+ quota_wqe_cmd_t wqe_cmd = query ?
+ mlx5_quota_wqe_query_update :
+ mlx5_quota_wqe_update;
+ int ret;
+
+ if (conf->quota > MLX5_MTR_MAX_TOKEN_VALUE)
+ return rte_flow_error_set(error, E2BIG,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "update value too big");
+ if (!qobj)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "invalid query_update handle");
+ if (conf->op == RTE_FLOW_UPDATE_QUOTA_ADD &&
+ qobj->last_update == RTE_FLOW_UPDATE_QUOTA_ADD)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "cannot add twice");
+ ret = mlx5_quota_check_ready(qobj, error);
+ if (ret)
+ return ret;
+ ret = mlx5_quota_cmd_wqe(dev, qobj, wqe_cmd, qix, work_queue,
+ async_job ? async_job : &sync_job, push,
+ (void *)(uintptr_t)update->conf);
+ if (ret) {
+ __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_READY,
+ __ATOMIC_RELAXED);
+ return rte_flow_error_set(error, EAGAIN,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "try again");
+ }
+ qobj->last_update = conf->op;
+ if (query && is_quota_sync_queue(priv, queue))
+ query->quota = mlx5_quota_fetch_tokens(sync_job.query.hw);
+ return 0;
+}
+
+struct rte_flow_action_handle *
+mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_quota *conf,
+ struct mlx5_hw_q_job *job, bool push,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t id;
+ struct mlx5_quota *qobj;
+ uintptr_t handle = (uintptr_t)MLX5_INDIRECT_ACTION_TYPE_QUOTA <<
+ MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+ uint32_t work_queue = !is_quota_sync_queue(priv, queue) ?
+ queue : quota_sync_queue(priv);
+ struct mlx5_hw_q_job sync_job;
+ uint8_t state = MLX5_QUOTA_STATE_FREE;
+ bool verdict;
+ int ret;
+
+ qobj = mlx5_ipool_malloc(qctx->quota_ipool, &id);
+ if (!qobj) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "quota: failed to allocate quota object");
+ return NULL;
+ }
+ verdict = __atomic_compare_exchange_n
+ (&qobj->state, &state, MLX5_QUOTA_STATE_WAIT, false,
+ __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ if (!verdict) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "quota: new quota object has invalid state");
+ return NULL;
+ }
+ switch (conf->mode) {
+ case RTE_FLOW_QUOTA_MODE_L2:
+ qobj->mode = MLX5_METER_MODE_L2_LEN;
+ break;
+ case RTE_FLOW_QUOTA_MODE_PACKET:
+ qobj->mode = MLX5_METER_MODE_PKT;
+ break;
+ default:
+ qobj->mode = MLX5_METER_MODE_IP_LEN;
+ }
+ ret = mlx5_quota_cmd_wqe(dev, qobj, mlx5_quota_set_init_wqe, id - 1,
+ work_queue, job ? job : &sync_job, push,
+ (void *)(uintptr_t)conf);
+ if (ret) {
+ mlx5_ipool_free(qctx->quota_ipool, id);
+ __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_FREE,
+ __ATOMIC_RELAXED);
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "quota: WR failure");
+ return 0;
+ }
+ return (struct rte_flow_action_handle *)(handle | id);
+}
+
+int
+mlx5_flow_quota_destroy(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ int ret;
+
+ if (qctx->quota_ipool)
+ mlx5_ipool_destroy(qctx->quota_ipool);
+ mlx5_quota_destroy_sq(priv);
+ mlx5_quota_destroy_read_buf(priv);
+ if (qctx->dr_action) {
+ ret = mlx5dr_action_destroy(qctx->dr_action);
+ if (ret)
+ DRV_LOG(ERR, "QUOTA: failed to destroy DR action");
+ }
+ if (qctx->devx_obj) {
+ ret = mlx5_devx_cmd_destroy(qctx->devx_obj);
+ if (ret)
+ DRV_LOG(ERR, "QUOTA: failed to destroy MTR ASO object");
+ }
+ memset(qctx, 0, sizeof(*qctx));
+ return 0;
+}
+
+#define MLX5_QUOTA_IPOOL_TRUNK_SIZE (1u << 12)
+#define MLX5_QUOTA_IPOOL_CACHE_SIZE (1u << 13)
+int
+mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ int reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
+ uint32_t flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+ struct mlx5_indexed_pool_config quota_ipool_cfg = {
+ .size = sizeof(struct mlx5_quota),
+ .trunk_size = RTE_MIN(nb_quotas, MLX5_QUOTA_IPOOL_TRUNK_SIZE),
+ .need_lock = 1,
+ .release_mem_en = !!priv->sh->config.reclaim_mode,
+ .malloc = mlx5_malloc,
+ .max_idx = nb_quotas,
+ .free = mlx5_free,
+ .type = "mlx5_flow_quota_index_pool"
+ };
+ int ret;
+
+ if (!nb_quotas) {
+ DRV_LOG(DEBUG, "QUOTA: cannot create quota with 0 objects");
+ return -EINVAL;
+ }
+ if (!priv->mtr_en || !sh->meter_aso_en) {
+ DRV_LOG(DEBUG, "QUOTA: no MTR support");
+ return -ENOTSUP;
+ }
+ if (reg_id < 0) {
+ DRV_LOG(DEBUG, "QUOTA: MRT register not available");
+ return -ENOTSUP;
+ }
+ qctx->devx_obj = mlx5_devx_cmd_create_flow_meter_aso_obj
+ (sh->cdev->ctx, sh->cdev->pdn, rte_log2_u32(nb_quotas >> 1));
+ if (!qctx->devx_obj) {
+ DRV_LOG(DEBUG, "QUOTA: cannot allocate MTR ASO objects");
+ return -ENOMEM;
+ }
+ if (sh->config.dv_esw_en && priv->master)
+ flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
+ qctx->dr_action = mlx5dr_action_create_aso_meter
+ (priv->dr_ctx, (struct mlx5dr_devx_obj *)qctx->devx_obj,
+ reg_id - REG_C_0, flags);
+ if (!qctx->dr_action) {
+ DRV_LOG(DEBUG, "QUOTA: failed to create DR action");
+ ret = -ENOMEM;
+ goto err;
+ }
+ ret = mlx5_quota_alloc_read_buf(priv);
+ if (ret)
+ goto err;
+ ret = mlx5_quota_alloc_sq(priv);
+ if (ret)
+ goto err;
+ if (nb_quotas < MLX5_QUOTA_IPOOL_TRUNK_SIZE)
+ quota_ipool_cfg.per_core_cache = 0;
+ else if (nb_quotas < MLX5_HW_IPOOL_SIZE_THRESHOLD)
+ quota_ipool_cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
+ else
+ quota_ipool_cfg.per_core_cache = MLX5_QUOTA_IPOOL_CACHE_SIZE;
+ qctx->quota_ipool = mlx5_ipool_create(&quota_ipool_cfg);
+ if (!qctx->quota_ipool) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate quota pool");
+ ret = -ENOMEM;
+ goto err;
+ }
+ qctx->nb_quotas = nb_quotas;
+ return 0;
+err:
+ mlx5_flow_quota_destroy(dev);
+ return ret;
+}
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 5/5] mlx5dr: Definer, translate RTE quota item
2023-01-18 12:55 [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
` (3 preceding siblings ...)
2023-01-18 12:55 ` [PATCH 4/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
@ 2023-01-18 12:55 ` Gregory Etelson
2023-03-08 2:58 ` [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify Suanming Mou
` (2 subsequent siblings)
7 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-01-18 12:55 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
MLX5 PMD implements QUOTA with Meter object.
PMD Quota action translation implicitly increments
Meter register value after HW assigns it.
Meter register values are:
HW QUOTA(HW+1) QUOTA state
RED 0 1 (01b) BLOCK
YELLOW 1 2 (10b) PASS
GREEN 2 3 (11b) PASS
Quota item checks Meter register bit 1 value to determine state:
SPEC MASK
PASS 2 (10b) 2 (10b)
BLOCK 0 (00b) 2 (10b)
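As a standalone illustration (not part of this patch), here is a minimal sketch of the mapping above; the helper name is hypothetical and only shows that bit 1 of (HW color + 1) selects PASS vs. BLOCK:
#include <stdint.h>
#include <stdio.h>
/* Hypothetical helper, not PMD code: reproduces the table above.
 * The PMD implicitly increments the HW meter color, then the quota
 * item matches bit 1 of the result: RED(0)+1 = 01b -> BLOCK,
 * YELLOW(1)+1 = 10b and GREEN(2)+1 = 11b -> PASS.
 */
static const char *
quota_state_from_hw_color(uint32_t hw_color)
{
	uint32_t reg = hw_color + 1; /* implicit increment by the PMD */

	return (reg & 0x2) ? "PASS" : "BLOCK";
}
int
main(void)
{
	uint32_t color;

	for (color = 0; color < 3; color++)
		printf("HW color %u -> %s\n", color,
		       quota_state_from_hw_color(color));
	return 0;
}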
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 61 +++++++++++++++++++++++++++
1 file changed, 61 insertions(+)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c96..40ffb02be0 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -19,6 +19,9 @@
#define STE_UDP 0x2
#define STE_ICMP 0x3
+#define MLX5DR_DEFINER_QUOTA_BLOCK 0
+#define MLX5DR_DEFINER_QUOTA_PASS 2
+
/* Setter function based on bit offset and mask, for 32bit DW*/
#define _DR_SET_32(p, v, byte_off, bit_off, mask) \
do { \
@@ -1134,6 +1137,60 @@ mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd,
return 0;
}
+static void
+mlx5dr_definer_quota_set(struct mlx5dr_definer_fc *fc,
+ const void *item_data, uint8_t *tag)
+{
+ /**
+ * MLX5 PMD implements QUOTA with Meter object.
+ * PMD Quota action translation implicitly increments
+ * Meter register value after HW assigns it.
+ * Meter register values are:
+ * HW QUOTA(HW+1) QUOTA state
+ * RED 0 1 (01b) BLOCK
+ * YELLOW 1 2 (10b) PASS
+ * GREEN 2 3 (11b) PASS
+ *
+ * Quota item checks Meter register bit 1 value to determine state:
+ * SPEC MASK
+ * PASS 2 (10b) 2 (10b)
+ * BLOCK 0 (00b) 2 (10b)
+ *
+ * item_data is NULL when template quota item is non-masked:
+ * .. / quota / ..
+ */
+
+ const struct rte_flow_item_quota *quota = item_data;
+ uint32_t val;
+
+ if (quota && (quota->state == RTE_FLOW_QUOTA_STATE_BLOCK))
+ val = MLX5DR_DEFINER_QUOTA_BLOCK;
+ else
+ val = MLX5DR_DEFINER_QUOTA_PASS;
+
+ DR_SET(tag, val, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
+static int
+mlx5dr_definer_conv_item_quota(struct mlx5dr_definer_conv_data *cd,
+ __rte_unused struct rte_flow_item *item,
+ int item_idx)
+{
+ int mtr_reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ struct mlx5dr_definer_fc *fc;
+
+ if (mtr_reg < 0)
+ return EINVAL;
+
+ fc = mlx5dr_definer_get_register_fc(cd, mtr_reg);
+ if (!fc)
+ return rte_errno;
+
+ fc->tag_set = &mlx5dr_definer_quota_set;
+ fc->item_idx = item_idx;
+ return 0;
+}
+
static int
mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
@@ -1581,6 +1638,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
break;
+ case RTE_FLOW_ITEM_TYPE_QUOTA:
+ ret = mlx5dr_definer_conv_item_quota(&cd, items, i);
+ item_flags |= MLX5_FLOW_ITEM_QUOTA;
+ break;
default:
DR_LOG(ERR, "Unsupported item type %d", items->type);
rte_errno = ENOTSUP;
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify
2023-01-18 12:55 [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
` (4 preceding siblings ...)
2023-01-18 12:55 ` [PATCH 5/5] mlx5dr: Definer, translate RTE quota item Gregory Etelson
@ 2023-03-08 2:58 ` Suanming Mou
2023-03-08 17:01 ` [PATCH v2 " Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
7 siblings, 0 replies; 20+ messages in thread
From: Suanming Mou @ 2023-03-08 2:58 UTC (permalink / raw)
To: Gregory Etelson, dev; +Cc: Gregory Etelson, Matan Azrad, Raslan Darawsheh
Hi Gregory,
The code looks good to me. But I assume doc update is missing here. Can you please update the relevant doc and release notes?
BR,
Suanming Mou
> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Wednesday, January 18, 2023 8:56 PM
> To: dev@dpdk.org
> Cc: Gregory Etelson <getelson@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify
>
> Add indirect quota flow action.
> Add match on quota flow item.
>
> Gregory Etelson (5):
> net/mlx5: update query fields in async job structure
> net/mlx5: remove code duplication
> common/mlx5: update MTR ASO definitions
> net/mlx5: add indirect QUOTA create/query/modify
> mlx5dr: Definer, translate RTE quota item
>
> drivers/common/mlx5/mlx5_prm.h | 4 +
> drivers/net/mlx5/hws/mlx5dr_definer.c | 61 +++
> drivers/net/mlx5/meson.build | 1 +
> drivers/net/mlx5/mlx5.h | 88 +++-
> drivers/net/mlx5/mlx5_flow.c | 62 +++
> drivers/net/mlx5/mlx5_flow.h | 20 +-
> drivers/net/mlx5/mlx5_flow_aso.c | 10 +-
> drivers/net/mlx5/mlx5_flow_hw.c | 527 +++++++++++++------
> drivers/net/mlx5/mlx5_flow_quota.c | 726 ++++++++++++++++++++++++++
> 9 files changed, 1318 insertions(+), 181 deletions(-) create mode 100644
> drivers/net/mlx5/mlx5_flow_quota.c
>
> --
> 2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v2 0/5] net/mlx5: add indirect QUOTA create/query/modify
2023-01-18 12:55 [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
` (5 preceding siblings ...)
2023-03-08 2:58 ` [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify Suanming Mou
@ 2023-03-08 17:01 ` Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
` (4 more replies)
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
7 siblings, 5 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-03-08 17:01 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland
Add indirect quota flow action.
Add match on quota flow item.
v2: rebase to the latest main branch.
Gregory Etelson (5):
net/mlx5: update query fields in async job structure
net/mlx5: remove code duplication
common/mlx5: update MTR ASO definitions
net/mlx5: add indirect QUOTA create/query/modify
mlx5dr: Definer, translate RTE quota item
drivers/common/mlx5/mlx5_prm.h | 4 +
drivers/net/mlx5/hws/mlx5dr_definer.c | 61 +++
drivers/net/mlx5/meson.build | 1 +
drivers/net/mlx5/mlx5.h | 88 ++++-
drivers/net/mlx5/mlx5_flow.c | 62 +++
drivers/net/mlx5/mlx5_flow.h | 20 +-
drivers/net/mlx5/mlx5_flow_aso.c | 10 +-
drivers/net/mlx5/mlx5_flow_hw.c | 527 ++++++++++++++++++--------
8 files changed, 592 insertions(+), 181 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v2 1/5] net/mlx5: update query fields in async job structure
2023-03-08 17:01 ` [PATCH v2 " Gregory Etelson
@ 2023-03-08 17:01 ` Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 2/5] net/mlx5: remove code duplication Gregory Etelson
` (3 subsequent siblings)
4 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-03-08 17:01 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
Query fields defined in `mlx5_hw_q_job` target CT type only.
The patch updates `mlx5_hw_q_job` for other query types as well.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 10 +++++-----
drivers/net/mlx5/mlx5_flow_aso.c | 2 +-
drivers/net/mlx5/mlx5_flow_hw.c | 6 +++---
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a766fb408e..aa956ec1b7 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -366,11 +366,11 @@ struct mlx5_hw_q_job {
struct rte_flow_item *items;
union {
struct {
- /* Pointer to ct query user memory. */
- struct rte_flow_action_conntrack *profile;
- /* Pointer to ct ASO query out memory. */
- void *out_data;
- } __rte_packed;
+ /* User memory for query output */
+ void *user;
+ /* Data extracted from hardware */
+ void *hw;
+ } __rte_packed query;
struct rte_flow_item_ethdev port_spec;
struct rte_flow_item_tag tag_spec;
} __rte_packed;
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 29bd7ce9e8..0eb91c570f 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -1389,7 +1389,7 @@ mlx5_aso_ct_sq_query_single(struct mlx5_dev_ctx_shared *sh,
struct mlx5_hw_q_job *job = (struct mlx5_hw_q_job *)user_data;
sq->elts[wqe_idx].ct = user_data;
- job->out_data = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
+ job->query.hw = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
} else {
sq->elts[wqe_idx].query_data = data;
sq->elts[wqe_idx].ct = ct;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a9c7045a3e..cd951019de 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2738,8 +2738,8 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
idx = MLX5_ACTION_CTX_CT_GET_IDX
((uint32_t)(uintptr_t)job->action);
aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
- mlx5_aso_ct_obj_analyze(job->profile,
- job->out_data);
+ mlx5_aso_ct_obj_analyze(job->query.user,
+ job->query.hw);
aso_ct->state = ASO_CONNTRACK_READY;
}
}
@@ -8275,7 +8275,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_CT:
aso = true;
if (job)
- job->profile = (struct rte_flow_action_conntrack *)data;
+ job->query.user = data;
ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
job, push, error);
break;
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v2 2/5] net/mlx5: remove code duplication
2023-03-08 17:01 ` [PATCH v2 " Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
@ 2023-03-08 17:01 ` Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 3/5] common/mlx5: update MTR ASO definitions Gregory Etelson
` (2 subsequent siblings)
4 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-03-08 17:01 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
Replace duplicated code with dedicated functions
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_flow_hw.c | 182 ++++++++++++++++----------------
2 files changed, 95 insertions(+), 93 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index aa956ec1b7..a4ed61e257 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -344,11 +344,11 @@ struct mlx5_lb_ctx {
};
/* HW steering queue job descriptor type. */
-enum {
+enum mlx5_hw_job_type {
MLX5_HW_Q_JOB_TYPE_CREATE, /* Flow create job type. */
MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
- MLX5_HW_Q_JOB_TYPE_UPDATE,
- MLX5_HW_Q_JOB_TYPE_QUERY,
+ MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
+ MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
};
#define MLX5_HW_MAX_ITEMS (16)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index cd951019de..8a5e8941fd 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7626,6 +7626,67 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
return 0;
}
+static __rte_always_inline bool
+flow_hw_action_push(const struct rte_flow_op_attr *attr)
+{
+ return attr ? !attr->postpone : true;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
+{
+ return priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+}
+
+static __rte_always_inline void
+flow_hw_job_put(struct mlx5_priv *priv, uint32_t queue)
+{
+ priv->hw_q[queue].job_idx++;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ void *user_data, void *query_data,
+ enum mlx5_hw_job_type type,
+ struct rte_flow_error *error)
+{
+ struct mlx5_hw_q_job *job;
+
+ MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
+ if (unlikely(!priv->hw_q[queue].job_idx)) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+ "Action destroy failed due to queue full.");
+ return NULL;
+ }
+ job = flow_hw_job_get(priv, queue);
+ job->type = type;
+ job->action = handle;
+ job->user_data = user_data;
+ job->query.user = query_data;
+ return job;
+}
+
+static __rte_always_inline void
+flow_hw_action_finalize(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_hw_q_job *job,
+ bool push, bool aso, bool status)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ if (likely(status)) {
+ if (push)
+ __flow_hw_push_action(dev, queue);
+ if (!aso)
+ rte_ring_enqueue(push ?
+ priv->hw_q[queue].indir_cq :
+ priv->hw_q[queue].indir_iq,
+ job);
+ } else {
+ flow_hw_job_put(priv, queue);
+ }
+}
+
/**
* Create shared action.
*
@@ -7663,21 +7724,15 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
cnt_id_t cnt_id;
uint32_t mtr_id;
uint32_t age_idx;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx)) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Flow queue full.");
+ job = flow_hw_action_job_init(priv, queue, NULL, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
+ error);
+ if (!job)
return NULL;
- }
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
- job->user_data = user_data;
- push = !attr->postpone;
}
switch (action->type) {
case RTE_FLOW_ACTION_TYPE_AGE:
@@ -7740,17 +7795,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
break;
}
if (job) {
- if (!handle) {
- priv->hw_q[queue].job_idx++;
- return NULL;
- }
job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return handle;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
+ flow_hw_action_finalize(dev, queue, job, push, aso,
+ handle != NULL);
}
return handle;
}
@@ -7798,19 +7845,15 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
int ret = 0;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action update failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_UPDATE;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -7873,19 +7916,8 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return 0;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return ret;
}
@@ -7924,20 +7956,16 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
struct mlx5_hw_q_job *job = NULL;
struct mlx5_aso_mtr *aso_mtr;
struct mlx5_flow_meter_info *fm;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
int ret = 0;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action destroy failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8000,19 +8028,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return ret;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return ret;
}
@@ -8251,19 +8268,15 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
int ret;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action destroy failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_QUERY;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ data, MLX5_HW_Q_JOB_TYPE_QUERY,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8286,19 +8299,8 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return ret;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return 0;
}
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v2 3/5] common/mlx5: update MTR ASO definitions
2023-03-08 17:01 ` [PATCH v2 " Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 2/5] net/mlx5: remove code duplication Gregory Etelson
@ 2023-03-08 17:01 ` Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 4/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 5/5] mlx5dr: Definer, translate RTE quota item Gregory Etelson
4 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-03-08 17:01 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
Update MTR ASO definitions for QUOTA flow action.
Quota flow action requires WQE READ capability and access to
token fields.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 75af636f59..525364d5e4 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3917,6 +3917,8 @@ enum mlx5_aso_op {
ASO_OPER_LOGICAL_OR = 0x1,
};
+#define MLX5_ASO_CSEG_READ_ENABLE 1
+
/* ASO WQE CTRL segment. */
struct mlx5_aso_cseg {
uint32_t va_h;
@@ -3931,6 +3933,8 @@ struct mlx5_aso_cseg {
uint64_t data_mask;
} __rte_packed;
+#define MLX5_MTR_MAX_TOKEN_VALUE INT32_MAX
+
/* A meter data segment - 2 per ASO WQE. */
struct mlx5_aso_mtr_dseg {
uint32_t v_bo_sc_bbog_mm;
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v2 4/5] net/mlx5: add indirect QUOTA create/query/modify
2023-03-08 17:01 ` [PATCH v2 " Gregory Etelson
` (2 preceding siblings ...)
2023-03-08 17:01 ` [PATCH v2 3/5] common/mlx5: update MTR ASO definitions Gregory Etelson
@ 2023-03-08 17:01 ` Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 5/5] mlx5dr: Definer, translate RTE quota item Gregory Etelson
4 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-03-08 17:01 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
Implement HWS functions for indirect QUOTA creation, modification and
query.
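For context, a hedged application-side sketch (not part of this patch) of creating the new indirect QUOTA action on an async queue. The quota action structure and enum names (struct rte_flow_action_quota, RTE_FLOW_QUOTA_MODE_L3) are assumed from the rte_flow quota API; error handling is trimmed:
#include <rte_flow.h>
/* Sketch only: the quota action configuration fields are assumptions;
 * the completion must still be collected later with rte_flow_pull().
 */
static struct rte_flow_action_handle *
create_quota_handle(uint16_t port_id, uint32_t queue_id)
{
	const struct rte_flow_op_attr op_attr = { .postpone = 0 };
	const struct rte_flow_indir_action_conf indir_conf = { .ingress = 1 };
	const struct rte_flow_action_quota quota_conf = {
		.mode = RTE_FLOW_QUOTA_MODE_L3, /* meter L3 (IP) bytes */
		.quota = 1 << 20,               /* initial token budget */
	};
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_QUOTA,
		.conf = &quota_conf,
	};
	struct rte_flow_error error;

	return rte_flow_async_action_handle_create(port_id, queue_id, &op_attr,
						   &indir_conf, &action,
						   NULL, &error);
}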
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
drivers/net/mlx5/meson.build | 1 +
drivers/net/mlx5/mlx5.h | 72 +++++++
drivers/net/mlx5/mlx5_flow.c | 62 ++++++
drivers/net/mlx5/mlx5_flow.h | 20 +-
drivers/net/mlx5/mlx5_flow_aso.c | 8 +-
drivers/net/mlx5/mlx5_flow_hw.c | 343 ++++++++++++++++++++++++-------
6 files changed, 425 insertions(+), 81 deletions(-)
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index abd507bd88..323c381d2b 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -23,6 +23,7 @@ sources = files(
'mlx5_flow_dv.c',
'mlx5_flow_aso.c',
'mlx5_flow_flex.c',
+ 'mlx5_flow_quota.c',
'mlx5_mac.c',
'mlx5_rss.c',
'mlx5_rx.c',
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a4ed61e257..6e6f2f53eb 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -46,6 +46,14 @@
#define MLX5_HW_INV_QUEUE UINT32_MAX
+/*
+ * The default ipool threshold value indicates which per_core_cache
+ * value to set.
+ */
+#define MLX5_HW_IPOOL_SIZE_THRESHOLD (1 << 19)
+/* The default min local cache size. */
+#define MLX5_HW_IPOOL_CACHE_MIN (1 << 9)
+
/*
* Number of modification commands.
* The maximal actions amount in FW is some constant, and it is 16 in the
@@ -349,6 +357,7 @@ enum mlx5_hw_job_type {
MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
+ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
};
#define MLX5_HW_MAX_ITEMS (16)
@@ -601,6 +610,7 @@ struct mlx5_aso_sq_elem {
char *query_data;
};
void *user_data;
+ struct mlx5_quota *quota_obj;
};
};
@@ -1658,6 +1668,33 @@ struct mlx5_hw_ctrl_flow {
struct mlx5_flow_hw_ctrl_rx;
+enum mlx5_quota_state {
+ MLX5_QUOTA_STATE_FREE, /* quota not in use */
+ MLX5_QUOTA_STATE_READY, /* quota is ready */
+ MLX5_QUOTA_STATE_WAIT /* quota waits WR completion */
+};
+
+struct mlx5_quota {
+ uint8_t state; /* object state */
+ uint8_t mode; /* metering mode */
+ /**
+ * Keep track of application update types.
+ * PMD does not allow 2 consecutive ADD updates.
+ */
+ enum rte_flow_update_quota_op last_update;
+};
+
+/* Bulk management structure for flow quota. */
+struct mlx5_quota_ctx {
+ uint32_t nb_quotas; /* Total number of quota objects */
+ struct mlx5dr_action *dr_action; /* HWS action */
+ struct mlx5_devx_obj *devx_obj; /* DEVX ranged object. */
+ struct mlx5_pmd_mr mr; /* MR for READ from MTR ASO */
+ struct mlx5_aso_mtr_dseg **read_buf; /* Buffers for READ */
+ struct mlx5_aso_sq *sq; /* SQs for sync/async ACCESS_ASO WRs */
+ struct mlx5_indexed_pool *quota_ipool; /* Manage quota objects */
+};
+
struct mlx5_priv {
struct rte_eth_dev_data *dev_data; /* Pointer to device data. */
struct mlx5_dev_ctx_shared *sh; /* Shared device context. */
@@ -1747,6 +1784,7 @@ struct mlx5_priv {
struct mlx5_flow_meter_policy *mtr_policy_arr; /* Policy array. */
struct mlx5_l3t_tbl *mtr_idx_tbl; /* Meter index lookup table. */
struct mlx5_mtr_bulk mtr_bulk; /* Meter index mapping for HWS */
+ struct mlx5_quota_ctx quota_ctx; /* Quota index mapping for HWS */
uint8_t skip_default_rss_reta; /* Skip configuration of default reta. */
uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. */
struct mlx5_mp_id mp_id; /* ID of a multi-process process */
@@ -2242,6 +2280,15 @@ int mlx5_aso_ct_queue_init(struct mlx5_dev_ctx_shared *sh,
uint32_t nb_queues);
int mlx5_aso_ct_queue_uninit(struct mlx5_dev_ctx_shared *sh,
struct mlx5_aso_ct_pools_mng *ct_mng);
+int
+mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
+ void *uar, uint16_t log_desc_n);
+void
+mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq);
/* mlx5_flow_flex.c */
@@ -2273,6 +2320,31 @@ struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx,
void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
struct mlx5_list_entry *entry);
+int
+mlx5_flow_quota_destroy(struct rte_eth_dev *dev);
+int
+mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas);
+struct rte_flow_action_handle *
+mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_quota *conf,
+ struct mlx5_hw_q_job *job, bool push,
+ struct rte_flow_error *error);
+void
+mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_hw_q_job *job);
+int
+mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue,
+ struct rte_flow_action_handle *handle,
+ const struct rte_flow_action *update,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error);
+int mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error);
+
int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev);
void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a6a426caf7..682f942dc4 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1075,6 +1075,20 @@ mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
void *data,
void *user_data,
struct rte_flow_error *error);
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error);
+static int
+mlx5_flow_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error);
static const struct rte_flow_ops mlx5_flow_ops = {
.validate = mlx5_flow_validate,
@@ -1090,6 +1104,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.action_handle_destroy = mlx5_action_handle_destroy,
.action_handle_update = mlx5_action_handle_update,
.action_handle_query = mlx5_action_handle_query,
+ .action_handle_query_update = mlx5_action_handle_query_update,
.tunnel_decap_set = mlx5_flow_tunnel_decap_set,
.tunnel_match = mlx5_flow_tunnel_match,
.tunnel_action_decap_release = mlx5_flow_tunnel_action_release,
@@ -1112,6 +1127,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.push = mlx5_flow_push,
.async_action_handle_create = mlx5_flow_async_action_handle_create,
.async_action_handle_update = mlx5_flow_async_action_handle_update,
+ .async_action_handle_query_update =
+ mlx5_flow_async_action_handle_query_update,
.async_action_handle_query = mlx5_flow_async_action_handle_query,
.async_action_handle_destroy = mlx5_flow_async_action_handle_destroy,
};
@@ -9092,6 +9109,27 @@ mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
update, user_data, error);
}
+static int
+mlx5_flow_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error)
+{
+ const struct mlx5_flow_driver_ops *fops =
+ flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+ if (!fops || !fops->async_action_query_update)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "async query_update not supported");
+ return fops->async_action_query_update
+ (dev, queue_id, op_attr, action_handle,
+ update, query, qu_mode, user_data, error);
+}
+
/**
* Query shared action.
*
@@ -10230,6 +10268,30 @@ mlx5_action_handle_query(struct rte_eth_dev *dev,
return flow_drv_action_query(dev, handle, data, fops, error);
}
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_attr attr = { .transfer = 0 };
+ enum mlx5_flow_drv_type drv_type = flow_get_drv_type(dev, &attr);
+ const struct mlx5_flow_driver_ops *fops;
+
+ if (drv_type == MLX5_FLOW_TYPE_MIN || drv_type == MLX5_FLOW_TYPE_MAX)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "invalid driver type");
+ fops = flow_get_drv_ops(drv_type);
+ if (!fops || !fops->action_query_update)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no query_update handler");
+ return fops->action_query_update(dev, handle, update,
+ query, qu_mode, error);
+}
+
/**
* Destroy all indirect actions (shared RSS).
*
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 4bef2296b8..3ba178bd6c 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -70,6 +70,7 @@ enum {
MLX5_INDIRECT_ACTION_TYPE_COUNT,
MLX5_INDIRECT_ACTION_TYPE_CT,
MLX5_INDIRECT_ACTION_TYPE_METER_MARK,
+ MLX5_INDIRECT_ACTION_TYPE_QUOTA,
};
/* Now, the maximal ports will be supported is 16, action number is 32M. */
@@ -218,6 +219,8 @@ enum mlx5_feature_name {
/* Meter color item */
#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
+#define MLX5_FLOW_ITEM_QUOTA (UINT64_C(1) << 45)
+
/* IPv6 routing extension item */
#define MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT (UINT64_C(1) << 45)
@@ -307,6 +310,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_SEND_TO_KERNEL (1ull << 42)
#define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43)
#define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
+#define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
#define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1703,6 +1707,12 @@ typedef int (*mlx5_flow_action_query_t)
const struct rte_flow_action_handle *action,
void *data,
struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_query_update_t)
+ (struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *data,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error);
typedef int (*mlx5_flow_sync_domain_t)
(struct rte_eth_dev *dev,
uint32_t domains,
@@ -1849,7 +1859,13 @@ typedef int (*mlx5_flow_async_action_handle_update_t)
const void *update,
void *user_data,
struct rte_flow_error *error);
-
+typedef int (*mlx5_flow_async_action_handle_query_update_t)
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *data,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error);
typedef int (*mlx5_flow_async_action_handle_query_t)
(struct rte_eth_dev *dev,
uint32_t queue,
@@ -1900,6 +1916,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_action_destroy_t action_destroy;
mlx5_flow_action_update_t action_update;
mlx5_flow_action_query_t action_query;
+ mlx5_flow_action_query_update_t action_query_update;
mlx5_flow_sync_domain_t sync_domain;
mlx5_flow_discover_priorities_t discover_priorities;
mlx5_flow_item_create_t item_create;
@@ -1921,6 +1938,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_push_t push;
mlx5_flow_async_action_handle_create_t async_action_create;
mlx5_flow_async_action_handle_update_t async_action_update;
+ mlx5_flow_async_action_handle_query_update_t async_action_query_update;
mlx5_flow_async_action_handle_query_t async_action_query;
mlx5_flow_async_action_handle_destroy_t async_action_destroy;
};
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 0eb91c570f..3c08da0614 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -74,7 +74,7 @@ mlx5_aso_reg_mr(struct mlx5_common_device *cdev, size_t length,
* @param[in] sq
* ASO SQ to destroy.
*/
-static void
+void
mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)
{
mlx5_devx_sq_destroy(&sq->sq_obj);
@@ -148,7 +148,7 @@ mlx5_aso_age_init_sq(struct mlx5_aso_sq *sq)
* @param[in] sq
* ASO SQ to initialize.
*/
-static void
+void
mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq)
{
volatile struct mlx5_aso_wqe *restrict wqe;
@@ -219,7 +219,7 @@ mlx5_aso_ct_init_sq(struct mlx5_aso_sq *sq)
* @return
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static int
+int
mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
void *uar, uint16_t log_desc_n)
{
@@ -504,7 +504,7 @@ mlx5_aso_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe)
* @param[in] sq
* ASO SQ to use.
*/
-static void
+void
mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq)
{
struct mlx5_aso_cq *cq = &sq->cq;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 8a5e8941fd..0343d0a891 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -70,6 +70,9 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
struct mlx5_action_construct_data *act_data,
const struct mlx5_hw_actions *hw_acts,
const struct rte_flow_action *action);
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+ struct mlx5dr_rule_action *rule_act, uint32_t qid);
static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
@@ -797,6 +800,9 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev,
action_src, action_dst, idx))
return -1;
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv, &acts->rule_acts[action_dst], idx);
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
break;
@@ -1840,6 +1846,16 @@ flow_hw_shared_action_get(struct rte_eth_dev *dev,
return -1;
}
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+ struct mlx5dr_rule_action *rule_act, uint32_t qid)
+{
+ rule_act->action = priv->quota_ctx.dr_action;
+ rule_act->aso_meter.offset = qid - 1;
+ rule_act->aso_meter.init_color =
+ MLX5DR_ACTION_ASO_METER_COLOR_GREEN;
+}
+
/**
* Construct shared indirect action.
*
@@ -1963,6 +1979,9 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
(enum mlx5dr_action_aso_meter_color)
rte_col_2_mlx5_col(aso_mtr->init_color);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv, rule_act, idx);
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
break;
@@ -2269,6 +2288,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
rule_acts[act_data->action_dst].action =
priv->hw_vport[port_action->port_id];
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv,
+ rule_acts + act_data->action_dst,
+ act_data->shared_meter.id);
+ break;
case RTE_FLOW_ACTION_TYPE_METER:
meter = action->conf;
mtr_id = meter->mtr_id;
@@ -2710,11 +2734,18 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
if (ret_comp < n_res && priv->hws_ctpool)
ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
&res[ret_comp], n_res - ret_comp);
+ if (ret_comp < n_res && priv->quota_ctx.sq)
+ ret_comp += mlx5_aso_pull_completion(&priv->quota_ctx.sq[queue],
+ &res[ret_comp],
+ n_res - ret_comp);
for (i = 0; i < ret_comp; i++) {
job = (struct mlx5_hw_q_job *)res[i].user_data;
/* Restore user data. */
res[i].user_data = job->user_data;
- if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+ if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) ==
+ MLX5_INDIRECT_ACTION_TYPE_QUOTA) {
+ mlx5_quota_async_completion(dev, queue, job);
+ } else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
@@ -3695,6 +3726,10 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
return ret;
*action_flags |= MLX5_FLOW_ACTION_INDIRECT_AGE;
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ /* TODO: add proper quota verification */
+ *action_flags |= MLX5_FLOW_ACTION_QUOTA;
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type: %d", type);
return rte_flow_error_set(error, ENOTSUP,
@@ -3732,19 +3767,17 @@ flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
}
static inline uint16_t
-flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
- const struct rte_flow_action masks[],
- const struct rte_flow_action *mf_action,
- const struct rte_flow_action *mf_mask,
- struct rte_flow_action *new_actions,
- struct rte_flow_action *new_masks,
- uint64_t flags, uint32_t act_num)
+flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
+ struct rte_flow_action masks[],
+ const struct rte_flow_action *mf_actions,
+ const struct rte_flow_action *mf_masks,
+ uint64_t flags, uint32_t act_num,
+ uint32_t mf_num)
{
uint32_t i, tail;
MLX5_ASSERT(actions && masks);
- MLX5_ASSERT(new_actions && new_masks);
- MLX5_ASSERT(mf_action && mf_mask);
+ MLX5_ASSERT(mf_num > 0);
if (flags & MLX5_FLOW_ACTION_MODIFY_FIELD) {
/*
* Application action template already has Modify Field.
@@ -3795,12 +3828,10 @@ flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
i = 0;
insert:
tail = act_num - i; /* num action to move */
- memcpy(new_actions, actions, sizeof(actions[0]) * i);
- new_actions[i] = *mf_action;
- memcpy(new_actions + i + 1, actions + i, sizeof(actions[0]) * tail);
- memcpy(new_masks, masks, sizeof(masks[0]) * i);
- new_masks[i] = *mf_mask;
- memcpy(new_masks + i + 1, masks + i, sizeof(masks[0]) * tail);
+ memmove(actions + i + mf_num, actions + i, sizeof(actions[0]) * tail);
+ memcpy(actions + i, mf_actions, sizeof(actions[0]) * mf_num);
+ memmove(masks + i + mf_num, masks + i, sizeof(masks[0]) * tail);
+ memcpy(masks + i, mf_masks, sizeof(masks[0]) * mf_num);
return i;
}
@@ -4110,6 +4141,7 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT;
*curr_off = *curr_off + 1;
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
case RTE_FLOW_ACTION_TYPE_METER_MARK:
at->actions_off[action_src] = *curr_off;
action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_METER;
@@ -4339,6 +4371,96 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
&modify_action);
}
+static __rte_always_inline void
+flow_hw_actions_template_replace_container(const
+ struct rte_flow_action *actions,
+ const
+ struct rte_flow_action *masks,
+ struct rte_flow_action *new_actions,
+ struct rte_flow_action *new_masks,
+ struct rte_flow_action **ra,
+ struct rte_flow_action **rm,
+ uint32_t act_num)
+{
+ memcpy(new_actions, actions, sizeof(actions[0]) * act_num);
+ memcpy(new_masks, masks, sizeof(masks[0]) * act_num);
+ *ra = (void *)(uintptr_t)new_actions;
+ *rm = (void *)(uintptr_t)new_masks;
+}
+
+#define RX_META_COPY_ACTION ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field){ \
+ .operation = RTE_FLOW_MODIFY_SET, \
+ .dst = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = REG_B, \
+ }, \
+ .src = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = REG_C_1, \
+ }, \
+ .width = 32, \
+ } \
+})
+
+#define RX_META_COPY_MASK ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field){ \
+ .operation = RTE_FLOW_MODIFY_SET, \
+ .dst = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .src = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .width = UINT32_MAX, \
+ } \
+})
+
+#define QUOTA_COLOR_INC_ACTION ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field) { \
+ .operation = RTE_FLOW_MODIFY_ADD, \
+ .dst = { \
+ .field = RTE_FLOW_FIELD_METER_COLOR, \
+ .level = 0, .offset = 0 \
+ }, \
+ .src = { \
+ .field = RTE_FLOW_FIELD_VALUE, \
+ .level = 1, \
+ .offset = 0, \
+ }, \
+ .width = 2 \
+ } \
+})
+
+#define QUOTA_COLOR_INC_MASK ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field) { \
+ .operation = RTE_FLOW_MODIFY_ADD, \
+ .dst = { \
+ .field = RTE_FLOW_FIELD_METER_COLOR, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .src = { \
+ .field = RTE_FLOW_FIELD_VALUE, \
+ .level = 3, \
+ .offset = 0 \
+ }, \
+ .width = UINT32_MAX \
+ } \
+})
+
/**
* Create flow action template.
*
@@ -4377,40 +4499,9 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
int set_vlan_vid_ix = -1;
struct rte_flow_action_modify_field set_vlan_vid_spec = {0, };
struct rte_flow_action_modify_field set_vlan_vid_mask = {0, };
- const struct rte_flow_action_modify_field rx_mreg = {
- .operation = RTE_FLOW_MODIFY_SET,
- .dst = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = REG_B,
- },
- .src = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = REG_C_1,
- },
- .width = 32,
- };
- const struct rte_flow_action_modify_field rx_mreg_mask = {
- .operation = RTE_FLOW_MODIFY_SET,
- .dst = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
- .offset = UINT32_MAX,
- },
- .src = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
- .offset = UINT32_MAX,
- },
- .width = UINT32_MAX,
- };
- const struct rte_flow_action rx_cpy = {
- .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
- .conf = &rx_mreg,
- };
- const struct rte_flow_action rx_cpy_mask = {
- .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
- .conf = &rx_mreg_mask,
- };
+ struct rte_flow_action mf_actions[MLX5_HW_MAX_ACTS];
+ struct rte_flow_action mf_masks[MLX5_HW_MAX_ACTS];
+ uint32_t expand_mf_num = 0;
if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks,
&action_flags, error))
@@ -4440,44 +4531,57 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ACTION, NULL, "Too many actions");
return NULL;
}
+ if (set_vlan_vid_ix != -1) {
+ /* If temporary action buffer was not used, copy template actions to it */
+ if (ra == actions)
+ flow_hw_actions_template_replace_container(actions,
+ masks,
+ tmp_action,
+ tmp_mask,
+ &ra, &rm,
+ act_num);
+ flow_hw_set_vlan_vid(dev, ra, rm,
+ &set_vlan_vid_spec, &set_vlan_vid_mask,
+ set_vlan_vid_ix);
+ action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
+ }
+ if (action_flags & MLX5_FLOW_ACTION_QUOTA) {
+ mf_actions[expand_mf_num] = QUOTA_COLOR_INC_ACTION;
+ mf_masks[expand_mf_num] = QUOTA_COLOR_INC_MASK;
+ expand_mf_num++;
+ }
if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
priv->sh->config.dv_esw_en &&
(action_flags & (MLX5_FLOW_ACTION_QUEUE | MLX5_FLOW_ACTION_RSS))) {
/* Insert META copy */
- if (act_num + 1 > MLX5_HW_MAX_ACTS) {
+ mf_actions[expand_mf_num] = RX_META_COPY_ACTION;
+ mf_masks[expand_mf_num] = RX_META_COPY_MASK;
+ expand_mf_num++;
+ }
+ if (expand_mf_num) {
+ if (act_num + expand_mf_num > MLX5_HW_MAX_ACTS) {
rte_flow_error_set(error, E2BIG,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL, "cannot expand: too many actions");
return NULL;
}
+ if (ra == actions)
+ flow_hw_actions_template_replace_container(actions,
+ masks,
+ tmp_action,
+ tmp_mask,
+ &ra, &rm,
+ act_num);
/* Application should make sure only one Q/RSS exist in one rule. */
- pos = flow_hw_template_expand_modify_field(actions, masks,
- &rx_cpy,
- &rx_cpy_mask,
- tmp_action, tmp_mask,
+ pos = flow_hw_template_expand_modify_field(ra, rm,
+ mf_actions,
+ mf_masks,
action_flags,
- act_num);
- ra = tmp_action;
- rm = tmp_mask;
- act_num++;
+ act_num,
+ expand_mf_num);
+ act_num += expand_mf_num;
action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
}
- if (set_vlan_vid_ix != -1) {
- /* If temporary action buffer was not used, copy template actions to it */
- if (ra == actions && rm == masks) {
- for (i = 0; i < act_num; ++i) {
- tmp_action[i] = actions[i];
- tmp_mask[i] = masks[i];
- if (actions[i].type == RTE_FLOW_ACTION_TYPE_END)
- break;
- }
- ra = tmp_action;
- rm = tmp_mask;
- }
- flow_hw_set_vlan_vid(dev, ra, rm,
- &set_vlan_vid_spec, &set_vlan_vid_mask,
- set_vlan_vid_ix);
- }
act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error);
if (act_len <= 0)
return NULL;
@@ -4740,6 +4844,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_ICMP:
case RTE_FLOW_ITEM_TYPE_ICMP6:
case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST:
+ case RTE_FLOW_ITEM_TYPE_QUOTA:
case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY:
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
@@ -7017,6 +7122,12 @@ flow_hw_configure(struct rte_eth_dev *dev,
"Failed to set up Rx control flow templates");
goto err;
}
+ /* Initialize quotas */
+ if (port_attr->nb_quotas) {
+ ret = mlx5_flow_quota_init(dev, port_attr->nb_quotas);
+ if (ret)
+ goto err;
+ }
/* Initialize meter library*/
if (port_attr->nb_meters || (host_priv && host_priv->hws_mpool))
if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 1, 1, nb_q_updated))
@@ -7116,6 +7227,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
priv->hws_cpool = NULL;
}
+ mlx5_flow_quota_destroy(dev);
flow_hw_free_vport_actions(priv);
for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
if (priv->hw_drop[i])
@@ -7213,6 +7325,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
flow_hw_ct_mng_destroy(dev, priv->ct_mng);
priv->ct_mng = NULL;
}
+ mlx5_flow_quota_destroy(dev);
for (i = 0; i < priv->nb_queue; i++) {
rte_ring_free(priv->hw_q[i].indir_iq);
rte_ring_free(priv->hw_q[i].indir_cq);
@@ -7618,6 +7731,8 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
return flow_hw_validate_action_meter_mark(dev, action, error);
case RTE_FLOW_ACTION_TYPE_RSS:
return flow_dv_action_validate(dev, conf, action, error);
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ return 0;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -7789,6 +7904,11 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
case RTE_FLOW_ACTION_TYPE_RSS:
handle = flow_dv_action_create(dev, conf, action, error);
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ aso = true;
+ handle = mlx5_quota_alloc(dev, queue, action->conf,
+ job, push, error);
+ break;
default:
rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
NULL, "action type not supported");
@@ -7909,6 +8029,11 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_RSS:
ret = flow_dv_action_update(dev, handle, update, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ aso = true;
+ ret = mlx5_quota_query_update(dev, queue, handle, update, NULL,
+ job, push, error);
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -8021,6 +8146,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_RSS:
ret = flow_dv_action_destroy(dev, handle, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -8292,6 +8419,11 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
job, push, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ aso = true;
+ ret = mlx5_quota_query(dev, queue, handle, data,
+ job, push, error);
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -8301,7 +8433,51 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
}
if (job)
flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
- return 0;
+ return ret;
+}
+
+static int
+flow_hw_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_op_attr *attr,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ bool push = flow_hw_action_push(attr);
+ bool aso = false;
+ struct mlx5_hw_q_job *job = NULL;
+ int ret = 0;
+
+ if (attr) {
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ query,
+ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY,
+ error);
+ if (!job)
+ return -rte_errno;
+ }
+ switch (MLX5_INDIRECT_ACTION_TYPE_GET(handle)) {
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ if (qu_mode != RTE_FLOW_QU_QUERY_FIRST) {
+ ret = rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL, "quota action must query before update");
+ break;
+ }
+ aso = true;
+ ret = mlx5_quota_query_update(dev, queue, handle,
+ update, query, job, push, error);
+ break;
+ default:
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, "update and query not supported");
+ }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
+ return ret;
}
static int
@@ -8313,6 +8489,19 @@ flow_hw_action_query(struct rte_eth_dev *dev,
handle, data, NULL, error);
}
+static int
+flow_hw_action_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error)
+{
+ return flow_hw_async_action_handle_query_update(dev, MLX5_HW_INV_QUEUE,
+ NULL, handle, update,
+ query, qu_mode, NULL,
+ error);
+}
+
/**
* Get aged-out flows of a given port on the given HWS flow queue.
*
@@ -8425,12 +8614,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
.async_action_create = flow_hw_action_handle_create,
.async_action_destroy = flow_hw_action_handle_destroy,
.async_action_update = flow_hw_action_handle_update,
+ .async_action_query_update = flow_hw_async_action_handle_query_update,
.async_action_query = flow_hw_action_handle_query,
.action_validate = flow_hw_action_validate,
.action_create = flow_hw_action_create,
.action_destroy = flow_hw_action_destroy,
.action_update = flow_hw_action_update,
.action_query = flow_hw_action_query,
+ .action_query_update = flow_hw_action_query_update,
.query = flow_hw_query,
.get_aged_flows = flow_hw_get_aged_flows,
.get_q_aged_flows = flow_hw_get_q_aged_flows,
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v2 5/5] mlx5dr: Definer, translate RTE quota item
2023-03-08 17:01 ` [PATCH v2 " Gregory Etelson
` (3 preceding siblings ...)
2023-03-08 17:01 ` [PATCH v2 4/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
@ 2023-03-08 17:01 ` Gregory Etelson
4 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-03-08 17:01 UTC (permalink / raw)
To: dev; +Cc: getelson, matan, rasland, Viacheslav Ovsiienko
MLX5 PMD implements QUOTA with Meter object.
PMD Quota action translation implicitly increments
Meter register value after HW assigns it.
Meter register values are:
         HW     QUOTA(HW+1)   QUOTA state
RED      0      1 (01b)       BLOCK
YELLOW   1      2 (10b)       PASS
GREEN    2      3 (11b)       PASS
Quota item checks Meter register bit 1 value to determine state:
         SPEC      MASK
PASS     2 (10b)   2 (10b)
BLOCK    0 (00b)   2 (10b)
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
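Note, not part of the patch: at the RTE level the translation above is driven
only by the quota item state. A minimal application-side sketch of matching
the PASS state could look like the snippet below (pattern-template attributes
and the rest of the pattern are omitted):

#include <rte_flow.h>

/* Illustrative sketch only: match packets whose quota state is PASS. */
static const struct rte_flow_item_quota quota_pass = {
	.state = RTE_FLOW_QUOTA_STATE_PASS,
};
static const struct rte_flow_item pattern[] = {
	{
		.type = RTE_FLOW_ITEM_TYPE_QUOTA,
		.spec = &quota_pass,
		/*
		 * Per mlx5dr_definer_quota_set() below, a quota item that is
		 * not masked in the pattern template is translated as PASS.
		 */
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};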
drivers/net/mlx5/hws/mlx5dr_definer.c | 63 +++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6374f9df33..dc9e50ee0f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -19,6 +19,9 @@
#define STE_UDP 0x2
#define STE_ICMP 0x3
+#define MLX5DR_DEFINER_QUOTA_BLOCK 0
+#define MLX5DR_DEFINER_QUOTA_PASS 2
+
/* Setter function based on bit offset and mask, for 32bit DW*/
#define _DR_SET_32(p, v, byte_off, bit_off, mask) \
do { \
@@ -1247,6 +1250,62 @@ mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd,
return 0;
}
+static void
+mlx5dr_definer_quota_set(struct mlx5dr_definer_fc *fc,
+ const void *item_data, uint8_t *tag)
+{
+ /**
+ * MLX5 PMD implements QUOTA with Meter object.
+ * PMD Quota action translation implicitly increments
+ * Meter register value after HW assigns it.
+ * Meter register values are:
+ *          HW     QUOTA(HW+1)   QUOTA state
+ * RED      0      1 (01b)       BLOCK
+ * YELLOW   1      2 (10b)       PASS
+ * GREEN    2      3 (11b)       PASS
+ *
+ * Quota item checks Meter register bit 1 value to determine state:
+ *          SPEC      MASK
+ * PASS     2 (10b)   2 (10b)
+ * BLOCK    0 (00b)   2 (10b)
+ *
+ * item_data is NULL when template quota item is non-masked:
+ * .. / quota / ..
+ */
+
+ const struct rte_flow_item_quota *quota = item_data;
+ uint32_t val;
+
+ if (quota && quota->state == RTE_FLOW_QUOTA_STATE_BLOCK)
+ val = MLX5DR_DEFINER_QUOTA_BLOCK;
+ else
+ val = MLX5DR_DEFINER_QUOTA_PASS;
+
+ DR_SET(tag, val, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
+static int
+mlx5dr_definer_conv_item_quota(struct mlx5dr_definer_conv_data *cd,
+ __rte_unused struct rte_flow_item *item,
+ int item_idx)
+{
+ int mtr_reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ struct mlx5dr_definer_fc *fc;
+
+ if (mtr_reg < 0) {
+ rte_errno = EINVAL;
+ return rte_errno;
+ }
+
+ fc = mlx5dr_definer_get_register_fc(cd, mtr_reg);
+ if (!fc)
+ return rte_errno;
+
+ fc->tag_set = &mlx5dr_definer_quota_set;
+ fc->item_idx = item_idx;
+ return 0;
+}
+
static int
mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
@@ -1904,6 +1963,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
break;
+ case RTE_FLOW_ITEM_TYPE_QUOTA:
+ ret = mlx5dr_definer_conv_item_quota(&cd, items, i);
+ item_flags |= MLX5_FLOW_ITEM_QUOTA;
+ break;
case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
ret = mlx5dr_definer_conv_item_ipv6_routing_ext(&cd, items, i);
item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v3 0/5] net/mlx5: support indirect quota flow action
2023-01-18 12:55 [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
` (6 preceding siblings ...)
2023-03-08 17:01 ` [PATCH v2 " Gregory Etelson
@ 2023-05-07 7:39 ` Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
` (5 more replies)
7 siblings, 6 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-05-07 7:39 UTC (permalink / raw)
To: dev; +Cc: getelson, mkashani, rasland
1. Prepare MLX5 PMD for upcoming indirect quota action.
2. Support query_update API.
3. Support indirect quota action.
v3: prepare patches for dpdk-23.07
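For orientation, a rough application-side sketch of what this series enables:
create an indirect quota action, then query the remaining tokens through the
handle. This is illustrative only and not part of the series; the field names
of struct rte_flow_action_quota (.mode, .quota) follow the generic rte_flow
quota API, and error handling is trimmed.

#include <rte_flow.h>

static int
quota_create_and_query(uint16_t port_id)
{
	struct rte_flow_error err;
	const struct rte_flow_indir_action_conf indir_conf = { .ingress = 1 };
	const struct rte_flow_action_quota conf = {
		.mode = RTE_FLOW_QUOTA_MODE_L3,	/* assumption: meter L3 bytes */
		.quota = 1 << 20,		/* initial tokens, <= INT32_MAX */
	};
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_QUOTA,
		.conf = &conf,
	};
	struct rte_flow_query_quota last_tokens;
	struct rte_flow_action_handle *handle;

	handle = rte_flow_action_handle_create(port_id, &indir_conf, &action, &err);
	if (handle == NULL)
		return -rte_errno;
	/*
	 * The handle can now be referenced as an indirect action from
	 * template-API flow rules and matched with the quota item.
	 */
	return rte_flow_action_handle_query(port_id, handle, &last_tokens, &err);
}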
Gregory Etelson (5):
net/mlx5: update query fields in async job structure
net/mlx5: remove code duplication
common/mlx5: update MTR ASO definitions
net/mlx5: add indirect QUOTA create/query/modify
mlx5dr: Definer, translate RTE quota item
doc/guides/nics/features/mlx5.ini | 2 +
doc/guides/nics/mlx5.rst | 10 +
doc/guides/rel_notes/release_23_07.rst | 4 +
drivers/common/mlx5/mlx5_prm.h | 4 +
drivers/net/mlx5/hws/mlx5dr_definer.c | 63 +++
drivers/net/mlx5/meson.build | 1 +
drivers/net/mlx5/mlx5.h | 88 ++-
drivers/net/mlx5/mlx5_flow.c | 62 +++
drivers/net/mlx5/mlx5_flow.h | 20 +-
drivers/net/mlx5/mlx5_flow_aso.c | 10 +-
drivers/net/mlx5/mlx5_flow_hw.c | 526 ++++++++++++------
drivers/net/mlx5/mlx5_flow_quota.c | 726 +++++++++++++++++++++++++
12 files changed, 1335 insertions(+), 181 deletions(-)
create mode 100644 drivers/net/mlx5/mlx5_flow_quota.c
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v3 1/5] net/mlx5: update query fields in async job structure
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
@ 2023-05-07 7:39 ` Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 2/5] net/mlx5: remove code duplication Gregory Etelson
` (4 subsequent siblings)
5 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-05-07 7:39 UTC (permalink / raw)
To: dev; +Cc: getelson, mkashani, rasland, Viacheslav Ovsiienko, Matan Azrad
Query fields defined in `mlx5_hw_q_job` target CT type only.
The patch updates `mlx5_hw_q_job` for other query types as well.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 10 +++++-----
drivers/net/mlx5/mlx5_flow_aso.c | 2 +-
drivers/net/mlx5/mlx5_flow_hw.c | 6 +++---
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9eae692037..18ac90dfe2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -368,11 +368,11 @@ struct mlx5_hw_q_job {
struct rte_flow_item *items;
union {
struct {
- /* Pointer to ct query user memory. */
- struct rte_flow_action_conntrack *profile;
- /* Pointer to ct ASO query out memory. */
- void *out_data;
- } __rte_packed;
+ /* User memory for query output */
+ void *user;
+ /* Data extracted from hardware */
+ void *hw;
+ } __rte_packed query;
struct rte_flow_item_ethdev port_spec;
struct rte_flow_item_tag tag_spec;
} __rte_packed;
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 29bd7ce9e8..0eb91c570f 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -1389,7 +1389,7 @@ mlx5_aso_ct_sq_query_single(struct mlx5_dev_ctx_shared *sh,
struct mlx5_hw_q_job *job = (struct mlx5_hw_q_job *)user_data;
sq->elts[wqe_idx].ct = user_data;
- job->out_data = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
+ job->query.hw = (char *)((uintptr_t)sq->mr.addr + wqe_idx * 64);
} else {
sq->elts[wqe_idx].query_data = data;
sq->elts[wqe_idx].ct = ct;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 853c94af9c..2a51d3ee19 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2876,8 +2876,8 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
idx = MLX5_ACTION_CTX_CT_GET_IDX
((uint32_t)(uintptr_t)job->action);
aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
- mlx5_aso_ct_obj_analyze(job->profile,
- job->out_data);
+ mlx5_aso_ct_obj_analyze(job->query.user,
+ job->query.hw);
aso_ct->state = ASO_CONNTRACK_READY;
}
}
@@ -8619,7 +8619,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_CT:
aso = true;
if (job)
- job->profile = (struct rte_flow_action_conntrack *)data;
+ job->query.user = data;
ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
job, push, error);
break;
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v3 2/5] net/mlx5: remove code duplication
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
@ 2023-05-07 7:39 ` Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 3/5] common/mlx5: update MTR ASO definitions Gregory Etelson
` (3 subsequent siblings)
5 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-05-07 7:39 UTC (permalink / raw)
To: dev; +Cc: getelson, mkashani, rasland, Viacheslav Ovsiienko, Matan Azrad
Replace duplicated code with dedicated functions
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_flow_hw.c | 182 ++++++++++++++++----------------
2 files changed, 95 insertions(+), 93 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 18ac90dfe2..c12149b7e7 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -346,11 +346,11 @@ struct mlx5_lb_ctx {
};
/* HW steering queue job descriptor type. */
-enum {
+enum mlx5_hw_job_type {
MLX5_HW_Q_JOB_TYPE_CREATE, /* Flow create job type. */
MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
- MLX5_HW_Q_JOB_TYPE_UPDATE,
- MLX5_HW_Q_JOB_TYPE_QUERY,
+ MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
+ MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
};
#define MLX5_HW_MAX_ITEMS (16)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2a51d3ee19..350b4d99cf 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7970,6 +7970,67 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
return 0;
}
+static __rte_always_inline bool
+flow_hw_action_push(const struct rte_flow_op_attr *attr)
+{
+ return attr ? !attr->postpone : true;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_job_get(struct mlx5_priv *priv, uint32_t queue)
+{
+ return priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+}
+
+static __rte_always_inline void
+flow_hw_job_put(struct mlx5_priv *priv, uint32_t queue)
+{
+ priv->hw_q[queue].job_idx++;
+}
+
+static __rte_always_inline struct mlx5_hw_q_job *
+flow_hw_action_job_init(struct mlx5_priv *priv, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ void *user_data, void *query_data,
+ enum mlx5_hw_job_type type,
+ struct rte_flow_error *error)
+{
+ struct mlx5_hw_q_job *job;
+
+ MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
+ if (unlikely(!priv->hw_q[queue].job_idx)) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
+ "Action destroy failed due to queue full.");
+ return NULL;
+ }
+ job = flow_hw_job_get(priv, queue);
+ job->type = type;
+ job->action = handle;
+ job->user_data = user_data;
+ job->query.user = query_data;
+ return job;
+}
+
+static __rte_always_inline void
+flow_hw_action_finalize(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_hw_q_job *job,
+ bool push, bool aso, bool status)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ if (likely(status)) {
+ if (push)
+ __flow_hw_push_action(dev, queue);
+ if (!aso)
+ rte_ring_enqueue(push ?
+ priv->hw_q[queue].indir_cq :
+ priv->hw_q[queue].indir_iq,
+ job);
+ } else {
+ flow_hw_job_put(priv, queue);
+ }
+}
+
/**
* Create shared action.
*
@@ -8007,21 +8068,15 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
cnt_id_t cnt_id;
uint32_t mtr_id;
uint32_t age_idx;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx)) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Flow queue full.");
+ job = flow_hw_action_job_init(priv, queue, NULL, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
+ error);
+ if (!job)
return NULL;
- }
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_CREATE;
- job->user_data = user_data;
- push = !attr->postpone;
}
switch (action->type) {
case RTE_FLOW_ACTION_TYPE_AGE:
@@ -8084,17 +8139,9 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
break;
}
if (job) {
- if (!handle) {
- priv->hw_q[queue].job_idx++;
- return NULL;
- }
job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return handle;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
+ flow_hw_action_finalize(dev, queue, job, push, aso,
+ handle != NULL);
}
return handle;
}
@@ -8142,19 +8189,15 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
uint32_t idx = act_idx & ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1);
int ret = 0;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action update failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_UPDATE;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_UPDATE,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8217,19 +8260,8 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return 0;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return ret;
}
@@ -8268,20 +8300,16 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
struct mlx5_hw_q_job *job = NULL;
struct mlx5_aso_mtr *aso_mtr;
struct mlx5_flow_meter_info *fm;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
int ret = 0;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action destroy failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8344,19 +8372,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return ret;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return ret;
}
@@ -8595,19 +8612,15 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
int ret;
- bool push = true;
+ bool push = flow_hw_action_push(attr);
bool aso = false;
if (attr) {
- MLX5_ASSERT(queue != MLX5_HW_INV_QUEUE);
- if (unlikely(!priv->hw_q[queue].job_idx))
- return rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Action destroy failed due to queue full.");
- job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
- job->type = MLX5_HW_Q_JOB_TYPE_QUERY;
- job->user_data = user_data;
- push = !attr->postpone;
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ data, MLX5_HW_Q_JOB_TYPE_QUERY,
+ error);
+ if (!job)
+ return -rte_errno;
}
switch (type) {
case MLX5_INDIRECT_ACTION_TYPE_AGE:
@@ -8630,19 +8643,8 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
"action type not supported");
break;
}
- if (job) {
- if (ret) {
- priv->hw_q[queue].job_idx++;
- return ret;
- }
- job->action = handle;
- if (push)
- __flow_hw_push_action(dev, queue);
- if (aso)
- return ret;
- rte_ring_enqueue(push ? priv->hw_q[queue].indir_cq :
- priv->hw_q[queue].indir_iq, job);
- }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
return 0;
}
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v3 3/5] common/mlx5: update MTR ASO definitions
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 2/5] net/mlx5: remove code duplication Gregory Etelson
@ 2023-05-07 7:39 ` Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 4/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
` (2 subsequent siblings)
5 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-05-07 7:39 UTC (permalink / raw)
To: dev; +Cc: getelson, mkashani, rasland, Viacheslav Ovsiienko, Matan Azrad
Update MTR ASO definitions for QUOTA flow action.
Quota flow action requires WQE READ capability and access to
token fields.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index ed3d5efbb7..9ba3c5d008 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3949,6 +3949,8 @@ enum mlx5_aso_op {
ASO_OPER_LOGICAL_OR = 0x1,
};
+#define MLX5_ASO_CSEG_READ_ENABLE 1
+
/* ASO WQE CTRL segment. */
struct mlx5_aso_cseg {
uint32_t va_h;
@@ -3963,6 +3965,8 @@ struct mlx5_aso_cseg {
uint64_t data_mask;
} __rte_packed;
+#define MLX5_MTR_MAX_TOKEN_VALUE INT32_MAX
+
/* A meter data segment - 2 per ASO WQE. */
struct mlx5_aso_mtr_dseg {
uint32_t v_bo_sc_bbog_mm;
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v3 4/5] net/mlx5: add indirect QUOTA create/query/modify
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
` (2 preceding siblings ...)
2023-05-07 7:39 ` [PATCH v3 3/5] common/mlx5: update MTR ASO definitions Gregory Etelson
@ 2023-05-07 7:39 ` Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 5/5] mlx5dr: Definer, translate RTE quota item Gregory Etelson
2023-05-25 14:18 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Raslan Darawsheh
5 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-05-07 7:39 UTC (permalink / raw)
To: dev; +Cc: getelson, mkashani, rasland, Viacheslav Ovsiienko, Matan Azrad
Implement HWS functions for indirect QUOTA creation, modification and
query.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
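Usage note, not part of the commit: atomically reading the remaining tokens
and refilling the bucket goes through the new query_update path. The sketch
below assumes the update object is struct rte_flow_update_quota (.op, .quota)
as defined by the generic quota API; what this patch enforces is that only
RTE_FLOW_QU_QUERY_FIRST mode is accepted and that an ADD update may not
directly follow another ADD.

#include <rte_flow.h>

/* Sketch only: read the remaining tokens and refill in one call. */
static int
quota_refill(uint16_t port_id, struct rte_flow_action_handle *handle,
	     struct rte_flow_query_quota *before_refill)
{
	struct rte_flow_error err;
	const struct rte_flow_update_quota refill = {
		.op = RTE_FLOW_UPDATE_QUOTA_SET, /* after ADD, next update must be SET */
		.quota = 1 << 20,		 /* <= INT32_MAX */
	};

	/* mlx5 accepts quota query_update in QUERY_FIRST mode only. */
	return rte_flow_action_handle_query_update(port_id, handle, &refill,
						   before_refill,
						   RTE_FLOW_QU_QUERY_FIRST, &err);
}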
doc/guides/nics/features/mlx5.ini | 2 +
doc/guides/nics/mlx5.rst | 10 +
doc/guides/rel_notes/release_23_07.rst | 4 +
drivers/net/mlx5/meson.build | 1 +
drivers/net/mlx5/mlx5.h | 72 +++
drivers/net/mlx5/mlx5_flow.c | 62 +++
drivers/net/mlx5/mlx5_flow.h | 20 +-
drivers/net/mlx5/mlx5_flow_aso.c | 8 +-
drivers/net/mlx5/mlx5_flow_hw.c | 342 +++++++++---
drivers/net/mlx5/mlx5_flow_quota.c | 726 +++++++++++++++++++++++++
10 files changed, 1166 insertions(+), 81 deletions(-)
create mode 100644 drivers/net/mlx5/mlx5_flow_quota.c
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 0650e02e2d..83d2f25660 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -84,6 +84,7 @@ mpls = Y
nvgre = Y
port_id = Y
port_representor = Y
+quota = Y
tag = Y
tcp = Y
udp = Y
@@ -115,6 +116,7 @@ of_push_vlan = Y
of_set_vlan_pcp = Y
of_set_vlan_vid = Y
port_id = Y
+quota = I
queue = Y
raw_decap = Y
raw_encap = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 7a137d5f6a..db089591ab 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -162,6 +162,7 @@ Features
- Sub-Function.
- Matching on represented port.
- Matching on aggregated affinity.
+- Flow quota.
Limitations
@@ -694,6 +695,15 @@ Limitations
The flow engine of a process cannot move from active to standby mode
if preceding active application rules are still present and vice versa.
+- Quota:
+
+ - Quota is implemented for HWS / template API.
+ - Maximal value for quota SET and ADD operations is INT32_MAX (2GB).
+ - Application cannot use 2 consecutive ADD updates.
+ The next tokens update after an ADD must always be SET.
+ - Quota flow action cannot be used with Meter or CT flow actions in the same rule.
+ - Quota flow action and item are supported in non-root HWS tables.
+ - Maximal number of HW quota and HW meter objects <= 16e6.
Statistics
----------
diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst
index a9b1293689..e9c41c4027 100644
--- a/doc/guides/rel_notes/release_23_07.rst
+++ b/doc/guides/rel_notes/release_23_07.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+ * **Updated NVIDIA mlx5 driver.**
+
+ * Added support for quota flow action and item.
+
Removed Items
-------------
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index 623d60c1a2..6ef5083ea4 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -47,6 +47,7 @@ if is_linux
sources += files(
'mlx5_flow_hw.c',
'mlx5_hws_cnt.c',
+ 'mlx5_flow_quota.c',
'mlx5_flow_verbs.c',
)
endif
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c12149b7e7..04febd3282 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -47,6 +47,14 @@
#define MLX5_HW_INV_QUEUE UINT32_MAX
+/*
+ * The default ipool threshold value indicates which per_core_cache
+ * value to set.
+ */
+#define MLX5_HW_IPOOL_SIZE_THRESHOLD (1 << 19)
+/* The default min local cache size. */
+#define MLX5_HW_IPOOL_CACHE_MIN (1 << 9)
+
/*
* Number of modification commands.
* The maximal actions amount in FW is some constant, and it is 16 in the
@@ -351,6 +359,7 @@ enum mlx5_hw_job_type {
MLX5_HW_Q_JOB_TYPE_DESTROY, /* Flow destroy job type. */
MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
+ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
};
#define MLX5_HW_MAX_ITEMS (16)
@@ -592,6 +601,7 @@ struct mlx5_aso_sq_elem {
char *query_data;
};
void *user_data;
+ struct mlx5_quota *quota_obj;
};
};
@@ -1686,6 +1696,33 @@ struct mlx5_flow_hw_attr {
struct mlx5_flow_hw_ctrl_rx;
+enum mlx5_quota_state {
+ MLX5_QUOTA_STATE_FREE, /* quota not in use */
+ MLX5_QUOTA_STATE_READY, /* quota is ready */
+ MLX5_QUOTA_STATE_WAIT /* quota waits WR completion */
+};
+
+struct mlx5_quota {
+ uint8_t state; /* object state */
+ uint8_t mode; /* metering mode */
+ /**
+ * Keep track of application update types.
+ * PMD does not allow 2 consecutive ADD updates.
+ */
+ enum rte_flow_update_quota_op last_update;
+};
+
+/* Bulk management structure for flow quota. */
+struct mlx5_quota_ctx {
+ uint32_t nb_quotas; /* Total number of quota objects */
+ struct mlx5dr_action *dr_action; /* HWS action */
+ struct mlx5_devx_obj *devx_obj; /* DEVX ranged object. */
+ struct mlx5_pmd_mr mr; /* MR for READ from MTR ASO */
+ struct mlx5_aso_mtr_dseg **read_buf; /* Buffers for READ */
+ struct mlx5_aso_sq *sq; /* SQs for sync/async ACCESS_ASO WRs */
+ struct mlx5_indexed_pool *quota_ipool; /* Manage quota objects */
+};
+
struct mlx5_priv {
struct rte_eth_dev_data *dev_data; /* Pointer to device data. */
struct mlx5_dev_ctx_shared *sh; /* Shared device context. */
@@ -1776,6 +1813,7 @@ struct mlx5_priv {
struct mlx5_flow_meter_policy *mtr_policy_arr; /* Policy array. */
struct mlx5_l3t_tbl *mtr_idx_tbl; /* Meter index lookup table. */
struct mlx5_mtr_bulk mtr_bulk; /* Meter index mapping for HWS */
+ struct mlx5_quota_ctx quota_ctx; /* Quota index mapping for HWS */
uint8_t skip_default_rss_reta; /* Skip configuration of default reta. */
uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. */
struct mlx5_mp_id mp_id; /* ID of a multi-process process */
@@ -2273,6 +2311,15 @@ int mlx5_aso_ct_queue_init(struct mlx5_dev_ctx_shared *sh,
uint32_t nb_queues);
int mlx5_aso_ct_queue_uninit(struct mlx5_dev_ctx_shared *sh,
struct mlx5_aso_ct_pools_mng *ct_mng);
+int
+mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
+ void *uar, uint16_t log_desc_n);
+void
+mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq);
+void
+mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq);
/* mlx5_flow_flex.c */
@@ -2310,6 +2357,31 @@ struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx,
void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
struct mlx5_list_entry *entry);
+int
+mlx5_flow_quota_destroy(struct rte_eth_dev *dev);
+int
+mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas);
+struct rte_flow_action_handle *
+mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_quota *conf,
+ struct mlx5_hw_q_job *job, bool push,
+ struct rte_flow_error *error);
+void
+mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_hw_q_job *job);
+int
+mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue,
+ struct rte_flow_action_handle *handle,
+ const struct rte_flow_action *update,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error);
+int mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error);
+
int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev);
void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d0275fdd00..6558ddd768 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1095,6 +1095,20 @@ mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
void *data,
void *user_data,
struct rte_flow_error *error);
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error);
+static int
+mlx5_flow_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error);
static const struct rte_flow_ops mlx5_flow_ops = {
.validate = mlx5_flow_validate,
@@ -1110,6 +1124,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.action_handle_destroy = mlx5_action_handle_destroy,
.action_handle_update = mlx5_action_handle_update,
.action_handle_query = mlx5_action_handle_query,
+ .action_handle_query_update = mlx5_action_handle_query_update,
.tunnel_decap_set = mlx5_flow_tunnel_decap_set,
.tunnel_match = mlx5_flow_tunnel_match,
.tunnel_action_decap_release = mlx5_flow_tunnel_action_release,
@@ -1133,6 +1148,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.push = mlx5_flow_push,
.async_action_handle_create = mlx5_flow_async_action_handle_create,
.async_action_handle_update = mlx5_flow_async_action_handle_update,
+ .async_action_handle_query_update =
+ mlx5_flow_async_action_handle_query_update,
.async_action_handle_query = mlx5_flow_async_action_handle_query,
.async_action_handle_destroy = mlx5_flow_async_action_handle_destroy,
};
@@ -9464,6 +9481,27 @@ mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
update, user_data, error);
}
+static int
+mlx5_flow_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error)
+{
+ const struct mlx5_flow_driver_ops *fops =
+ flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+
+ if (!fops || !fops->async_action_query_update)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "async query_update not supported");
+ return fops->async_action_query_update
+ (dev, queue_id, op_attr, action_handle,
+ update, query, qu_mode, user_data, error);
+}
+
/**
* Query shared action.
*
@@ -10602,6 +10640,30 @@ mlx5_action_handle_query(struct rte_eth_dev *dev,
return flow_drv_action_query(dev, handle, data, fops, error);
}
+static int
+mlx5_action_handle_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_attr attr = { .transfer = 0 };
+ enum mlx5_flow_drv_type drv_type = flow_get_drv_type(dev, &attr);
+ const struct mlx5_flow_driver_ops *fops;
+
+ if (drv_type == MLX5_FLOW_TYPE_MIN || drv_type == MLX5_FLOW_TYPE_MAX)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "invalid driver type");
+ fops = flow_get_drv_ops(drv_type);
+ if (!fops || !fops->action_query_update)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no query_update handler");
+ return fops->action_query_update(dev, handle, update,
+ query, qu_mode, error);
+}
+
/**
* Destroy all indirect actions (shared RSS).
*
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1d116ea0f6..22363fb0b7 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -71,6 +71,7 @@ enum {
MLX5_INDIRECT_ACTION_TYPE_COUNT,
MLX5_INDIRECT_ACTION_TYPE_CT,
MLX5_INDIRECT_ACTION_TYPE_METER_MARK,
+ MLX5_INDIRECT_ACTION_TYPE_QUOTA,
};
/* Now, the maximal ports will be supported is 16, action number is 32M. */
@@ -219,6 +220,8 @@ enum mlx5_feature_name {
/* Meter color item */
#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
+#define MLX5_FLOW_ITEM_QUOTA (UINT64_C(1) << 45)
+
/* IPv6 routing extension item */
#define MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT (UINT64_C(1) << 45)
@@ -311,6 +314,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_SEND_TO_KERNEL (1ull << 42)
#define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43)
#define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
+#define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
#define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1714,6 +1718,12 @@ typedef int (*mlx5_flow_action_query_t)
const struct rte_flow_action_handle *action,
void *data,
struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_query_update_t)
+ (struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *data,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error);
typedef int (*mlx5_flow_sync_domain_t)
(struct rte_eth_dev *dev,
uint32_t domains,
@@ -1870,7 +1880,13 @@ typedef int (*mlx5_flow_async_action_handle_update_t)
const void *update,
void *user_data,
struct rte_flow_error *error);
-
+typedef int (*mlx5_flow_async_action_handle_query_update_t)
+ (struct rte_eth_dev *dev, uint32_t queue_id,
+ const struct rte_flow_op_attr *op_attr,
+ struct rte_flow_action_handle *action_handle,
+ const void *update, void *data,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error);
typedef int (*mlx5_flow_async_action_handle_query_t)
(struct rte_eth_dev *dev,
uint32_t queue,
@@ -1921,6 +1937,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_action_destroy_t action_destroy;
mlx5_flow_action_update_t action_update;
mlx5_flow_action_query_t action_query;
+ mlx5_flow_action_query_update_t action_query_update;
mlx5_flow_sync_domain_t sync_domain;
mlx5_flow_discover_priorities_t discover_priorities;
mlx5_flow_item_create_t item_create;
@@ -1943,6 +1960,7 @@ struct mlx5_flow_driver_ops {
mlx5_flow_push_t push;
mlx5_flow_async_action_handle_create_t async_action_create;
mlx5_flow_async_action_handle_update_t async_action_update;
+ mlx5_flow_async_action_handle_query_update_t async_action_query_update;
mlx5_flow_async_action_handle_query_t async_action_query;
mlx5_flow_async_action_handle_destroy_t async_action_destroy;
};
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index 0eb91c570f..3c08da0614 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -74,7 +74,7 @@ mlx5_aso_reg_mr(struct mlx5_common_device *cdev, size_t length,
* @param[in] sq
* ASO SQ to destroy.
*/
-static void
+void
mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)
{
mlx5_devx_sq_destroy(&sq->sq_obj);
@@ -148,7 +148,7 @@ mlx5_aso_age_init_sq(struct mlx5_aso_sq *sq)
* @param[in] sq
* ASO SQ to initialize.
*/
-static void
+void
mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq)
{
volatile struct mlx5_aso_wqe *restrict wqe;
@@ -219,7 +219,7 @@ mlx5_aso_ct_init_sq(struct mlx5_aso_sq *sq)
* @return
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static int
+int
mlx5_aso_sq_create(struct mlx5_common_device *cdev, struct mlx5_aso_sq *sq,
void *uar, uint16_t log_desc_n)
{
@@ -504,7 +504,7 @@ mlx5_aso_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe)
* @param[in] sq
* ASO SQ to use.
*/
-static void
+void
mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq)
{
struct mlx5_aso_cq *cq = &sq->cq;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 350b4d99cf..7f5682f83b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -70,6 +70,9 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
struct mlx5_action_construct_data *act_data,
const struct mlx5_hw_actions *hw_acts,
const struct rte_flow_action *action);
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+ struct mlx5dr_rule_action *rule_act, uint32_t qid);
static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
@@ -814,6 +817,9 @@ flow_hw_shared_action_translate(struct rte_eth_dev *dev,
action_src, action_dst, idx))
return -1;
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv, &acts->rule_acts[action_dst], idx);
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
break;
@@ -1861,6 +1867,16 @@ flow_hw_shared_action_get(struct rte_eth_dev *dev,
return -1;
}
+static void
+flow_hw_construct_quota(struct mlx5_priv *priv,
+ struct mlx5dr_rule_action *rule_act, uint32_t qid)
+{
+ rule_act->action = priv->quota_ctx.dr_action;
+ rule_act->aso_meter.offset = qid - 1;
+ rule_act->aso_meter.init_color =
+ MLX5DR_ACTION_ASO_METER_COLOR_GREEN;
+}
+
/**
* Construct shared indirect action.
*
@@ -1984,6 +2000,9 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
(enum mlx5dr_action_aso_meter_color)
rte_col_2_mlx5_col(aso_mtr->init_color);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv, rule_act, idx);
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type:%d", type);
break;
@@ -2294,6 +2313,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
rule_acts[act_data->action_dst].action =
priv->hw_vport[port_action->port_id];
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ flow_hw_construct_quota(priv,
+ rule_acts + act_data->action_dst,
+ act_data->shared_meter.id);
+ break;
case RTE_FLOW_ACTION_TYPE_METER:
meter = action->conf;
mtr_id = meter->mtr_id;
@@ -2848,11 +2872,18 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
if (ret_comp < n_res && priv->hws_ctpool)
ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
&res[ret_comp], n_res - ret_comp);
+ if (ret_comp < n_res && priv->quota_ctx.sq)
+ ret_comp += mlx5_aso_pull_completion(&priv->quota_ctx.sq[queue],
+ &res[ret_comp],
+ n_res - ret_comp);
for (i = 0; i < ret_comp; i++) {
job = (struct mlx5_hw_q_job *)res[i].user_data;
/* Restore user data. */
res[i].user_data = job->user_data;
- if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+ if (MLX5_INDIRECT_ACTION_TYPE_GET(job->action) ==
+ MLX5_INDIRECT_ACTION_TYPE_QUOTA) {
+ mlx5_quota_async_completion(dev, queue, job);
+ } else if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
if (type == MLX5_INDIRECT_ACTION_TYPE_METER_MARK) {
idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
@@ -3855,6 +3886,10 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
return ret;
*action_flags |= MLX5_FLOW_ACTION_INDIRECT_AGE;
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ /* TODO: add proper quota verification */
+ *action_flags |= MLX5_FLOW_ACTION_QUOTA;
+ break;
default:
DRV_LOG(WARNING, "Unsupported shared action type: %d", type);
return rte_flow_error_set(error, ENOTSUP,
@@ -3892,19 +3927,17 @@ flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
}
static inline uint16_t
-flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
- const struct rte_flow_action masks[],
- const struct rte_flow_action *mf_action,
- const struct rte_flow_action *mf_mask,
- struct rte_flow_action *new_actions,
- struct rte_flow_action *new_masks,
- uint64_t flags, uint32_t act_num)
+flow_hw_template_expand_modify_field(struct rte_flow_action actions[],
+ struct rte_flow_action masks[],
+ const struct rte_flow_action *mf_actions,
+ const struct rte_flow_action *mf_masks,
+ uint64_t flags, uint32_t act_num,
+ uint32_t mf_num)
{
uint32_t i, tail;
MLX5_ASSERT(actions && masks);
- MLX5_ASSERT(new_actions && new_masks);
- MLX5_ASSERT(mf_action && mf_mask);
+ MLX5_ASSERT(mf_num > 0);
if (flags & MLX5_FLOW_ACTION_MODIFY_FIELD) {
/*
* Application action template already has Modify Field.
@@ -3955,12 +3988,10 @@ flow_hw_template_expand_modify_field(const struct rte_flow_action actions[],
i = 0;
insert:
tail = act_num - i; /* num action to move */
- memcpy(new_actions, actions, sizeof(actions[0]) * i);
- new_actions[i] = *mf_action;
- memcpy(new_actions + i + 1, actions + i, sizeof(actions[0]) * tail);
- memcpy(new_masks, masks, sizeof(masks[0]) * i);
- new_masks[i] = *mf_mask;
- memcpy(new_masks + i + 1, masks + i, sizeof(masks[0]) * tail);
+ memmove(actions + i + mf_num, actions + i, sizeof(actions[0]) * tail);
+ memcpy(actions + i, mf_actions, sizeof(actions[0]) * mf_num);
+ memmove(masks + i + mf_num, masks + i, sizeof(masks[0]) * tail);
+ memcpy(masks + i, mf_masks, sizeof(masks[0]) * mf_num);
return i;
}
@@ -4270,6 +4301,7 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_CT;
*curr_off = *curr_off + 1;
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
case RTE_FLOW_ACTION_TYPE_METER_MARK:
at->actions_off[action_src] = *curr_off;
action_types[*curr_off] = MLX5DR_ACTION_TYP_ASO_METER;
@@ -4528,6 +4560,95 @@ flow_hw_flex_item_release(struct rte_eth_dev *dev, uint8_t *flex_item)
*flex_item &= ~(uint8_t)RTE_BIT32(index);
}
}
+static __rte_always_inline void
+flow_hw_actions_template_replace_container(const
+ struct rte_flow_action *actions,
+ const
+ struct rte_flow_action *masks,
+ struct rte_flow_action *new_actions,
+ struct rte_flow_action *new_masks,
+ struct rte_flow_action **ra,
+ struct rte_flow_action **rm,
+ uint32_t act_num)
+{
+ memcpy(new_actions, actions, sizeof(actions[0]) * act_num);
+ memcpy(new_masks, masks, sizeof(masks[0]) * act_num);
+ *ra = (void *)(uintptr_t)new_actions;
+ *rm = (void *)(uintptr_t)new_masks;
+}
+
+#define RX_META_COPY_ACTION ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field){ \
+ .operation = RTE_FLOW_MODIFY_SET, \
+ .dst = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = REG_B, \
+ }, \
+ .src = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = REG_C_1, \
+ }, \
+ .width = 32, \
+ } \
+})
+
+#define RX_META_COPY_MASK ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field){ \
+ .operation = RTE_FLOW_MODIFY_SET, \
+ .dst = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .src = { \
+ .field = (enum rte_flow_field_id) \
+ MLX5_RTE_FLOW_FIELD_META_REG, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .width = UINT32_MAX, \
+ } \
+})
+
+#define QUOTA_COLOR_INC_ACTION ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field) { \
+ .operation = RTE_FLOW_MODIFY_ADD, \
+ .dst = { \
+ .field = RTE_FLOW_FIELD_METER_COLOR, \
+ .level = 0, .offset = 0 \
+ }, \
+ .src = { \
+ .field = RTE_FLOW_FIELD_VALUE, \
+ .level = 1, \
+ .offset = 0, \
+ }, \
+ .width = 2 \
+ } \
+})
+
+#define QUOTA_COLOR_INC_MASK ((const struct rte_flow_action) { \
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD, \
+ .conf = &(struct rte_flow_action_modify_field) { \
+ .operation = RTE_FLOW_MODIFY_ADD, \
+ .dst = { \
+ .field = RTE_FLOW_FIELD_METER_COLOR, \
+ .level = UINT32_MAX, \
+ .offset = UINT32_MAX, \
+ }, \
+ .src = { \
+ .field = RTE_FLOW_FIELD_VALUE, \
+ .level = 3, \
+ .offset = 0 \
+ }, \
+ .width = UINT32_MAX \
+ } \
+})
/**
* Create flow action template.
@@ -4567,40 +4688,9 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
int set_vlan_vid_ix = -1;
struct rte_flow_action_modify_field set_vlan_vid_spec = {0, };
struct rte_flow_action_modify_field set_vlan_vid_mask = {0, };
- const struct rte_flow_action_modify_field rx_mreg = {
- .operation = RTE_FLOW_MODIFY_SET,
- .dst = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = REG_B,
- },
- .src = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = REG_C_1,
- },
- .width = 32,
- };
- const struct rte_flow_action_modify_field rx_mreg_mask = {
- .operation = RTE_FLOW_MODIFY_SET,
- .dst = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
- .offset = UINT32_MAX,
- },
- .src = {
- .field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
- .level = UINT32_MAX,
- .offset = UINT32_MAX,
- },
- .width = UINT32_MAX,
- };
- const struct rte_flow_action rx_cpy = {
- .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
- .conf = &rx_mreg,
- };
- const struct rte_flow_action rx_cpy_mask = {
- .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
- .conf = &rx_mreg_mask,
- };
+ struct rte_flow_action mf_actions[MLX5_HW_MAX_ACTS];
+ struct rte_flow_action mf_masks[MLX5_HW_MAX_ACTS];
+ uint32_t expand_mf_num = 0;
if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks,
&action_flags, error))
@@ -4630,44 +4720,57 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
RTE_FLOW_ERROR_TYPE_ACTION, NULL, "Too many actions");
return NULL;
}
+ if (set_vlan_vid_ix != -1) {
+ /* If temporary action buffer was not used, copy template actions to it */
+ if (ra == actions)
+ flow_hw_actions_template_replace_container(actions,
+ masks,
+ tmp_action,
+ tmp_mask,
+ &ra, &rm,
+ act_num);
+ flow_hw_set_vlan_vid(dev, ra, rm,
+ &set_vlan_vid_spec, &set_vlan_vid_mask,
+ set_vlan_vid_ix);
+ action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
+ }
+ if (action_flags & MLX5_FLOW_ACTION_QUOTA) {
+ mf_actions[expand_mf_num] = QUOTA_COLOR_INC_ACTION;
+ mf_masks[expand_mf_num] = QUOTA_COLOR_INC_MASK;
+ expand_mf_num++;
+ }
if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
priv->sh->config.dv_esw_en &&
(action_flags & (MLX5_FLOW_ACTION_QUEUE | MLX5_FLOW_ACTION_RSS))) {
/* Insert META copy */
- if (act_num + 1 > MLX5_HW_MAX_ACTS) {
+ mf_actions[expand_mf_num] = RX_META_COPY_ACTION;
+ mf_masks[expand_mf_num] = RX_META_COPY_MASK;
+ expand_mf_num++;
+ }
+ if (expand_mf_num) {
+ if (act_num + expand_mf_num > MLX5_HW_MAX_ACTS) {
rte_flow_error_set(error, E2BIG,
RTE_FLOW_ERROR_TYPE_ACTION,
NULL, "cannot expand: too many actions");
return NULL;
}
+ if (ra == actions)
+ flow_hw_actions_template_replace_container(actions,
+ masks,
+ tmp_action,
+ tmp_mask,
+ &ra, &rm,
+ act_num);
/* Application should make sure only one Q/RSS exist in one rule. */
- pos = flow_hw_template_expand_modify_field(actions, masks,
- &rx_cpy,
- &rx_cpy_mask,
- tmp_action, tmp_mask,
+ pos = flow_hw_template_expand_modify_field(ra, rm,
+ mf_actions,
+ mf_masks,
action_flags,
- act_num);
- ra = tmp_action;
- rm = tmp_mask;
- act_num++;
+ act_num,
+ expand_mf_num);
+ act_num += expand_mf_num;
action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD;
}
- if (set_vlan_vid_ix != -1) {
- /* If temporary action buffer was not used, copy template actions to it */
- if (ra == actions && rm == masks) {
- for (i = 0; i < act_num; ++i) {
- tmp_action[i] = actions[i];
- tmp_mask[i] = masks[i];
- if (actions[i].type == RTE_FLOW_ACTION_TYPE_END)
- break;
- }
- ra = tmp_action;
- rm = tmp_mask;
- }
- flow_hw_set_vlan_vid(dev, ra, rm,
- &set_vlan_vid_spec, &set_vlan_vid_mask,
- set_vlan_vid_ix);
- }
act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, ra, error);
if (act_len <= 0)
return NULL;
@@ -4964,6 +5067,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_ICMP:
case RTE_FLOW_ITEM_TYPE_ICMP6:
case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST:
+ case RTE_FLOW_ITEM_TYPE_QUOTA:
case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY:
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
@@ -7357,6 +7461,12 @@ flow_hw_configure(struct rte_eth_dev *dev,
"Failed to set up Rx control flow templates");
goto err;
}
+ /* Initialize quotas */
+ if (port_attr->nb_quotas) {
+ ret = mlx5_flow_quota_init(dev, port_attr->nb_quotas);
+ if (ret)
+ goto err;
+ }
/* Initialize meter library*/
if (port_attr->nb_meters || (host_priv && host_priv->hws_mpool))
if (mlx5_flow_meter_init(dev, port_attr->nb_meters, 0, 0, nb_q_updated))
@@ -7456,6 +7566,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
priv->hws_cpool = NULL;
}
+ mlx5_flow_quota_destroy(dev);
flow_hw_free_vport_actions(priv);
for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
if (priv->hw_drop[i])
@@ -7555,6 +7666,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
flow_hw_ct_mng_destroy(dev, priv->ct_mng);
priv->ct_mng = NULL;
}
+ mlx5_flow_quota_destroy(dev);
for (i = 0; i < priv->nb_queue; i++) {
rte_ring_free(priv->hw_q[i].indir_iq);
rte_ring_free(priv->hw_q[i].indir_cq);
@@ -7962,6 +8074,8 @@ flow_hw_action_handle_validate(struct rte_eth_dev *dev, uint32_t queue,
return flow_hw_validate_action_meter_mark(dev, action, error);
case RTE_FLOW_ACTION_TYPE_RSS:
return flow_dv_action_validate(dev, conf, action, error);
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ return 0;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
@@ -8133,6 +8247,11 @@ flow_hw_action_handle_create(struct rte_eth_dev *dev, uint32_t queue,
case RTE_FLOW_ACTION_TYPE_RSS:
handle = flow_dv_action_create(dev, conf, action, error);
break;
+ case RTE_FLOW_ACTION_TYPE_QUOTA:
+ aso = true;
+ handle = mlx5_quota_alloc(dev, queue, action->conf,
+ job, push, error);
+ break;
default:
rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
NULL, "action type not supported");
@@ -8253,6 +8372,11 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_RSS:
ret = flow_dv_action_update(dev, handle, update, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ aso = true;
+ ret = mlx5_quota_query_update(dev, queue, handle, update, NULL,
+ job, push, error);
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -8365,6 +8489,8 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
case MLX5_INDIRECT_ACTION_TYPE_RSS:
ret = flow_dv_action_destroy(dev, handle, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -8636,6 +8762,11 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
job, push, error);
break;
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ aso = true;
+ ret = mlx5_quota_query(dev, queue, handle, data,
+ job, push, error);
+ break;
default:
ret = -ENOTSUP;
rte_flow_error_set(error, ENOTSUP,
@@ -8645,7 +8776,51 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
}
if (job)
flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
- return 0;
+ return ret;
+}
+
+static int
+flow_hw_async_action_handle_query_update
+ (struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_op_attr *attr,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ void *user_data, struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ bool push = flow_hw_action_push(attr);
+ bool aso = false;
+ struct mlx5_hw_q_job *job = NULL;
+ int ret = 0;
+
+ if (attr) {
+ job = flow_hw_action_job_init(priv, queue, handle, user_data,
+ query,
+ MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY,
+ error);
+ if (!job)
+ return -rte_errno;
+ }
+ switch (MLX5_INDIRECT_ACTION_TYPE_GET(handle)) {
+ case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
+ if (qu_mode != RTE_FLOW_QU_QUERY_FIRST) {
+ ret = rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL, "quota action must query before update");
+ break;
+ }
+ aso = true;
+ ret = mlx5_quota_query_update(dev, queue, handle,
+ update, query, job, push, error);
+ break;
+ default:
+ ret = rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL, "update and query not supported");
+ }
+ if (job)
+ flow_hw_action_finalize(dev, queue, job, push, aso, ret == 0);
+ return ret;
}
static int
@@ -8657,6 +8832,19 @@ flow_hw_action_query(struct rte_eth_dev *dev,
handle, data, NULL, error);
}
+static int
+flow_hw_action_query_update(struct rte_eth_dev *dev,
+ struct rte_flow_action_handle *handle,
+ const void *update, void *query,
+ enum rte_flow_query_update_mode qu_mode,
+ struct rte_flow_error *error)
+{
+ return flow_hw_async_action_handle_query_update(dev, MLX5_HW_INV_QUEUE,
+ NULL, handle, update,
+ query, qu_mode, NULL,
+ error);
+}
+
/**
* Get aged-out flows of a given port on the given HWS flow queue.
*
@@ -8770,12 +8958,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
.async_action_create = flow_hw_action_handle_create,
.async_action_destroy = flow_hw_action_handle_destroy,
.async_action_update = flow_hw_action_handle_update,
+ .async_action_query_update = flow_hw_async_action_handle_query_update,
.async_action_query = flow_hw_action_handle_query,
.action_validate = flow_hw_action_validate,
.action_create = flow_hw_action_create,
.action_destroy = flow_hw_action_destroy,
.action_update = flow_hw_action_update,
.action_query = flow_hw_action_query,
+ .action_query_update = flow_hw_action_query_update,
.query = flow_hw_query,
.get_aged_flows = flow_hw_get_aged_flows,
.get_q_aged_flows = flow_hw_get_q_aged_flows,
diff --git a/drivers/net/mlx5/mlx5_flow_quota.c b/drivers/net/mlx5/mlx5_flow_quota.c
new file mode 100644
index 0000000000..78595f7193
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_flow_quota.c
@@ -0,0 +1,726 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Nvidia Inc. All rights reserved.
+ */
+#include <stddef.h>
+#include <rte_eal_paging.h>
+
+#include "mlx5_utils.h"
+#include "mlx5_flow.h"
+
+#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
+
+typedef void (*quota_wqe_cmd_t)(volatile struct mlx5_aso_wqe *restrict,
+ struct mlx5_quota_ctx *, uint32_t, uint32_t,
+ void *);
+
+#define MLX5_ASO_MTR1_INIT_MASK 0xffffffffULL
+#define MLX5_ASO_MTR0_INIT_MASK ((MLX5_ASO_MTR1_INIT_MASK) << 32)
+
+static __rte_always_inline bool
+is_aso_mtr1_obj(uint32_t qix)
+{
+ return (qix & 1) != 0;
+}
+
+static __rte_always_inline bool
+is_quota_sync_queue(const struct mlx5_priv *priv, uint32_t queue)
+{
+ return queue >= priv->nb_queue - 1;
+}
+
+static __rte_always_inline uint32_t
+quota_sync_queue(const struct mlx5_priv *priv)
+{
+ return priv->nb_queue - 1;
+}
+
+static __rte_always_inline uint32_t
+mlx5_quota_wqe_read_offset(uint32_t qix, uint32_t sq_index)
+{
+ return 2 * sq_index + (qix & 1);
+}
+
+static int32_t
+mlx5_quota_fetch_tokens(const struct mlx5_aso_mtr_dseg *rd_buf)
+{
+ int c_tok = (int)rte_be_to_cpu_32(rd_buf->c_tokens);
+ int e_tok = (int)rte_be_to_cpu_32(rd_buf->e_tokens);
+ int result;
+
+ DRV_LOG(DEBUG, "c_tokens %d e_tokens %d\n",
+ rte_be_to_cpu_32(rd_buf->c_tokens),
+ rte_be_to_cpu_32(rd_buf->e_tokens));
+ /* Query after SET ignores negative E tokens */
+ if (c_tok >= 0 && e_tok < 0)
+ result = c_tok;
+ /**
+ * If the number of tokens in a Meter bucket is zero or above,
+ * Meter hardware will use that bucket and can drive its token count
+ * to a negative value.
+ * Quota may therefore discard negative C tokens in the query report.
+ * That is a known hardware limitation.
+ * Use case example:
+ *
+ * C E Result
+ * 250 250 500
+ * 50 250 300
+ * -150 250 100
+ * -150 50 50 *
+ * -150 -150 -300
+ *
+ */
+ else if (c_tok < 0 && e_tok >= 0 && (c_tok + e_tok) < 0)
+ result = e_tok;
+ else
+ result = c_tok + e_tok;
+
+ return result;
+}
+
+static void
+mlx5_quota_query_update_async_cmpl(struct mlx5_hw_q_job *job)
+{
+ struct rte_flow_query_quota *query = job->query.user;
+
+ query->quota = mlx5_quota_fetch_tokens(job->query.hw);
+}
+
+void
+mlx5_quota_async_completion(struct rte_eth_dev *dev, uint32_t queue,
+ struct mlx5_hw_q_job *job)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t qix = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
+ struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, qix);
+
+ RTE_SET_USED(queue);
+ qobj->state = MLX5_QUOTA_STATE_READY;
+ switch (job->type) {
+ case MLX5_HW_Q_JOB_TYPE_CREATE:
+ break;
+ case MLX5_HW_Q_JOB_TYPE_QUERY:
+ case MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY:
+ mlx5_quota_query_update_async_cmpl(job);
+ break;
+ default:
+ break;
+ }
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_set_aso_read(volatile struct mlx5_aso_wqe *restrict wqe,
+ struct mlx5_quota_ctx *qctx, uint32_t queue)
+{
+ struct mlx5_aso_sq *sq = qctx->sq + queue;
+ uint32_t sq_mask = (1 << sq->log_desc_n) - 1;
+ uint32_t sq_head = sq->head & sq_mask;
+ uint64_t rd_addr = (uint64_t)(qctx->read_buf[queue] + 2 * sq_head);
+
+ wqe->aso_cseg.lkey = rte_cpu_to_be_32(qctx->mr.lkey);
+ wqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(rd_addr >> 32));
+ wqe->aso_cseg.va_l_r = rte_cpu_to_be_32(((uint32_t)rd_addr) |
+ MLX5_ASO_CSEG_READ_ENABLE);
+}
+
+#define MLX5_ASO_MTR1_ADD_MASK 0x00000F00ULL
+#define MLX5_ASO_MTR1_SET_MASK 0x000F0F00ULL
+#define MLX5_ASO_MTR0_ADD_MASK ((MLX5_ASO_MTR1_ADD_MASK) << 32)
+#define MLX5_ASO_MTR0_SET_MASK ((MLX5_ASO_MTR1_SET_MASK) << 32)
+
+static __rte_always_inline void
+mlx5_quota_wqe_set_mtr_tokens(volatile struct mlx5_aso_wqe *restrict wqe,
+ uint32_t qix, void *arg)
+{
+ volatile struct mlx5_aso_mtr_dseg *mtr_dseg;
+ const struct rte_flow_update_quota *conf = arg;
+ bool set_op = (conf->op == RTE_FLOW_UPDATE_QUOTA_SET);
+
+ if (is_aso_mtr1_obj(qix)) {
+ wqe->aso_cseg.data_mask = set_op ?
+ RTE_BE64(MLX5_ASO_MTR1_SET_MASK) :
+ RTE_BE64(MLX5_ASO_MTR1_ADD_MASK);
+ mtr_dseg = wqe->aso_dseg.mtrs + 1;
+ } else {
+ wqe->aso_cseg.data_mask = set_op ?
+ RTE_BE64(MLX5_ASO_MTR0_SET_MASK) :
+ RTE_BE64(MLX5_ASO_MTR0_ADD_MASK);
+ mtr_dseg = wqe->aso_dseg.mtrs;
+ }
+ if (set_op) {
+ /* prevent using E tokens when C tokens exhausted */
+ mtr_dseg->e_tokens = -1;
+ mtr_dseg->c_tokens = rte_cpu_to_be_32(conf->quota);
+ } else {
+ mtr_dseg->e_tokens = rte_cpu_to_be_32(conf->quota);
+ }
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_query(volatile struct mlx5_aso_wqe *restrict wqe,
+ struct mlx5_quota_ctx *qctx, __rte_unused uint32_t qix,
+ uint32_t queue, __rte_unused void *arg)
+{
+ mlx5_quota_wqe_set_aso_read(wqe, qctx, queue);
+ wqe->aso_cseg.data_mask = 0ull; /* clear MTR ASO data modification */
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_update(volatile struct mlx5_aso_wqe *restrict wqe,
+ __rte_unused struct mlx5_quota_ctx *qctx, uint32_t qix,
+ __rte_unused uint32_t queue, void *arg)
+{
+ mlx5_quota_wqe_set_mtr_tokens(wqe, qix, arg);
+ wqe->aso_cseg.va_l_r = 0; /* clear READ flag */
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_query_update(volatile struct mlx5_aso_wqe *restrict wqe,
+ struct mlx5_quota_ctx *qctx, uint32_t qix,
+ uint32_t queue, void *arg)
+{
+ mlx5_quota_wqe_set_aso_read(wqe, qctx, queue);
+ mlx5_quota_wqe_set_mtr_tokens(wqe, qix, arg);
+}
+
+static __rte_always_inline void
+mlx5_quota_set_init_wqe(volatile struct mlx5_aso_wqe *restrict wqe,
+ struct mlx5_quota_ctx *qctx, uint32_t qix,
+ __rte_unused uint32_t queue, void *arg)
+{
+ volatile struct mlx5_aso_mtr_dseg *mtr_dseg;
+ const struct rte_flow_action_quota *conf = arg;
+ const struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, qix + 1);
+
+ if (is_aso_mtr1_obj(qix)) {
+ wqe->aso_cseg.data_mask =
+ rte_cpu_to_be_64(MLX5_ASO_MTR1_INIT_MASK);
+ mtr_dseg = wqe->aso_dseg.mtrs + 1;
+ } else {
+ wqe->aso_cseg.data_mask =
+ rte_cpu_to_be_64(MLX5_ASO_MTR0_INIT_MASK);
+ mtr_dseg = wqe->aso_dseg.mtrs;
+ }
+ mtr_dseg->e_tokens = -1;
+ mtr_dseg->c_tokens = rte_cpu_to_be_32(conf->quota);
+ mtr_dseg->v_bo_sc_bbog_mm |= rte_cpu_to_be_32
+ (qobj->mode << ASO_DSEG_MTR_MODE);
+}
+
+static __rte_always_inline void
+mlx5_quota_cmd_completed_status(struct mlx5_aso_sq *sq, uint16_t n)
+{
+ uint16_t i, mask = (1 << sq->log_desc_n) - 1;
+
+ for (i = 0; i < n; i++) {
+ uint8_t state = MLX5_QUOTA_STATE_WAIT;
+ struct mlx5_quota *quota_obj =
+ sq->elts[(sq->tail + i) & mask].quota_obj;
+
+ __atomic_compare_exchange_n("a_obj->state, &state,
+ MLX5_QUOTA_STATE_READY, false,
+ __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ }
+}
+
+static void
+mlx5_quota_cmd_completion_handle(struct mlx5_aso_sq *sq)
+{
+ struct mlx5_aso_cq *cq = &sq->cq;
+ volatile struct mlx5_cqe *restrict cqe;
+ const unsigned int cq_size = 1 << cq->log_desc_n;
+ const unsigned int mask = cq_size - 1;
+ uint32_t idx;
+ uint32_t next_idx = cq->cq_ci & mask;
+ uint16_t max;
+ uint16_t n = 0;
+ int ret;
+
+ MLX5_ASSERT(rte_spinlock_is_locked(&sq->sqsl));
+ max = (uint16_t)(sq->head - sq->tail);
+ if (unlikely(!max))
+ return;
+ do {
+ idx = next_idx;
+ next_idx = (cq->cq_ci + 1) & mask;
+ rte_prefetch0(&cq->cq_obj.cqes[next_idx]);
+ cqe = &cq->cq_obj.cqes[idx];
+ ret = check_cqe(cqe, cq_size, cq->cq_ci);
+ /*
+ * Be sure owner read is done before any other cookie field or
+ * opaque field.
+ */
+ rte_io_rmb();
+ if (ret != MLX5_CQE_STATUS_SW_OWN) {
+ if (likely(ret == MLX5_CQE_STATUS_HW_OWN))
+ break;
+ mlx5_aso_cqe_err_handle(sq);
+ } else {
+ n++;
+ }
+ cq->cq_ci++;
+ } while (1);
+ if (likely(n)) {
+ mlx5_quota_cmd_completed_status(sq, n);
+ sq->tail += n;
+ rte_io_wmb();
+ cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+ }
+}
+
+static int
+mlx5_quota_cmd_wait_cmpl(struct mlx5_aso_sq *sq, struct mlx5_quota *quota_obj)
+{
+ uint32_t poll_cqe_times = MLX5_MTR_POLL_WQE_CQE_TIMES;
+
+ do {
+ rte_spinlock_lock(&sq->sqsl);
+ mlx5_quota_cmd_completion_handle(sq);
+ rte_spinlock_unlock(&sq->sqsl);
+ if (__atomic_load_n("a_obj->state, __ATOMIC_RELAXED) ==
+ MLX5_QUOTA_STATE_READY)
+ return 0;
+ } while (poll_cqe_times -= MLX5_ASO_WQE_CQE_RESPONSE_DELAY);
+ DRV_LOG(ERR, "QUOTA: failed to poll command CQ");
+ return -1;
+}
+
+static int
+mlx5_quota_cmd_wqe(struct rte_eth_dev *dev, struct mlx5_quota *quota_obj,
+ quota_wqe_cmd_t wqe_cmd, uint32_t qix, uint32_t queue,
+ struct mlx5_hw_q_job *job, bool push, void *arg)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ struct mlx5_aso_sq *sq = qctx->sq + queue;
+ uint32_t head, sq_mask = (1 << sq->log_desc_n) - 1;
+ bool sync_queue = is_quota_sync_queue(priv, queue);
+ volatile struct mlx5_aso_wqe *restrict wqe;
+ int ret = 0;
+
+ if (sync_queue)
+ rte_spinlock_lock(&sq->sqsl);
+ head = sq->head & sq_mask;
+ wqe = &sq->sq_obj.aso_wqes[head];
+ wqe_cmd(wqe, qctx, qix, queue, arg);
+ wqe->general_cseg.misc = rte_cpu_to_be_32(qctx->devx_obj->id + (qix >> 1));
+ wqe->general_cseg.opcode = rte_cpu_to_be_32
+ (ASO_OPC_MOD_POLICER << WQE_CSEG_OPC_MOD_OFFSET |
+ sq->pi << WQE_CSEG_WQE_INDEX_OFFSET | MLX5_OPCODE_ACCESS_ASO);
+ sq->head++;
+ sq->pi += 2; /* Each WQE contains 2 WQEBB */
+ if (push) {
+ mlx5_doorbell_ring(&sh->tx_uar.bf_db, *(volatile uint64_t *)wqe,
+ sq->pi, &sq->sq_obj.db_rec[MLX5_SND_DBR],
+ !sh->tx_uar.dbnc);
+ sq->db_pi = sq->pi;
+ }
+ sq->db = wqe;
+ job->query.hw = qctx->read_buf[queue] +
+ mlx5_quota_wqe_read_offset(qix, head);
+ sq->elts[head].quota_obj = sync_queue ?
+ quota_obj : (typeof(quota_obj))job;
+ if (sync_queue) {
+ rte_spinlock_unlock(&sq->sqsl);
+ ret = mlx5_quota_cmd_wait_cmpl(sq, quota_obj);
+ }
+ return ret;
+}
+
+static void
+mlx5_quota_destroy_sq(struct mlx5_priv *priv)
+{
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t i, nb_queues = priv->nb_queue;
+
+ if (!qctx->sq)
+ return;
+ for (i = 0; i < nb_queues; i++)
+ mlx5_aso_destroy_sq(qctx->sq + i);
+ mlx5_free(qctx->sq);
+}
+
+static __rte_always_inline void
+mlx5_quota_wqe_init_common(struct mlx5_aso_sq *sq,
+ volatile struct mlx5_aso_wqe *restrict wqe)
+{
+#define ASO_MTR_DW0 RTE_BE32(1 << ASO_DSEG_VALID_OFFSET | \
+ MLX5_FLOW_COLOR_GREEN << ASO_DSEG_SC_OFFSET)
+
+ memset((void *)(uintptr_t)wqe, 0, sizeof(*wqe));
+ wqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |
+ (sizeof(*wqe) >> 4));
+ wqe->aso_cseg.operand_masks = RTE_BE32
+ (0u | (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) |
+ (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_1_OPER_OFFSET) |
+ (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_0_OPER_OFFSET) |
+ (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET));
+ wqe->general_cseg.flags = RTE_BE32
+ (MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET);
+ wqe->aso_dseg.mtrs[0].v_bo_sc_bbog_mm = ASO_MTR_DW0;
+ /**
+ * ASO Meter tokens auto-update must be disabled in quota action.
+ * Tokens auto-update is disabled when the Meter CIR/EIR values are set
+ * to ((0x1u << 16) | (0x1Eu << 24)), **NOT** to 0x00.
+ */
+ wqe->aso_dseg.mtrs[0].cbs_cir = RTE_BE32((0x1u << 16) | (0x1Eu << 24));
+ wqe->aso_dseg.mtrs[0].ebs_eir = RTE_BE32((0x1u << 16) | (0x1Eu << 24));
+ wqe->aso_dseg.mtrs[1].v_bo_sc_bbog_mm = ASO_MTR_DW0;
+ wqe->aso_dseg.mtrs[1].cbs_cir = RTE_BE32((0x1u << 16) | (0x1Eu << 24));
+ wqe->aso_dseg.mtrs[1].ebs_eir = RTE_BE32((0x1u << 16) | (0x1Eu << 24));
+#undef ASO_MTR_DW0
+}
+
+static void
+mlx5_quota_init_sq(struct mlx5_aso_sq *sq)
+{
+ uint32_t i, size = 1 << sq->log_desc_n;
+
+ for (i = 0; i < size; i++)
+ mlx5_quota_wqe_init_common(sq, sq->sq_obj.aso_wqes + i);
+}
+
+static int
+mlx5_quota_alloc_sq(struct mlx5_priv *priv)
+{
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t i, nb_queues = priv->nb_queue;
+
+ qctx->sq = mlx5_malloc(MLX5_MEM_ZERO,
+ sizeof(qctx->sq[0]) * nb_queues,
+ 0, SOCKET_ID_ANY);
+ if (!qctx->sq) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate SQ pool");
+ return -ENOMEM;
+ }
+ for (i = 0; i < nb_queues; i++) {
+ int ret = mlx5_aso_sq_create
+ (sh->cdev, qctx->sq + i, sh->tx_uar.obj,
+ rte_log2_u32(priv->hw_q[i].size));
+ if (ret) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate SQ[%u]", i);
+ return -ENOMEM;
+ }
+ mlx5_quota_init_sq(qctx->sq + i);
+ }
+ return 0;
+}
+
+static void
+mlx5_quota_destroy_read_buf(struct mlx5_priv *priv)
+{
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+
+ if (qctx->mr.lkey) {
+ void *addr = qctx->mr.addr;
+ sh->cdev->mr_scache.dereg_mr_cb(&qctx->mr);
+ mlx5_free(addr);
+ }
+ if (qctx->read_buf)
+ mlx5_free(qctx->read_buf);
+}
+
+static int
+mlx5_quota_alloc_read_buf(struct mlx5_priv *priv)
+{
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t i, nb_queues = priv->nb_queue;
+ uint32_t sq_size_sum;
+ size_t page_size = rte_mem_page_size();
+ struct mlx5_aso_mtr_dseg *buf;
+ size_t rd_buf_size;
+ int ret;
+
+ for (i = 0, sq_size_sum = 0; i < nb_queues; i++)
+ sq_size_sum += priv->hw_q[i].size;
+ /* ACCESS MTR ASO WQE reads 2 MTR objects */
+ rd_buf_size = 2 * sq_size_sum * sizeof(buf[0]);
+ buf = mlx5_malloc(MLX5_MEM_ANY | MLX5_MEM_ZERO, rd_buf_size,
+ page_size, SOCKET_ID_ANY);
+ if (!buf) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate MTR ASO READ buffer [1]");
+ return -ENOMEM;
+ }
+ ret = sh->cdev->mr_scache.reg_mr_cb(sh->cdev->pd, buf,
+ rd_buf_size, &qctx->mr);
+ if (ret) {
+ DRV_LOG(DEBUG, "QUOTA: failed to register MTR ASO READ MR");
+ return -errno;
+ }
+ qctx->read_buf = mlx5_malloc(MLX5_MEM_ZERO,
+ sizeof(qctx->read_buf[0]) * nb_queues,
+ 0, SOCKET_ID_ANY);
+ if (!qctx->read_buf) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate MTR ASO READ buffer [2]");
+ return -ENOMEM;
+ }
+ for (i = 0; i < nb_queues; i++) {
+ qctx->read_buf[i] = buf;
+ buf += 2 * priv->hw_q[i].size;
+ }
+ return 0;
+}
+
+static __rte_always_inline int
+mlx5_quota_check_ready(struct mlx5_quota *qobj, struct rte_flow_error *error)
+{
+ uint8_t state = MLX5_QUOTA_STATE_READY;
+ bool verdict = __atomic_compare_exchange_n
+ (&qobj->state, &state, MLX5_QUOTA_STATE_WAIT, false,
+ __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+
+ if (!verdict)
+ return rte_flow_error_set(error, EBUSY,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "action is busy");
+ return 0;
+}
+
+int
+mlx5_quota_query(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_handle *handle,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t work_queue = !is_quota_sync_queue(priv, queue) ?
+ queue : quota_sync_queue(priv);
+ uint32_t id = MLX5_INDIRECT_ACTION_IDX_GET(handle);
+ uint32_t qix = id - 1;
+ struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, id);
+ struct mlx5_hw_q_job sync_job;
+ int ret;
+
+ if (!qobj)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "invalid query handle");
+ ret = mlx5_quota_check_ready(qobj, error);
+ if (ret)
+ return ret;
+ ret = mlx5_quota_cmd_wqe(dev, qobj, mlx5_quota_wqe_query, qix, work_queue,
+ async_job ? async_job : &sync_job, push, NULL);
+ if (ret) {
+ __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_READY,
+ __ATOMIC_RELAXED);
+ return rte_flow_error_set(error, EAGAIN,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "try again");
+ }
+ if (is_quota_sync_queue(priv, queue))
+ query->quota = mlx5_quota_fetch_tokens(sync_job.query.hw);
+ return 0;
+}
+
+int
+mlx5_quota_query_update(struct rte_eth_dev *dev, uint32_t queue,
+ struct rte_flow_action_handle *handle,
+ const struct rte_flow_action *update,
+ struct rte_flow_query_quota *query,
+ struct mlx5_hw_q_job *async_job, bool push,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ const struct rte_flow_update_quota *conf = update->conf;
+ uint32_t work_queue = !is_quota_sync_queue(priv, queue) ?
+ queue : quota_sync_queue(priv);
+ uint32_t id = MLX5_INDIRECT_ACTION_IDX_GET(handle);
+ uint32_t qix = id - 1;
+ struct mlx5_quota *qobj = mlx5_ipool_get(qctx->quota_ipool, id);
+ struct mlx5_hw_q_job sync_job;
+ quota_wqe_cmd_t wqe_cmd = query ?
+ mlx5_quota_wqe_query_update :
+ mlx5_quota_wqe_update;
+ int ret;
+
+ if (conf->quota > MLX5_MTR_MAX_TOKEN_VALUE)
+ return rte_flow_error_set(error, E2BIG,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "update value too big");
+ if (!qobj)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "invalid query_update handle");
+ if (conf->op == RTE_FLOW_UPDATE_QUOTA_ADD &&
+ qobj->last_update == RTE_FLOW_UPDATE_QUOTA_ADD)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "cannot add twice");
+ ret = mlx5_quota_check_ready(qobj, error);
+ if (ret)
+ return ret;
+ ret = mlx5_quota_cmd_wqe(dev, qobj, wqe_cmd, qix, work_queue,
+ async_job ? async_job : &sync_job, push,
+ (void *)(uintptr_t)update->conf);
+ if (ret) {
+ __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_READY,
+ __ATOMIC_RELAXED);
+ return rte_flow_error_set(error, EAGAIN,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL, "try again");
+ }
+ qobj->last_update = conf->op;
+ if (query && is_quota_sync_queue(priv, queue))
+ query->quota = mlx5_quota_fetch_tokens(sync_job.query.hw);
+ return 0;
+}
+
+struct rte_flow_action_handle *
+mlx5_quota_alloc(struct rte_eth_dev *dev, uint32_t queue,
+ const struct rte_flow_action_quota *conf,
+ struct mlx5_hw_q_job *job, bool push,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ uint32_t id;
+ struct mlx5_quota *qobj;
+ uintptr_t handle = (uintptr_t)MLX5_INDIRECT_ACTION_TYPE_QUOTA <<
+ MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+ uint32_t work_queue = !is_quota_sync_queue(priv, queue) ?
+ queue : quota_sync_queue(priv);
+ struct mlx5_hw_q_job sync_job;
+ uint8_t state = MLX5_QUOTA_STATE_FREE;
+ bool verdict;
+ int ret;
+
+ qobj = mlx5_ipool_malloc(qctx->quota_ipool, &id);
+ if (!qobj) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "quota: failed to allocate quota object");
+ return NULL;
+ }
+ verdict = __atomic_compare_exchange_n
+ (&qobj->state, &state, MLX5_QUOTA_STATE_WAIT, false,
+ __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ if (!verdict) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "quota: new quota object has invalid state");
+ return NULL;
+ }
+ switch (conf->mode) {
+ case RTE_FLOW_QUOTA_MODE_L2:
+ qobj->mode = MLX5_METER_MODE_L2_LEN;
+ break;
+ case RTE_FLOW_QUOTA_MODE_PACKET:
+ qobj->mode = MLX5_METER_MODE_PKT;
+ break;
+ default:
+ qobj->mode = MLX5_METER_MODE_IP_LEN;
+ }
+ ret = mlx5_quota_cmd_wqe(dev, qobj, mlx5_quota_set_init_wqe, id - 1,
+ work_queue, job ? job : &sync_job, push,
+ (void *)(uintptr_t)conf);
+ if (ret) {
+ mlx5_ipool_free(qctx->quota_ipool, id);
+ __atomic_store_n(&qobj->state, MLX5_QUOTA_STATE_FREE,
+ __ATOMIC_RELAXED);
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "quota: WR failure");
+ return NULL;
+ }
+ return (struct rte_flow_action_handle *)(handle | id);
+}
+
+int
+mlx5_flow_quota_destroy(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ int ret;
+
+ if (qctx->quota_ipool)
+ mlx5_ipool_destroy(qctx->quota_ipool);
+ mlx5_quota_destroy_sq(priv);
+ mlx5_quota_destroy_read_buf(priv);
+ if (qctx->dr_action) {
+ ret = mlx5dr_action_destroy(qctx->dr_action);
+ if (ret)
+ DRV_LOG(ERR, "QUOTA: failed to destroy DR action");
+ }
+ if (qctx->devx_obj) {
+ ret = mlx5_devx_cmd_destroy(qctx->devx_obj);
+ if (ret)
+ DRV_LOG(ERR, "QUOTA: failed to destroy MTR ASO object");
+ }
+ memset(qctx, 0, sizeof(*qctx));
+ return 0;
+}
+
+#define MLX5_QUOTA_IPOOL_TRUNK_SIZE (1u << 12)
+#define MLX5_QUOTA_IPOOL_CACHE_SIZE (1u << 13)
+int
+mlx5_flow_quota_init(struct rte_eth_dev *dev, uint32_t nb_quotas)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ struct mlx5_quota_ctx *qctx = &priv->quota_ctx;
+ int reg_id = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
+ uint32_t flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+ struct mlx5_indexed_pool_config quota_ipool_cfg = {
+ .size = sizeof(struct mlx5_quota),
+ .trunk_size = RTE_MIN(nb_quotas, MLX5_QUOTA_IPOOL_TRUNK_SIZE),
+ .need_lock = 1,
+ .release_mem_en = !!priv->sh->config.reclaim_mode,
+ .malloc = mlx5_malloc,
+ .max_idx = nb_quotas,
+ .free = mlx5_free,
+ .type = "mlx5_flow_quota_index_pool"
+ };
+ int ret;
+
+ if (!nb_quotas) {
+ DRV_LOG(DEBUG, "QUOTA: cannot create quota with 0 objects");
+ return -EINVAL;
+ }
+ if (!priv->mtr_en || !sh->meter_aso_en) {
+ DRV_LOG(DEBUG, "QUOTA: no MTR support");
+ return -ENOTSUP;
+ }
+ if (reg_id < 0) {
+ DRV_LOG(DEBUG, "QUOTA: MRT register not available");
+ return -ENOTSUP;
+ }
+ qctx->devx_obj = mlx5_devx_cmd_create_flow_meter_aso_obj
+ (sh->cdev->ctx, sh->cdev->pdn, rte_log2_u32(nb_quotas >> 1));
+ if (!qctx->devx_obj) {
+ DRV_LOG(DEBUG, "QUOTA: cannot allocate MTR ASO objects");
+ return -ENOMEM;
+ }
+ if (sh->config.dv_esw_en && priv->master)
+ flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
+ qctx->dr_action = mlx5dr_action_create_aso_meter
+ (priv->dr_ctx, (struct mlx5dr_devx_obj *)qctx->devx_obj,
+ reg_id - REG_C_0, flags);
+ if (!qctx->dr_action) {
+ DRV_LOG(DEBUG, "QUOTA: failed to create DR action");
+ ret = -ENOMEM;
+ goto err;
+ }
+ ret = mlx5_quota_alloc_read_buf(priv);
+ if (ret)
+ goto err;
+ ret = mlx5_quota_alloc_sq(priv);
+ if (ret)
+ goto err;
+ if (nb_quotas < MLX5_QUOTA_IPOOL_TRUNK_SIZE)
+ quota_ipool_cfg.per_core_cache = 0;
+ else if (nb_quotas < MLX5_HW_IPOOL_SIZE_THRESHOLD)
+ quota_ipool_cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
+ else
+ quota_ipool_cfg.per_core_cache = MLX5_QUOTA_IPOOL_CACHE_SIZE;
+ qctx->quota_ipool = mlx5_ipool_create("a_ipool_cfg);
+ if (!qctx->quota_ipool) {
+ DRV_LOG(DEBUG, "QUOTA: failed to allocate quota pool");
+ ret = -ENOMEM;
+ goto err;
+ }
+ qctx->nb_quotas = nb_quotas;
+ return 0;
+err:
+ mlx5_flow_quota_destroy(dev);
+ return ret;
+}
+#endif /* defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) */
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
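The patch above wires the indirect QUOTA action into the synchronous and
asynchronous rte_flow driver ops. A minimal sketch of how an application
might exercise that path once the series is applied follows; it is not part
of the patch itself. The port id and budget values are made-up assumptions,
and the structure and function names are taken from the rte_flow
quota/query_update API this series targets, so the exact signatures should
be verified against the installed rte_flow.h.

#include <stdio.h>

#include <rte_errno.h>
#include <rte_flow.h>

/*
 * Illustrative only: create an indirect QUOTA action with an L2-byte
 * budget, then query the remaining tokens and top the budget up in one
 * query-first call.
 */
static int
quota_usage_example(uint16_t port_id)
{
	struct rte_flow_error error;
	const struct rte_flow_indir_action_conf indir_conf = {
		.ingress = 1,
	};
	const struct rte_flow_action_quota quota_conf = {
		.mode = RTE_FLOW_QUOTA_MODE_L2, /* count L2 frame bytes */
		.quota = 1U << 20,              /* initial budget (assumed bytes) */
	};
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_QUOTA,
		.conf = &quota_conf,
	};
	const struct rte_flow_update_quota update = {
		.op = RTE_FLOW_UPDATE_QUOTA_ADD, /* add tokens to the budget */
		.quota = 1U << 20,
	};
	struct rte_flow_query_quota query = { 0 };
	struct rte_flow_action_handle *handle;
	int ret;

	handle = rte_flow_action_handle_create(port_id, &indir_conf,
					       &action, &error);
	if (handle == NULL)
		return -rte_errno;
	/* The PMD accepts only RTE_FLOW_QU_QUERY_FIRST for quota. */
	ret = rte_flow_action_handle_query_update(port_id, handle,
						  &update, &query,
						  RTE_FLOW_QU_QUERY_FIRST,
						  &error);
	if (ret == 0)
		printf("tokens left before update: %lld\n",
		       (long long)query.quota);
	return ret;
}

The asynchronous variants added by this patch take the same configuration
structures, plus a flow queue id and struct rte_flow_op_attr.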
* [PATCH v3 5/5] mlx5dr: Definer, translate RTE quota item
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
` (3 preceding siblings ...)
2023-05-07 7:39 ` [PATCH v3 4/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
@ 2023-05-07 7:39 ` Gregory Etelson
2023-05-25 14:18 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Raslan Darawsheh
5 siblings, 0 replies; 20+ messages in thread
From: Gregory Etelson @ 2023-05-07 7:39 UTC (permalink / raw)
To: dev; +Cc: getelson, mkashani, rasland, Viacheslav Ovsiienko, Matan Azrad
MLX5 PMD implements QUOTA with Meter object.
PMD Quota action translation implicitly increments
Meter register value after HW assigns it.
Meter register values are:
HW QUOTA(HW+1) QUOTA state
RED 0 1 (01b) BLOCK
YELLOW 1 2 (10b) PASS
GREEN 2 3 (11b) PASS
Quota item checks Meter register bit 1 value to determine state:
SPEC MASK
PASS 2 (10b) 2 (10b)
BLOCK 0 (00b) 2 (10b)
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
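Not part of the patch: as a quick standalone illustration of the table
above, the snippet below reproduces the "HW color plus one" translation
and the bit-1 test the quota item relies on. The enum and its values are
invented for the example and are not taken from driver or rte_flow headers.

#include <stdio.h>

enum hw_meter_color { HW_RED = 0, HW_YELLOW = 1, HW_GREEN = 2 };

int
main(void)
{
	static const char *const names[] = { "RED", "YELLOW", "GREEN" };
	unsigned int color;

	for (color = HW_RED; color <= HW_GREEN; color++) {
		unsigned int reg = color + 1;	/* QUOTA = HW + 1 */
		int pass = (reg & 0x2) != 0;	/* quota item tests bit 1 (mask 10b) */

		printf("%-6s -> register %u (%u%ub): %s\n", names[color], reg,
		       (reg >> 1) & 1U, reg & 1U, pass ? "PASS" : "BLOCK");
	}
	return 0;
}

Running it prints BLOCK only for RED, matching the SPEC/MASK table in the
commit message.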
drivers/net/mlx5/hws/mlx5dr_definer.c | 63 +++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index f92d3e8e1f..2d505f1908 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -21,6 +21,9 @@
#define STE_UDP 0x2
#define STE_ICMP 0x3
+#define MLX5DR_DEFINER_QUOTA_BLOCK 0
+#define MLX5DR_DEFINER_QUOTA_PASS 2
+
/* Setter function based on bit offset and mask, for 32bit DW*/
#define _DR_SET_32(p, v, byte_off, bit_off, mask) \
do { \
@@ -1447,6 +1450,62 @@ mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd,
return 0;
}
+static void
+mlx5dr_definer_quota_set(struct mlx5dr_definer_fc *fc,
+ const void *item_data, uint8_t *tag)
+{
+ /**
+ * MLX5 PMD implements QUOTA with Meter object.
+ * PMD Quota action translation implicitly increments
+ * Meter register value after HW assigns it.
+ * Meter register values are:
+ * HW QUOTA(HW+1) QUOTA state
+ * RED 0 1 (01b) BLOCK
+ * YELLOW 1 2 (10b) PASS
+ * GREEN 2 3 (11b) PASS
+ *
+ * Quota item checks Meter register bit 1 value to determine state:
+ * SPEC MASK
+ * PASS 2 (10b) 2 (10b)
+ * BLOCK 0 (00b) 2 (10b)
+ *
+ * item_data is NULL when template quota item is non-masked:
+ * .. / quota / ..
+ */
+
+ const struct rte_flow_item_quota *quota = item_data;
+ uint32_t val;
+
+ if (quota && quota->state == RTE_FLOW_QUOTA_STATE_BLOCK)
+ val = MLX5DR_DEFINER_QUOTA_BLOCK;
+ else
+ val = MLX5DR_DEFINER_QUOTA_PASS;
+
+ DR_SET(tag, val, fc->byte_off, fc->bit_off, fc->bit_mask);
+}
+
+static int
+mlx5dr_definer_conv_item_quota(struct mlx5dr_definer_conv_data *cd,
+ __rte_unused struct rte_flow_item *item,
+ int item_idx)
+{
+ int mtr_reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ struct mlx5dr_definer_fc *fc;
+
+ if (mtr_reg < 0) {
+ rte_errno = EINVAL;
+ return rte_errno;
+ }
+
+ fc = mlx5dr_definer_get_register_fc(cd, mtr_reg);
+ if (!fc)
+ return rte_errno;
+
+ fc->tag_set = &mlx5dr_definer_quota_set;
+ fc->item_idx = item_idx;
+ return 0;
+}
+
static int
mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
@@ -2163,6 +2222,10 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
break;
+ case RTE_FLOW_ITEM_TYPE_QUOTA:
+ ret = mlx5dr_definer_conv_item_quota(&cd, items, i);
+ item_flags |= MLX5_FLOW_ITEM_QUOTA;
+ break;
case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
ret = mlx5dr_definer_conv_item_ipv6_routing_ext(&cd, items, i);
item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH v3 0/5] net/mlx5: support indirect quota flow action
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
` (4 preceding siblings ...)
2023-05-07 7:39 ` [PATCH v3 5/5] mlx5dr: Definer, translate RTE quota item Gregory Etelson
@ 2023-05-25 14:18 ` Raslan Darawsheh
5 siblings, 0 replies; 20+ messages in thread
From: Raslan Darawsheh @ 2023-05-25 14:18 UTC (permalink / raw)
To: Gregory Etelson, dev; +Cc: Maayan Kashani
Hi,
> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Sunday, May 7, 2023 10:40 AM
> To: dev@dpdk.org
> Cc: Gregory Etelson <getelson@nvidia.com>; Maayan Kashani
> <mkashani@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v3 0/5] net/mlx5: support indirect quota flow action
>
> 1. Prepare MLX5 PMD for upcoming indirect quota action.
> 2. Support query_update API.
> 3. Support indirect quota action.
>
> v3: prepare patches for dpdk-23.07
>
> Gregory Etelson (5):
> net/mlx5: update query fields in async job structure
> net/mlx5: remove code duplication
> common/mlx5: update MTR ASO definitions
> net/mlx5: add indirect QUOTA create/query/modify
> mlx5dr: Definer, translate RTE quota item
Changed title for this commit
>
> doc/guides/nics/features/mlx5.ini | 2 +
> doc/guides/nics/mlx5.rst | 10 +
> doc/guides/rel_notes/release_23_07.rst | 4 +
> drivers/common/mlx5/mlx5_prm.h | 4 +
> drivers/net/mlx5/hws/mlx5dr_definer.c | 63 +++
> drivers/net/mlx5/meson.build | 1 +
> drivers/net/mlx5/mlx5.h | 88 ++-
> drivers/net/mlx5/mlx5_flow.c | 62 +++
> drivers/net/mlx5/mlx5_flow.h | 20 +-
> drivers/net/mlx5/mlx5_flow_aso.c | 10 +-
> drivers/net/mlx5/mlx5_flow_hw.c | 526 ++++++++++++------
> drivers/net/mlx5/mlx5_flow_quota.c | 726
> +++++++++++++++++++++++++
> 12 files changed, 1335 insertions(+), 181 deletions(-) create mode 100644
> drivers/net/mlx5/mlx5_flow_quota.c
>
> --
> 2.34.1
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread
Thread overview: 20+ messages
2023-01-18 12:55 [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
2023-01-18 12:55 ` [PATCH 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
2023-01-18 12:55 ` [PATCH 2/5] net/mlx5: remove code duplication Gregory Etelson
2023-01-18 12:55 ` [PATCH 3/5] common/mlx5: update MTR ASO definitions Gregory Etelson
2023-01-18 12:55 ` [PATCH 4/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
2023-01-18 12:55 ` [PATCH 5/5] mlx5dr: Definer, translate RTE quota item Gregory Etelson
2023-03-08 2:58 ` [PATCH 0/5] net/mlx5: add indirect QUOTA create/query/modify Suanming Mou
2023-03-08 17:01 ` [PATCH v2 " Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 2/5] net/mlx5: remove code duplication Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 3/5] common/mlx5: update MTR ASO definitions Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 4/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
2023-03-08 17:01 ` [PATCH v2 5/5] mlx5dr: Definer, translate RTE quota item Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 1/5] net/mlx5: update query fields in async job structure Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 2/5] net/mlx5: remove code duplication Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 3/5] common/mlx5: update MTR ASO definitions Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 4/5] net/mlx5: add indirect QUOTA create/query/modify Gregory Etelson
2023-05-07 7:39 ` [PATCH v3 5/5] mlx5dr: Definer, translate RTE quota item Gregory Etelson
2023-05-25 14:18 ` [PATCH v3 0/5] net/mlx5: support indirect quota flow action Raslan Darawsheh