* [dpdk-dev] [PATCH 1/7] net/sfc: make flow RSS details VNIC-specific
From: Andrew Rybchenko @ 2020-03-05 10:47 UTC (permalink / raw)
To: dev; +Cc: Ivan Malov
From: Ivan Malov <ivan.malov@oktetlabs.ru>
The flow specification structure will be transformed into a generic one, and
its current contents will be fenced off to form VNIC-specific parameters.
Flow RSS details are currently kept outside the said specification, despite
being VNIC-specific. This patch addresses the issue as a preparation step.
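In outline, the change results in the following layout (a sketch distilled
from the hunks below, not a verbatim copy of the header):

    /* Filter specification storage */
    struct sfc_flow_spec {
            /* ... filter template, filters[] and count as before ... */
            boolean_t rss;                /* RSS toggle, moved from rte_flow */
            struct sfc_flow_rss rss_conf; /* RSS configuration, moved from rte_flow */
    };

    /* PMD-specific definition of the opaque type from rte_flow.h */
    struct rte_flow {
            struct sfc_flow_spec spec;    /* now also carries the RSS details */
            TAILQ_ENTRY(rte_flow) entries;
    };

Call sites switch from flow->rss and flow->rss_conf to flow->spec.rss and
flow->spec.rss_conf accordingly.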
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
drivers/net/sfc/sfc_flow.c | 12 ++++++------
drivers/net/sfc/sfc_flow.h | 6 ++++--
2 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 8d636f692..f285ba552 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -1262,7 +1262,7 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
unsigned int rxq_hw_index_max;
efx_rx_hash_type_t efx_hash_types;
const uint8_t *rss_key;
- struct sfc_flow_rss *sfc_rss_conf = &flow->rss_conf;
+ struct sfc_flow_rss *sfc_rss_conf = &flow->spec.rss_conf;
unsigned int i;
if (action_rss->queue_num == 0)
@@ -1334,7 +1334,7 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
rss_key = rss->key;
}
- flow->rss = B_TRUE;
+ flow->spec.rss = B_TRUE;
sfc_rss_conf->rxq_hw_index_min = rxq_hw_index_min;
sfc_rss_conf->rxq_hw_index_max = rxq_hw_index_max;
@@ -1402,12 +1402,12 @@ sfc_flow_filter_insert(struct sfc_adapter *sa,
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_rss *rss = &sas->rss;
- struct sfc_flow_rss *flow_rss = &flow->rss_conf;
+ struct sfc_flow_rss *flow_rss = &flow->spec.rss_conf;
uint32_t efs_rss_context = EFX_RSS_CONTEXT_DEFAULT;
unsigned int i;
int rc = 0;
- if (flow->rss) {
+ if (flow->spec.rss) {
unsigned int rss_spread = MIN(flow_rss->rxq_hw_index_max -
flow_rss->rxq_hw_index_min + 1,
EFX_MAXRSS);
@@ -1450,7 +1450,7 @@ sfc_flow_filter_insert(struct sfc_adapter *sa,
if (rc != 0)
goto fail_filter_insert;
- if (flow->rss) {
+ if (flow->spec.rss) {
/*
* Scale table is set after filter insertion because
* the table entries are relative to the base RxQ ID
@@ -1491,7 +1491,7 @@ sfc_flow_filter_remove(struct sfc_adapter *sa,
if (rc != 0)
return rc;
- if (flow->rss) {
+ if (flow->spec.rss) {
/*
* All specifications for a given flow rule have the same RSS
* context, so that RSS context value is taken from the first
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index 71ec18cb9..14a6b5d14 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -43,13 +43,15 @@ struct sfc_flow_spec {
efx_filter_spec_t filters[SF_FLOW_SPEC_NB_FILTERS_MAX];
/* number of complete specifications */
unsigned int count;
+ /* RSS toggle */
+ boolean_t rss;
+ /* RSS configuration */
+ struct sfc_flow_rss rss_conf;
};
/* PMD-specific definition of the opaque type from rte_flow.h */
struct rte_flow {
struct sfc_flow_spec spec; /* flow spec for hardware filter(s) */
- boolean_t rss; /* RSS toggle */
- struct sfc_flow_rss rss_conf; /* RSS configuration */
TAILQ_ENTRY(rte_flow) entries; /* flow list entries */
};
--
2.17.1
* [dpdk-dev] [PATCH 2/7] net/sfc: make the flow list engine-agnostic
From: Andrew Rybchenko @ 2020-03-05 10:47 UTC (permalink / raw)
To: dev; +Cc: Ivan Malov
From: Ivan Malov <ivan.malov@oktetlabs.ru>
The backend which a driver employs to handle flow rules of a given
type depends on the underlying NIC flow engine. In its current state,
the driver in question is tailored to support a single flow engine,
VNIC filtering. As the need arises to add support for transfer rules,
the driver has to be reworked so that it becomes possible to
introduce yet another backend.
As a preparation step, make the flow list shared
between different engines.
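Roughly, the list moves from the filter-specific context to the adapter
(a sketch distilled from the hunks below):

    struct sfc_adapter {
            /* ... */
            struct sfc_filter filter;
            struct sfc_flow_list flow_list;  /* previously sfc_filter::flow_list */
            /* ... */
    };

All users now reference the adapter-level list, for example:

    TAILQ_INSERT_TAIL(&sa->flow_list, flow, entries);

    TAILQ_FOREACH(flow, &sa->flow_list, entries)
            sfc_flow_filter_remove(sa, flow);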
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
drivers/net/sfc/sfc.h | 2 ++
drivers/net/sfc/sfc_filter.h | 2 --
drivers/net/sfc/sfc_flow.c | 20 ++++++++++----------
3 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index cc5222877..2520cf2b7 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -242,6 +242,8 @@ struct sfc_adapter {
struct sfc_port port;
struct sfc_filter filter;
+ struct sfc_flow_list flow_list;
+
unsigned int rxq_max;
unsigned int txq_max;
diff --git a/drivers/net/sfc/sfc_filter.h b/drivers/net/sfc/sfc_filter.h
index 64ab114e0..6a0254813 100644
--- a/drivers/net/sfc/sfc_filter.h
+++ b/drivers/net/sfc/sfc_filter.h
@@ -23,8 +23,6 @@ struct sfc_filter {
size_t supported_match_num;
/** Driver cache of supported filter match masks */
uint32_t *supported_match;
- /** List of flow rules */
- struct sfc_flow_list flow_list;
/**
* Supports any of ip_proto, remote host or local host
* filters. This flag is used for filter match exceptions
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index f285ba552..0826032e0 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -2317,7 +2317,7 @@ sfc_flow_create(struct rte_eth_dev *dev,
sfc_adapter_lock(sa);
- TAILQ_INSERT_TAIL(&sa->filter.flow_list, flow, entries);
+ TAILQ_INSERT_TAIL(&sa->flow_list, flow, entries);
if (sa->state == SFC_ADAPTER_STARTED) {
rc = sfc_flow_filter_insert(sa, flow);
@@ -2334,7 +2334,7 @@ sfc_flow_create(struct rte_eth_dev *dev,
return flow;
fail_filter_insert:
- TAILQ_REMOVE(&sa->filter.flow_list, flow, entries);
+ TAILQ_REMOVE(&sa->flow_list, flow, entries);
fail_bad_value:
rte_free(flow);
@@ -2361,7 +2361,7 @@ sfc_flow_remove(struct sfc_adapter *sa,
"Failed to destroy flow rule");
}
- TAILQ_REMOVE(&sa->filter.flow_list, flow, entries);
+ TAILQ_REMOVE(&sa->flow_list, flow, entries);
rte_free(flow);
return rc;
@@ -2378,7 +2378,7 @@ sfc_flow_destroy(struct rte_eth_dev *dev,
sfc_adapter_lock(sa);
- TAILQ_FOREACH(flow_ptr, &sa->filter.flow_list, entries) {
+ TAILQ_FOREACH(flow_ptr, &sa->flow_list, entries) {
if (flow_ptr == flow)
rc = 0;
}
@@ -2408,7 +2408,7 @@ sfc_flow_flush(struct rte_eth_dev *dev,
sfc_adapter_lock(sa);
- while ((flow = TAILQ_FIRST(&sa->filter.flow_list)) != NULL) {
+ while ((flow = TAILQ_FIRST(&sa->flow_list)) != NULL) {
rc = sfc_flow_remove(sa, flow, error);
if (rc != 0)
ret = rc;
@@ -2454,7 +2454,7 @@ sfc_flow_init(struct sfc_adapter *sa)
{
SFC_ASSERT(sfc_adapter_is_locked(sa));
- TAILQ_INIT(&sa->filter.flow_list);
+ TAILQ_INIT(&sa->flow_list);
}
void
@@ -2464,8 +2464,8 @@ sfc_flow_fini(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- while ((flow = TAILQ_FIRST(&sa->filter.flow_list)) != NULL) {
- TAILQ_REMOVE(&sa->filter.flow_list, flow, entries);
+ while ((flow = TAILQ_FIRST(&sa->flow_list)) != NULL) {
+ TAILQ_REMOVE(&sa->flow_list, flow, entries);
rte_free(flow);
}
}
@@ -2477,7 +2477,7 @@ sfc_flow_stop(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- TAILQ_FOREACH(flow, &sa->filter.flow_list, entries)
+ TAILQ_FOREACH(flow, &sa->flow_list, entries)
sfc_flow_filter_remove(sa, flow);
}
@@ -2491,7 +2491,7 @@ sfc_flow_start(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- TAILQ_FOREACH(flow, &sa->filter.flow_list, entries) {
+ TAILQ_FOREACH(flow, &sa->flow_list, entries) {
rc = sfc_flow_filter_insert(sa, flow);
if (rc != 0)
goto fail_bad_flow;
--
2.17.1
* [dpdk-dev] [PATCH 3/7] net/sfc: generalise the flow specification structure
From: Andrew Rybchenko @ 2020-03-05 10:47 UTC (permalink / raw)
To: dev; +Cc: Ivan Malov
From: Ivan Malov <ivan.malov@oktetlabs.ru>
Add the concept of a flow specification type.
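In a nutshell, the specification becomes a tagged union (a sketch distilled
from the sfc_flow.h hunks below):

    /* Flow engines supported by the implementation */
    enum sfc_flow_spec_type {
            SFC_FLOW_SPEC_FILTER = 0,

            SFC_FLOW_SPEC_NTYPES
    };

    /* Flow specification */
    struct sfc_flow_spec {
            enum sfc_flow_spec_type type;   /* engine-based discriminator */

            union {
                    /* Filter-based (VNIC level flows) specification */
                    struct sfc_flow_spec_filter filter;
            };
    };

The existing VNIC filter fields move under spec->filter, attribute parsing
sets spec->type to SFC_FLOW_SPEC_FILTER, and further engines can be added
later as new union members.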
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
drivers/net/sfc/sfc_flow.c | 137 +++++++++++++++++++++++--------------
drivers/net/sfc/sfc_flow.h | 25 ++++++-
2 files changed, 106 insertions(+), 56 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 0826032e0..9be1f9ac8 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -25,8 +25,8 @@
#include "sfc_dp_rx.h"
/*
- * At now flow API is implemented in such a manner that each
- * flow rule is converted to one or more hardware filters.
+ * Currently, filter-based (VNIC) flow API is implemented in such a manner
+ * that each flow rule is converted to one or more hardware filters.
* All elements of flow rule (attributes, pattern items, actions)
* correspond to one or more fields in the efx_filter_spec_s structure
* that is responsible for the hardware filter.
@@ -1093,6 +1093,9 @@ sfc_flow_parse_attr(const struct rte_flow_attr *attr,
struct rte_flow *flow,
struct rte_flow_error *error)
{
+ struct sfc_flow_spec *spec = &flow->spec;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
+
if (attr == NULL) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ATTR, NULL,
@@ -1130,8 +1133,9 @@ sfc_flow_parse_attr(const struct rte_flow_attr *attr,
return -rte_errno;
}
- flow->spec.template.efs_flags |= EFX_FILTER_FLAG_RX;
- flow->spec.template.efs_rss_context = EFX_RSS_CONTEXT_DEFAULT;
+ spec->type = SFC_FLOW_SPEC_FILTER;
+ spec_filter->template.efs_flags |= EFX_FILTER_FLAG_RX;
+ spec_filter->template.efs_rss_context = EFX_RSS_CONTEXT_DEFAULT;
return 0;
}
@@ -1158,6 +1162,8 @@ sfc_flow_parse_pattern(const struct rte_flow_item pattern[],
unsigned int prev_layer = SFC_FLOW_ITEM_ANY_LAYER;
boolean_t is_ifrm = B_FALSE;
const struct sfc_flow_item *item;
+ struct sfc_flow_spec *spec = &flow->spec;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
if (pattern == NULL) {
rte_flow_error_set(error, EINVAL,
@@ -1222,7 +1228,7 @@ sfc_flow_parse_pattern(const struct rte_flow_item pattern[],
break;
}
- rc = item->parse(pattern, &flow->spec.template, error);
+ rc = item->parse(pattern, &spec_filter->template, error);
if (rc != 0)
return rc;
@@ -1238,13 +1244,15 @@ sfc_flow_parse_queue(struct sfc_adapter *sa,
const struct rte_flow_action_queue *queue,
struct rte_flow *flow)
{
+ struct sfc_flow_spec *spec = &flow->spec;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
struct sfc_rxq *rxq;
if (queue->index >= sfc_sa2shared(sa)->rxq_count)
return -EINVAL;
rxq = &sa->rxq_ctrl[queue->index];
- flow->spec.template.efs_dmaq_id = (uint16_t)rxq->hw_index;
+ spec_filter->template.efs_dmaq_id = (uint16_t)rxq->hw_index;
return 0;
}
@@ -1262,7 +1270,9 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
unsigned int rxq_hw_index_max;
efx_rx_hash_type_t efx_hash_types;
const uint8_t *rss_key;
- struct sfc_flow_rss *sfc_rss_conf = &flow->spec.rss_conf;
+ struct sfc_flow_spec *spec = &flow->spec;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
+ struct sfc_flow_rss *sfc_rss_conf = &spec_filter->rss_conf;
unsigned int i;
if (action_rss->queue_num == 0)
@@ -1306,7 +1316,7 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
*/
if (action_rss->queue_num == 1 && action_rss->types == 0 &&
action_rss->key_len == 0) {
- flow->spec.template.efs_dmaq_id = rxq_hw_index_min;
+ spec_filter->template.efs_dmaq_id = rxq_hw_index_min;
return 0;
}
@@ -1334,7 +1344,7 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
rss_key = rss->key;
}
- flow->spec.rss = B_TRUE;
+ spec_filter->rss = B_TRUE;
sfc_rss_conf->rxq_hw_index_min = rxq_hw_index_min;
sfc_rss_conf->rxq_hw_index_max = rxq_hw_index_max;
@@ -1356,13 +1366,14 @@ static int
sfc_flow_spec_flush(struct sfc_adapter *sa, struct sfc_flow_spec *spec,
unsigned int filters_count)
{
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
unsigned int i;
int ret = 0;
for (i = 0; i < filters_count; i++) {
int rc;
- rc = efx_filter_remove(sa->nic, &spec->filters[i]);
+ rc = efx_filter_remove(sa->nic, &spec_filter->filters[i]);
if (ret == 0 && rc != 0) {
sfc_err(sa, "failed to remove filter specification "
"(rc = %d)", rc);
@@ -1376,11 +1387,12 @@ sfc_flow_spec_flush(struct sfc_adapter *sa, struct sfc_flow_spec *spec,
static int
sfc_flow_spec_insert(struct sfc_adapter *sa, struct sfc_flow_spec *spec)
{
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
unsigned int i;
int rc = 0;
- for (i = 0; i < spec->count; i++) {
- rc = efx_filter_insert(sa->nic, &spec->filters[i]);
+ for (i = 0; i < spec_filter->count; i++) {
+ rc = efx_filter_insert(sa->nic, &spec_filter->filters[i]);
if (rc != 0) {
sfc_flow_spec_flush(sa, spec, i);
break;
@@ -1393,7 +1405,9 @@ sfc_flow_spec_insert(struct sfc_adapter *sa, struct sfc_flow_spec *spec)
static int
sfc_flow_spec_remove(struct sfc_adapter *sa, struct sfc_flow_spec *spec)
{
- return sfc_flow_spec_flush(sa, spec, spec->count);
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
+
+ return sfc_flow_spec_flush(sa, spec, spec_filter->count);
}
static int
@@ -1402,12 +1416,13 @@ sfc_flow_filter_insert(struct sfc_adapter *sa,
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_rss *rss = &sas->rss;
- struct sfc_flow_rss *flow_rss = &flow->spec.rss_conf;
+ struct sfc_flow_spec_filter *spec_filter = &flow->spec.filter;
+ struct sfc_flow_rss *flow_rss = &spec_filter->rss_conf;
uint32_t efs_rss_context = EFX_RSS_CONTEXT_DEFAULT;
unsigned int i;
int rc = 0;
- if (flow->spec.rss) {
+ if (spec_filter->rss) {
unsigned int rss_spread = MIN(flow_rss->rxq_hw_index_max -
flow_rss->rxq_hw_index_min + 1,
EFX_MAXRSS);
@@ -1437,8 +1452,8 @@ sfc_flow_filter_insert(struct sfc_adapter *sa,
* RSS behaviour is consistent between them, set the same
* RSS context value everywhere.
*/
- for (i = 0; i < flow->spec.count; i++) {
- efx_filter_spec_t *spec = &flow->spec.filters[i];
+ for (i = 0; i < spec_filter->count; i++) {
+ efx_filter_spec_t *spec = &spec_filter->filters[i];
spec->efs_rss_context = efs_rss_context;
spec->efs_dmaq_id = flow_rss->rxq_hw_index_min;
@@ -1450,7 +1465,7 @@ sfc_flow_filter_insert(struct sfc_adapter *sa,
if (rc != 0)
goto fail_filter_insert;
- if (flow->spec.rss) {
+ if (spec_filter->rss) {
/*
* Scale table is set after filter insertion because
* the table entries are relative to the base RxQ ID
@@ -1485,19 +1500,20 @@ static int
sfc_flow_filter_remove(struct sfc_adapter *sa,
struct rte_flow *flow)
{
+ struct sfc_flow_spec_filter *spec_filter = &flow->spec.filter;
int rc = 0;
rc = sfc_flow_spec_remove(sa, &flow->spec);
if (rc != 0)
return rc;
- if (flow->spec.rss) {
+ if (spec_filter->rss) {
/*
* All specifications for a given flow rule have the same RSS
* context, so that RSS context value is taken from the first
* filter specification
*/
- efx_filter_spec_t *spec = &flow->spec.filters[0];
+ efx_filter_spec_t *spec = &spec_filter->filters[0];
rc = efx_rx_scale_context_free(sa->nic, spec->efs_rss_context);
}
@@ -1510,13 +1526,15 @@ sfc_flow_parse_mark(struct sfc_adapter *sa,
const struct rte_flow_action_mark *mark,
struct rte_flow *flow)
{
+ struct sfc_flow_spec *spec = &flow->spec;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
if (mark == NULL || mark->id > encp->enc_filter_action_mark_max)
return EINVAL;
- flow->spec.template.efs_flags |= EFX_FILTER_FLAG_ACTION_MARK;
- flow->spec.template.efs_mark = mark->id;
+ spec_filter->template.efs_flags |= EFX_FILTER_FLAG_ACTION_MARK;
+ spec_filter->template.efs_mark = mark->id;
return 0;
}
@@ -1528,6 +1546,8 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
struct rte_flow_error *error)
{
int rc;
+ struct sfc_flow_spec *spec = &flow->spec;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
const unsigned int dp_rx_features = sa->priv.dp_rx->features;
uint32_t actions_set = 0;
const uint32_t fate_actions_mask = (1UL << RTE_FLOW_ACTION_TYPE_QUEUE) |
@@ -1589,7 +1609,7 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
if ((actions_set & fate_actions_mask) != 0)
goto fail_fate_actions;
- flow->spec.template.efs_dmaq_id =
+ spec_filter->template.efs_dmaq_id =
EFX_FILTER_SPEC_RX_DMAQ_ID_DROP;
break;
@@ -1606,7 +1626,7 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
return -rte_errno;
}
- flow->spec.template.efs_flags |=
+ spec_filter->template.efs_flags |=
EFX_FILTER_FLAG_ACTION_FLAG;
break;
@@ -1645,7 +1665,7 @@ sfc_flow_parse_actions(struct sfc_adapter *sa,
/* When fate is unknown, drop traffic. */
if ((actions_set & fate_actions_mask) == 0) {
- flow->spec.template.efs_dmaq_id =
+ spec_filter->template.efs_dmaq_id =
EFX_FILTER_SPEC_RX_DMAQ_ID_DROP;
}
@@ -1682,12 +1702,13 @@ sfc_flow_set_unknown_dst_flags(struct sfc_flow_spec *spec,
struct rte_flow_error *error)
{
unsigned int i;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
static const efx_filter_match_flags_t vals[] = {
EFX_FILTER_MATCH_UNKNOWN_UCAST_DST,
EFX_FILTER_MATCH_UNKNOWN_MCAST_DST
};
- if (filters_count_for_one_val * RTE_DIM(vals) != spec->count) {
+ if (filters_count_for_one_val * RTE_DIM(vals) != spec_filter->count) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"Number of specifications is incorrect while copying "
@@ -1695,9 +1716,9 @@ sfc_flow_set_unknown_dst_flags(struct sfc_flow_spec *spec,
return -rte_errno;
}
- for (i = 0; i < spec->count; i++) {
+ for (i = 0; i < spec_filter->count; i++) {
/* The check above ensures that divisor can't be zero here */
- spec->filters[i].efs_match_flags |=
+ spec_filter->filters[i].efs_match_flags |=
vals[i / filters_count_for_one_val];
}
@@ -1756,11 +1777,12 @@ sfc_flow_set_ethertypes(struct sfc_flow_spec *spec,
struct rte_flow_error *error)
{
unsigned int i;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
static const uint16_t vals[] = {
EFX_ETHER_TYPE_IPV4, EFX_ETHER_TYPE_IPV6
};
- if (filters_count_for_one_val * RTE_DIM(vals) != spec->count) {
+ if (filters_count_for_one_val * RTE_DIM(vals) != spec_filter->count) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"Number of specifications is incorrect "
@@ -1768,15 +1790,15 @@ sfc_flow_set_ethertypes(struct sfc_flow_spec *spec,
return -rte_errno;
}
- for (i = 0; i < spec->count; i++) {
- spec->filters[i].efs_match_flags |=
+ for (i = 0; i < spec_filter->count; i++) {
+ spec_filter->filters[i].efs_match_flags |=
EFX_FILTER_MATCH_ETHER_TYPE;
/*
* The check above ensures that
* filters_count_for_one_val is not 0
*/
- spec->filters[i].efs_ether_type =
+ spec_filter->filters[i].efs_ether_type =
vals[i / filters_count_for_one_val];
}
@@ -1800,9 +1822,10 @@ sfc_flow_set_outer_vid_flag(struct sfc_flow_spec *spec,
unsigned int filters_count_for_one_val,
struct rte_flow_error *error)
{
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
unsigned int i;
- if (filters_count_for_one_val != spec->count) {
+ if (filters_count_for_one_val != spec_filter->count) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"Number of specifications is incorrect "
@@ -1810,11 +1833,11 @@ sfc_flow_set_outer_vid_flag(struct sfc_flow_spec *spec,
return -rte_errno;
}
- for (i = 0; i < spec->count; i++) {
- spec->filters[i].efs_match_flags |=
+ for (i = 0; i < spec_filter->count; i++) {
+ spec_filter->filters[i].efs_match_flags |=
EFX_FILTER_MATCH_OUTER_VID;
- spec->filters[i].efs_outer_vid = 0;
+ spec_filter->filters[i].efs_outer_vid = 0;
}
return 0;
@@ -1839,12 +1862,13 @@ sfc_flow_set_ifrm_unknown_dst_flags(struct sfc_flow_spec *spec,
struct rte_flow_error *error)
{
unsigned int i;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
static const efx_filter_match_flags_t vals[] = {
EFX_FILTER_MATCH_IFRM_UNKNOWN_UCAST_DST,
EFX_FILTER_MATCH_IFRM_UNKNOWN_MCAST_DST
};
- if (filters_count_for_one_val * RTE_DIM(vals) != spec->count) {
+ if (filters_count_for_one_val * RTE_DIM(vals) != spec_filter->count) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"Number of specifications is incorrect while copying "
@@ -1852,9 +1876,9 @@ sfc_flow_set_ifrm_unknown_dst_flags(struct sfc_flow_spec *spec,
return -rte_errno;
}
- for (i = 0; i < spec->count; i++) {
+ for (i = 0; i < spec_filter->count; i++) {
/* The check above ensures that divisor can't be zero here */
- spec->filters[i].efs_match_flags |=
+ spec_filter->filters[i].efs_match_flags |=
vals[i / filters_count_for_one_val];
}
@@ -1998,6 +2022,7 @@ sfc_flow_spec_add_match_flag(struct sfc_flow_spec *spec,
unsigned int new_filters_count;
unsigned int filters_count_for_one_val;
const struct sfc_flow_copy_flag *copy_flag;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
int rc;
copy_flag = sfc_flow_get_copy_flag(flag);
@@ -2008,7 +2033,7 @@ sfc_flow_spec_add_match_flag(struct sfc_flow_spec *spec,
return -rte_errno;
}
- new_filters_count = spec->count * copy_flag->vals_count;
+ new_filters_count = spec_filter->count * copy_flag->vals_count;
if (new_filters_count > SF_FLOW_SPEC_NB_FILTERS_MAX) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -2017,11 +2042,13 @@ sfc_flow_spec_add_match_flag(struct sfc_flow_spec *spec,
}
/* Copy filters specifications */
- for (i = spec->count; i < new_filters_count; i++)
- spec->filters[i] = spec->filters[i - spec->count];
+ for (i = spec_filter->count; i < new_filters_count; i++) {
+ spec_filter->filters[i] =
+ spec_filter->filters[i - spec_filter->count];
+ }
- filters_count_for_one_val = spec->count;
- spec->count = new_filters_count;
+ filters_count_for_one_val = spec_filter->count;
+ spec_filter->count = new_filters_count;
rc = copy_flag->set_vals(spec, filters_count_for_one_val, error);
if (rc != 0)
@@ -2096,6 +2123,7 @@ sfc_flow_spec_filters_complete(struct sfc_adapter *sa,
struct sfc_flow_spec *spec,
struct rte_flow_error *error)
{
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
struct sfc_filter *filter = &sa->filter;
efx_filter_match_flags_t miss_flags;
efx_filter_match_flags_t min_miss_flags = 0;
@@ -2105,12 +2133,12 @@ sfc_flow_spec_filters_complete(struct sfc_adapter *sa,
unsigned int i;
int rc;
- match = spec->template.efs_match_flags;
+ match = spec_filter->template.efs_match_flags;
for (i = 0; i < filter->supported_match_num; i++) {
if ((match & filter->supported_match[i]) == match) {
miss_flags = filter->supported_match[i] & (~match);
multiplier = sfc_flow_check_missing_flags(miss_flags,
- &spec->template, filter);
+ &spec_filter->template, filter);
if (multiplier > 0) {
if (multiplier <= min_multiplier) {
min_multiplier = multiplier;
@@ -2184,16 +2212,17 @@ sfc_flow_is_match_flags_exception(struct sfc_filter *filter,
uint16_t ether_type;
uint8_t ip_proto;
efx_filter_match_flags_t match_flags;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
- for (i = 0; i < spec->count; i++) {
- match_flags = spec->filters[i].efs_match_flags;
+ for (i = 0; i < spec_filter->count; i++) {
+ match_flags = spec_filter->filters[i].efs_match_flags;
if (sfc_flow_is_match_with_vids(match_flags,
EFX_FILTER_MATCH_ETHER_TYPE) ||
sfc_flow_is_match_with_vids(match_flags,
EFX_FILTER_MATCH_ETHER_TYPE |
EFX_FILTER_MATCH_LOC_MAC)) {
- ether_type = spec->filters[i].efs_ether_type;
+ ether_type = spec_filter->filters[i].efs_ether_type;
if (filter->supports_ip_proto_or_addr_filter &&
(ether_type == EFX_ETHER_TYPE_IPV4 ||
ether_type == EFX_ETHER_TYPE_IPV6))
@@ -2205,7 +2234,7 @@ sfc_flow_is_match_flags_exception(struct sfc_filter *filter,
EFX_FILTER_MATCH_ETHER_TYPE |
EFX_FILTER_MATCH_IP_PROTO |
EFX_FILTER_MATCH_LOC_MAC)) {
- ip_proto = spec->filters[i].efs_ip_proto;
+ ip_proto = spec_filter->filters[i].efs_ip_proto;
if (filter->supports_rem_or_local_port_filter &&
(ip_proto == EFX_IPPROTO_TCP ||
ip_proto == EFX_IPPROTO_UDP))
@@ -2221,13 +2250,15 @@ sfc_flow_validate_match_flags(struct sfc_adapter *sa,
struct rte_flow *flow,
struct rte_flow_error *error)
{
- efx_filter_spec_t *spec_tmpl = &flow->spec.template;
+ struct sfc_flow_spec *spec = &flow->spec;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
+ efx_filter_spec_t *spec_tmpl = &spec_filter->template;
efx_filter_match_flags_t match_flags = spec_tmpl->efs_match_flags;
int rc;
/* Initialize the first filter spec with template */
- flow->spec.filters[0] = *spec_tmpl;
- flow->spec.count = 1;
+ spec_filter->filters[0] = *spec_tmpl;
+ spec_filter->count = 1;
if (!sfc_filter_is_match_supported(sa, match_flags)) {
rc = sfc_flow_spec_filters_complete(sa, &flow->spec, error);
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index 14a6b5d14..70f0cdf12 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -35,8 +35,15 @@ struct sfc_flow_rss {
unsigned int rss_tbl[EFX_RSS_TBL_SIZE];
};
-/* Filter specification storage */
-struct sfc_flow_spec {
+/* Flow engines supported by the implementation */
+enum sfc_flow_spec_type {
+ SFC_FLOW_SPEC_FILTER = 0,
+
+ SFC_FLOW_SPEC_NTYPES
+};
+
+/* VNIC-specific flow specification */
+struct sfc_flow_spec_filter {
/* partial specification from flow rule */
efx_filter_spec_t template;
/* fully elaborated hardware filters specifications */
@@ -49,9 +56,21 @@ struct sfc_flow_spec {
struct sfc_flow_rss rss_conf;
};
+/* Flow specification */
+struct sfc_flow_spec {
+ /* Flow specification type (engine-based) */
+ enum sfc_flow_spec_type type;
+
+ RTE_STD_C11
+ union {
+ /* Filter-based (VNIC level flows) specification */
+ struct sfc_flow_spec_filter filter;
+ };
+};
+
/* PMD-specific definition of the opaque type from rte_flow.h */
struct rte_flow {
- struct sfc_flow_spec spec; /* flow spec for hardware filter(s) */
+ struct sfc_flow_spec spec; /* flow specification */
TAILQ_ENTRY(rte_flow) entries; /* flow list entries */
};
--
2.17.1
* [dpdk-dev] [PATCH 4/7] net/sfc: introduce flow allocation and free path
From: Andrew Rybchenko @ 2020-03-05 10:47 UTC (permalink / raw)
To: dev; +Cc: Ivan Malov
From: Ivan Malov <ivan.malov@oktetlabs.ru>
Riverhead boards support MAE, a low-level Match-Action Engine.
Upcoming patches will bring MAE support to the RTE flow implementation.
A follow-up patch will introduce an MAE-specific specification cleanup
method. In order to prepare for that patch, introduce a common flow
allocation and free path.
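In essence, the resulting call pattern is as follows (a sketch distilled
from the hunk below):

    flow = sfc_flow_zmalloc(error);  /* rte_zmalloc() + rte_flow_error_set(ENOMEM) */
    if (flow == NULL)
            return -rte_errno;

    rc = sfc_flow_parse(dev, attr, pattern, actions, flow, error);

    /*
     * Takes the adapter so that an engine-specific cleanup hook can be
     * attached later; for now it simply calls rte_free().
     */
    sfc_flow_free(sa, flow);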
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
drivers/net/sfc/sfc_flow.c | 49 ++++++++++++++++++++++++++++----------
1 file changed, 37 insertions(+), 12 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 9be1f9ac8..2ddde6168 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -2309,6 +2309,27 @@ sfc_flow_parse(struct rte_eth_dev *dev,
return rc;
}
+static struct rte_flow *
+sfc_flow_zmalloc(struct rte_flow_error *error)
+{
+ struct rte_flow *flow;
+
+ flow = rte_zmalloc("sfc_rte_flow", sizeof(*flow), 0);
+ if (flow == NULL) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to allocate memory");
+ }
+
+ return flow;
+}
+
+static void
+sfc_flow_free(__rte_unused struct sfc_adapter *sa, struct rte_flow *flow)
+{
+ rte_free(flow);
+}
+
static int
sfc_flow_validate(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -2316,11 +2337,19 @@ sfc_flow_validate(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
- struct rte_flow flow;
+ struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+ struct rte_flow *flow;
+ int rc;
- memset(&flow, 0, sizeof(flow));
+ flow = sfc_flow_zmalloc(error);
+ if (flow == NULL)
+ return -rte_errno;
- return sfc_flow_parse(dev, attr, pattern, actions, &flow, error);
+ rc = sfc_flow_parse(dev, attr, pattern, actions, flow, error);
+
+ sfc_flow_free(sa, flow);
+
+ return rc;
}
static struct rte_flow *
@@ -2334,13 +2363,9 @@ sfc_flow_create(struct rte_eth_dev *dev,
struct rte_flow *flow = NULL;
int rc;
- flow = rte_zmalloc("sfc_rte_flow", sizeof(*flow), 0);
- if (flow == NULL) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Failed to allocate memory");
+ flow = sfc_flow_zmalloc(error);
+ if (flow == NULL)
goto fail_no_mem;
- }
rc = sfc_flow_parse(dev, attr, pattern, actions, flow, error);
if (rc != 0)
@@ -2368,7 +2393,7 @@ sfc_flow_create(struct rte_eth_dev *dev,
TAILQ_REMOVE(&sa->flow_list, flow, entries);
fail_bad_value:
- rte_free(flow);
+ sfc_flow_free(sa, flow);
sfc_adapter_unlock(sa);
fail_no_mem:
@@ -2393,7 +2418,7 @@ sfc_flow_remove(struct sfc_adapter *sa,
}
TAILQ_REMOVE(&sa->flow_list, flow, entries);
- rte_free(flow);
+ sfc_flow_free(sa, flow);
return rc;
}
@@ -2497,7 +2522,7 @@ sfc_flow_fini(struct sfc_adapter *sa)
while ((flow = TAILQ_FIRST(&sa->flow_list)) != NULL) {
TAILQ_REMOVE(&sa->flow_list, flow, entries);
- rte_free(flow);
+ sfc_flow_free(sa, flow);
}
}
--
2.17.1
* [dpdk-dev] [PATCH 5/7] net/sfc: generalise flow parsing
From: Andrew Rybchenko @ 2020-03-05 10:47 UTC (permalink / raw)
To: dev; +Cc: Ivan Malov
From: Ivan Malov <ivan.malov@oktetlabs.ru>
Generalise the flow attribute parsing function with regard to the transfer
attribute. Add a method table and factor out the VNIC-specific parsing code
as a callback.
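In outline, the dispatch introduced below looks like this (a sketch, not a
verbatim copy of the patch):

    struct sfc_flow_ops_by_spec {
            sfc_flow_parse_cb_t *parse;
    };

    static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
            .parse = sfc_flow_parse_rte_to_filter,
    };

    /*
     * sfc_flow_parse() now only parses the attributes (which select the
     * specification type) and then dispatches by that type:
     */
    rc = sfc_flow_parse_attr(attr, flow, error);
    if (rc != 0)
            return rc;

    ops = sfc_flow_get_ops_by_spec(flow);
    return ops->parse(dev, pattern, actions, flow, error);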
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
drivers/net/sfc/sfc_flow.c | 101 +++++++++++++++++++++++++++----------
drivers/net/sfc/sfc_flow.h | 6 +++
2 files changed, 81 insertions(+), 26 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 2ddde6168..ed91c3cef 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -24,6 +24,34 @@
#include "sfc_log.h"
#include "sfc_dp_rx.h"
+struct sfc_flow_ops_by_spec {
+ sfc_flow_parse_cb_t *parse;
+};
+
+static sfc_flow_parse_cb_t sfc_flow_parse_rte_to_filter;
+
+static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
+ .parse = sfc_flow_parse_rte_to_filter,
+};
+
+static const struct sfc_flow_ops_by_spec *
+sfc_flow_get_ops_by_spec(struct rte_flow *flow)
+{
+ struct sfc_flow_spec *spec = &flow->spec;
+ const struct sfc_flow_ops_by_spec *ops = NULL;
+
+ switch (spec->type) {
+ case SFC_FLOW_SPEC_FILTER:
+ ops = &sfc_flow_ops_filter;
+ break;
+ default:
+ SFC_ASSERT(false);
+ break;
+ }
+
+ return ops;
+}
+
/*
* Currently, filter-based (VNIC) flow API is implemented in such a manner
* that each flow rule is converted to one or more hardware filters.
@@ -1108,35 +1136,35 @@ sfc_flow_parse_attr(const struct rte_flow_attr *attr,
"Groups are not supported");
return -rte_errno;
}
- if (attr->priority != 0) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, attr,
- "Priorities are not supported");
- return -rte_errno;
- }
if (attr->egress != 0) {
rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, attr,
"Egress is not supported");
return -rte_errno;
}
- if (attr->transfer != 0) {
+ if (attr->ingress == 0) {
rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, attr,
- "Transfer is not supported");
+ RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, attr,
+ "Ingress is compulsory");
return -rte_errno;
}
- if (attr->ingress == 0) {
+ if (attr->transfer == 0) {
+ if (attr->priority != 0) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+ attr, "Priorities are unsupported");
+ return -rte_errno;
+ }
+ spec->type = SFC_FLOW_SPEC_FILTER;
+ spec_filter->template.efs_flags |= EFX_FILTER_FLAG_RX;
+ spec_filter->template.efs_rss_context = EFX_RSS_CONTEXT_DEFAULT;
+ } else {
rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, attr,
- "Only ingress is supported");
+ RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER, attr,
+ "Transfer is not supported");
return -rte_errno;
}
- spec->type = SFC_FLOW_SPEC_FILTER;
- spec_filter->template.efs_flags |= EFX_FILTER_FLAG_RX;
- spec_filter->template.efs_rss_context = EFX_RSS_CONTEXT_DEFAULT;
-
return 0;
}
@@ -2277,20 +2305,15 @@ sfc_flow_validate_match_flags(struct sfc_adapter *sa,
}
static int
-sfc_flow_parse(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow *flow,
- struct rte_flow_error *error)
+sfc_flow_parse_rte_to_filter(struct rte_eth_dev *dev,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
{
struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
int rc;
- rc = sfc_flow_parse_attr(attr, flow, error);
- if (rc != 0)
- goto fail_bad_value;
-
rc = sfc_flow_parse_pattern(pattern, flow, error);
if (rc != 0)
goto fail_bad_value;
@@ -2309,6 +2332,32 @@ sfc_flow_parse(struct rte_eth_dev *dev,
return rc;
}
+static int
+sfc_flow_parse(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct sfc_flow_ops_by_spec *ops;
+ int rc;
+
+ rc = sfc_flow_parse_attr(attr, flow, error);
+ if (rc != 0)
+ return rc;
+
+ ops = sfc_flow_get_ops_by_spec(flow);
+ if (ops == NULL || ops->parse == NULL) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "No backend to handle this flow");
+ return -rte_errno;
+ }
+
+ return ops->parse(dev, pattern, actions, flow, error);
+}
+
static struct rte_flow *
sfc_flow_zmalloc(struct rte_flow_error *error)
{
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index 70f0cdf12..19db8fce5 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -85,6 +85,12 @@ void sfc_flow_fini(struct sfc_adapter *sa);
int sfc_flow_start(struct sfc_adapter *sa);
void sfc_flow_stop(struct sfc_adapter *sa);
+typedef int (sfc_flow_parse_cb_t)(struct rte_eth_dev *dev,
+ const struct rte_flow_item items[],
+ const struct rte_flow_action actions[],
+ struct rte_flow *flow,
+ struct rte_flow_error *error);
+
#ifdef __cplusplus
}
#endif
--
2.17.1
* [dpdk-dev] [PATCH 6/7] net/sfc: generalise flow start and stop path
From: Andrew Rybchenko @ 2020-03-05 10:47 UTC (permalink / raw)
To: dev; +Cc: Ivan Malov
From: Ivan Malov <ivan.malov@oktetlabs.ru>
As a preparation step, generalise the flow start and stop path using callbacks.
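The method table from the previous patch grows insert and remove callbacks
(a sketch distilled from the hunks below):

    struct sfc_flow_ops_by_spec {
            sfc_flow_parse_cb_t *parse;
            sfc_flow_insert_cb_t *insert;
            sfc_flow_remove_cb_t *remove;
    };

    static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
            .parse = sfc_flow_parse_rte_to_filter,
            .insert = sfc_flow_filter_insert,
            .remove = sfc_flow_filter_remove,
    };

Create, destroy, flush, start and stop now go through sfc_flow_insert() and
sfc_flow_remove(), which dispatch by specification type and set the flow
error on failure.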
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
drivers/net/sfc/sfc_flow.c | 113 ++++++++++++++++++++++++-------------
drivers/net/sfc/sfc_flow.h | 6 ++
2 files changed, 81 insertions(+), 38 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index ed91c3cef..b95acff31 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -26,12 +26,18 @@
struct sfc_flow_ops_by_spec {
sfc_flow_parse_cb_t *parse;
+ sfc_flow_insert_cb_t *insert;
+ sfc_flow_remove_cb_t *remove;
};
static sfc_flow_parse_cb_t sfc_flow_parse_rte_to_filter;
+static sfc_flow_insert_cb_t sfc_flow_filter_insert;
+static sfc_flow_remove_cb_t sfc_flow_filter_remove;
static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
.parse = sfc_flow_parse_rte_to_filter,
+ .insert = sfc_flow_filter_insert,
+ .remove = sfc_flow_filter_remove,
};
static const struct sfc_flow_ops_by_spec *
@@ -2379,6 +2385,54 @@ sfc_flow_free(__rte_unused struct sfc_adapter *sa, struct rte_flow *flow)
rte_free(flow);
}
+static int
+sfc_flow_insert(struct sfc_adapter *sa, struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct sfc_flow_ops_by_spec *ops;
+ int rc;
+
+ ops = sfc_flow_get_ops_by_spec(flow);
+ if (ops == NULL || ops->insert == NULL) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "No backend to handle this flow");
+ return rte_errno;
+ }
+
+ rc = ops->insert(sa, flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to insert the flow rule");
+ }
+
+ return rc;
+}
+
+static int
+sfc_flow_remove(struct sfc_adapter *sa, struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct sfc_flow_ops_by_spec *ops;
+ int rc;
+
+ ops = sfc_flow_get_ops_by_spec(flow);
+ if (ops == NULL || ops->remove == NULL) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "No backend to handle this flow");
+ return rte_errno;
+ }
+
+ rc = ops->remove(sa, flow);
+ if (rc != 0) {
+ rte_flow_error_set(error, rc, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "Failed to remove the flow rule");
+ }
+
+ return rc;
+}
+
static int
sfc_flow_validate(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
@@ -2425,20 +2479,16 @@ sfc_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&sa->flow_list, flow, entries);
if (sa->state == SFC_ADAPTER_STARTED) {
- rc = sfc_flow_filter_insert(sa, flow);
- if (rc != 0) {
- rte_flow_error_set(error, rc,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Failed to insert filter");
- goto fail_filter_insert;
- }
+ rc = sfc_flow_insert(sa, flow, error);
+ if (rc != 0)
+ goto fail_flow_insert;
}
sfc_adapter_unlock(sa);
return flow;
-fail_filter_insert:
+fail_flow_insert:
TAILQ_REMOVE(&sa->flow_list, flow, entries);
fail_bad_value:
@@ -2449,29 +2499,6 @@ sfc_flow_create(struct rte_eth_dev *dev,
return NULL;
}
-static int
-sfc_flow_remove(struct sfc_adapter *sa,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- int rc = 0;
-
- SFC_ASSERT(sfc_adapter_is_locked(sa));
-
- if (sa->state == SFC_ADAPTER_STARTED) {
- rc = sfc_flow_filter_remove(sa, flow);
- if (rc != 0)
- rte_flow_error_set(error, rc,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Failed to destroy flow rule");
- }
-
- TAILQ_REMOVE(&sa->flow_list, flow, entries);
- sfc_flow_free(sa, flow);
-
- return rc;
-}
-
static int
sfc_flow_destroy(struct rte_eth_dev *dev,
struct rte_flow *flow,
@@ -2494,7 +2521,11 @@ sfc_flow_destroy(struct rte_eth_dev *dev,
goto fail_bad_value;
}
- rc = sfc_flow_remove(sa, flow, error);
+ if (sa->state == SFC_ADAPTER_STARTED)
+ rc = sfc_flow_remove(sa, flow, error);
+
+ TAILQ_REMOVE(&sa->flow_list, flow, entries);
+ sfc_flow_free(sa, flow);
fail_bad_value:
sfc_adapter_unlock(sa);
@@ -2508,15 +2539,21 @@ sfc_flow_flush(struct rte_eth_dev *dev,
{
struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
struct rte_flow *flow;
- int rc = 0;
int ret = 0;
sfc_adapter_lock(sa);
while ((flow = TAILQ_FIRST(&sa->flow_list)) != NULL) {
- rc = sfc_flow_remove(sa, flow, error);
- if (rc != 0)
- ret = rc;
+ if (sa->state == SFC_ADAPTER_STARTED) {
+ int rc;
+
+ rc = sfc_flow_remove(sa, flow, error);
+ if (rc != 0)
+ ret = rc;
+ }
+
+ TAILQ_REMOVE(&sa->flow_list, flow, entries);
+ sfc_flow_free(sa, flow);
}
sfc_adapter_unlock(sa);
@@ -2583,7 +2620,7 @@ sfc_flow_stop(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
TAILQ_FOREACH(flow, &sa->flow_list, entries)
- sfc_flow_filter_remove(sa, flow);
+ sfc_flow_remove(sa, flow, NULL);
}
int
@@ -2597,7 +2634,7 @@ sfc_flow_start(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
TAILQ_FOREACH(flow, &sa->flow_list, entries) {
- rc = sfc_flow_filter_insert(sa, flow);
+ rc = sfc_flow_insert(sa, flow, NULL);
if (rc != 0)
goto fail_bad_flow;
}
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index 19db8fce5..5d87212c1 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -91,6 +91,12 @@ typedef int (sfc_flow_parse_cb_t)(struct rte_eth_dev *dev,
struct rte_flow *flow,
struct rte_flow_error *error);
+typedef int (sfc_flow_insert_cb_t)(struct sfc_adapter *sa,
+ struct rte_flow *flow);
+
+typedef int (sfc_flow_remove_cb_t)(struct sfc_adapter *sa,
+ struct rte_flow *flow);
+
#ifdef __cplusplus
}
#endif
--
2.17.1
* [dpdk-dev] [PATCH 7/7] net/sfc: generalise flow pattern item processing
From: Andrew Rybchenko @ 2020-03-05 10:47 UTC (permalink / raw)
To: dev; +Cc: Ivan Malov
From: Ivan Malov <ivan.malov@oktetlabs.ru>
This is needed to reuse the pattern processing engine for MAE.
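The key piece is a typed parse context, so that pattern item parsers no
longer assume an efx_filter_spec_t (a sketch distilled from the sfc_flow.h
hunks below):

    /* Flow parse context */
    struct sfc_flow_parse_ctx {
            enum sfc_flow_parse_ctx_type type;

            union {
                    /* Context pointer valid for filter-based (VNIC) flows */
                    efx_filter_spec_t *filter;
            };
    };

    typedef int (sfc_flow_item_parse)(const struct rte_flow_item *item,
                                      struct sfc_flow_parse_ctx *parse_ctx,
                                      struct rte_flow_error *error);

sfc_flow_parse_pattern() takes the item table and the context, and rejects
a pattern item whose ctx_type does not match the context type before
invoking its parse callback.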
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
drivers/net/sfc/sfc_flow.c | 101 ++++++++++++++++++++++---------------
drivers/net/sfc/sfc_flow.h | 52 +++++++++++++++++++
2 files changed, 111 insertions(+), 42 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index b95acff31..ec0ca3cd6 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -69,25 +69,6 @@ sfc_flow_get_ops_by_spec(struct rte_flow *flow)
* of such a field.
*/
-enum sfc_flow_item_layers {
- SFC_FLOW_ITEM_ANY_LAYER,
- SFC_FLOW_ITEM_START_LAYER,
- SFC_FLOW_ITEM_L2,
- SFC_FLOW_ITEM_L3,
- SFC_FLOW_ITEM_L4,
-};
-
-typedef int (sfc_flow_item_parse)(const struct rte_flow_item *item,
- efx_filter_spec_t *spec,
- struct rte_flow_error *error);
-
-struct sfc_flow_item {
- enum rte_flow_item_type type; /* Type of item */
- enum sfc_flow_item_layers layer; /* Layer of item */
- enum sfc_flow_item_layers prev_layer; /* Previous layer of item */
- sfc_flow_item_parse *parse; /* Parsing function */
-};
-
static sfc_flow_item_parse sfc_flow_parse_void;
static sfc_flow_item_parse sfc_flow_parse_eth;
static sfc_flow_item_parse sfc_flow_parse_vlan;
@@ -144,7 +125,7 @@ sfc_flow_is_zero(const uint8_t *buf, unsigned int size)
/*
* Validate item and prepare structures spec and mask for parsing
*/
-static int
+int
sfc_flow_parse_init(const struct rte_flow_item *item,
const void **spec_ptr,
const void **mask_ptr,
@@ -243,7 +224,7 @@ sfc_flow_parse_init(const struct rte_flow_item *item,
static int
sfc_flow_parse_void(__rte_unused const struct rte_flow_item *item,
- __rte_unused efx_filter_spec_t *efx_spec,
+ __rte_unused struct sfc_flow_parse_ctx *parse_ctx,
__rte_unused struct rte_flow_error *error)
{
return 0;
@@ -265,10 +246,11 @@ sfc_flow_parse_void(__rte_unused const struct rte_flow_item *item,
*/
static int
sfc_flow_parse_eth(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_eth *spec = NULL;
const struct rte_flow_item_eth *mask = NULL;
const struct rte_flow_item_eth supp_mask = {
@@ -377,11 +359,12 @@ sfc_flow_parse_eth(const struct rte_flow_item *item,
*/
static int
sfc_flow_parse_vlan(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
uint16_t vid;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_vlan *spec = NULL;
const struct rte_flow_item_vlan *mask = NULL;
const struct rte_flow_item_vlan supp_mask = {
@@ -463,10 +446,11 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
*/
static int
sfc_flow_parse_ipv4(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_ipv4 *spec = NULL;
const struct rte_flow_item_ipv4 *mask = NULL;
const uint16_t ether_type_ipv4 = rte_cpu_to_le_16(EFX_ETHER_TYPE_IPV4);
@@ -553,10 +537,11 @@ sfc_flow_parse_ipv4(const struct rte_flow_item *item,
*/
static int
sfc_flow_parse_ipv6(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_ipv6 *spec = NULL;
const struct rte_flow_item_ipv6 *mask = NULL;
const uint16_t ether_type_ipv6 = rte_cpu_to_le_16(EFX_ETHER_TYPE_IPV6);
@@ -661,10 +646,11 @@ sfc_flow_parse_ipv6(const struct rte_flow_item *item,
*/
static int
sfc_flow_parse_tcp(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_tcp *spec = NULL;
const struct rte_flow_item_tcp *mask = NULL;
const struct rte_flow_item_tcp supp_mask = {
@@ -742,10 +728,11 @@ sfc_flow_parse_tcp(const struct rte_flow_item *item,
*/
static int
sfc_flow_parse_udp(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_udp *spec = NULL;
const struct rte_flow_item_udp *mask = NULL;
const struct rte_flow_item_udp supp_mask = {
@@ -900,10 +887,11 @@ sfc_flow_set_efx_spec_vni_or_vsid(efx_filter_spec_t *efx_spec,
*/
static int
sfc_flow_parse_vxlan(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_vxlan *spec = NULL;
const struct rte_flow_item_vxlan *mask = NULL;
const struct rte_flow_item_vxlan supp_mask = {
@@ -952,10 +940,11 @@ sfc_flow_parse_vxlan(const struct rte_flow_item *item,
*/
static int
sfc_flow_parse_geneve(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_geneve *spec = NULL;
const struct rte_flow_item_geneve *mask = NULL;
const struct rte_flow_item_geneve supp_mask = {
@@ -1019,10 +1008,11 @@ sfc_flow_parse_geneve(const struct rte_flow_item *item,
*/
static int
sfc_flow_parse_nvgre(const struct rte_flow_item *item,
- efx_filter_spec_t *efx_spec,
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
+ efx_filter_spec_t *efx_spec = parse_ctx->filter;
const struct rte_flow_item_nvgre *spec = NULL;
const struct rte_flow_item_nvgre *mask = NULL;
const struct rte_flow_item_nvgre supp_mask = {
@@ -1061,60 +1051,70 @@ static const struct sfc_flow_item sfc_flow_items[] = {
.type = RTE_FLOW_ITEM_TYPE_VOID,
.prev_layer = SFC_FLOW_ITEM_ANY_LAYER,
.layer = SFC_FLOW_ITEM_ANY_LAYER,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_void,
},
{
.type = RTE_FLOW_ITEM_TYPE_ETH,
.prev_layer = SFC_FLOW_ITEM_START_LAYER,
.layer = SFC_FLOW_ITEM_L2,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_eth,
},
{
.type = RTE_FLOW_ITEM_TYPE_VLAN,
.prev_layer = SFC_FLOW_ITEM_L2,
.layer = SFC_FLOW_ITEM_L2,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_vlan,
},
{
.type = RTE_FLOW_ITEM_TYPE_IPV4,
.prev_layer = SFC_FLOW_ITEM_L2,
.layer = SFC_FLOW_ITEM_L3,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_ipv4,
},
{
.type = RTE_FLOW_ITEM_TYPE_IPV6,
.prev_layer = SFC_FLOW_ITEM_L2,
.layer = SFC_FLOW_ITEM_L3,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_ipv6,
},
{
.type = RTE_FLOW_ITEM_TYPE_TCP,
.prev_layer = SFC_FLOW_ITEM_L3,
.layer = SFC_FLOW_ITEM_L4,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_tcp,
},
{
.type = RTE_FLOW_ITEM_TYPE_UDP,
.prev_layer = SFC_FLOW_ITEM_L3,
.layer = SFC_FLOW_ITEM_L4,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_udp,
},
{
.type = RTE_FLOW_ITEM_TYPE_VXLAN,
.prev_layer = SFC_FLOW_ITEM_L4,
.layer = SFC_FLOW_ITEM_START_LAYER,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_vxlan,
},
{
.type = RTE_FLOW_ITEM_TYPE_GENEVE,
.prev_layer = SFC_FLOW_ITEM_L4,
.layer = SFC_FLOW_ITEM_START_LAYER,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_geneve,
},
{
.type = RTE_FLOW_ITEM_TYPE_NVGRE,
.prev_layer = SFC_FLOW_ITEM_L3,
.layer = SFC_FLOW_ITEM_START_LAYER,
+ .ctx_type = SFC_FLOW_PARSE_CTX_FILTER,
.parse = sfc_flow_parse_nvgre,
},
};
@@ -1176,28 +1176,30 @@ sfc_flow_parse_attr(const struct rte_flow_attr *attr,
/* Get item from array sfc_flow_items */
static const struct sfc_flow_item *
-sfc_flow_get_item(enum rte_flow_item_type type)
+sfc_flow_get_item(const struct sfc_flow_item *items,
+ unsigned int nb_items,
+ enum rte_flow_item_type type)
{
unsigned int i;
- for (i = 0; i < RTE_DIM(sfc_flow_items); i++)
- if (sfc_flow_items[i].type == type)
- return &sfc_flow_items[i];
+ for (i = 0; i < nb_items; i++)
+ if (items[i].type == type)
+ return &items[i];
return NULL;
}
-static int
-sfc_flow_parse_pattern(const struct rte_flow_item pattern[],
- struct rte_flow *flow,
+int
+sfc_flow_parse_pattern(const struct sfc_flow_item *flow_items,
+ unsigned int nb_flow_items,
+ const struct rte_flow_item pattern[],
+ struct sfc_flow_parse_ctx *parse_ctx,
struct rte_flow_error *error)
{
int rc;
unsigned int prev_layer = SFC_FLOW_ITEM_ANY_LAYER;
boolean_t is_ifrm = B_FALSE;
const struct sfc_flow_item *item;
- struct sfc_flow_spec *spec = &flow->spec;
- struct sfc_flow_spec_filter *spec_filter = &spec->filter;
if (pattern == NULL) {
rte_flow_error_set(error, EINVAL,
@@ -1207,7 +1209,8 @@ sfc_flow_parse_pattern(const struct rte_flow_item pattern[],
}
for (; pattern->type != RTE_FLOW_ITEM_TYPE_END; pattern++) {
- item = sfc_flow_get_item(pattern->type);
+ item = sfc_flow_get_item(flow_items, nb_flow_items,
+ pattern->type);
if (item == NULL) {
rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM, pattern,
@@ -1262,7 +1265,14 @@ sfc_flow_parse_pattern(const struct rte_flow_item pattern[],
break;
}
- rc = item->parse(pattern, &spec_filter->template, error);
+ if (parse_ctx->type != item->ctx_type) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, pattern,
+ "Parse context type mismatch");
+ return -rte_errno;
+ }
+
+ rc = item->parse(pattern, parse_ctx, error);
if (rc != 0)
return rc;
@@ -2318,9 +2328,16 @@ sfc_flow_parse_rte_to_filter(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+ struct sfc_flow_spec *spec = &flow->spec;
+ struct sfc_flow_spec_filter *spec_filter = &spec->filter;
+ struct sfc_flow_parse_ctx ctx;
int rc;
- rc = sfc_flow_parse_pattern(pattern, flow, error);
+ ctx.type = SFC_FLOW_PARSE_CTX_FILTER;
+ ctx.filter = &spec_filter->template;
+
+ rc = sfc_flow_parse_pattern(sfc_flow_items, RTE_DIM(sfc_flow_items),
+ pattern, &ctx, error);
if (rc != 0)
goto fail_bad_value;
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index 5d87212c1..0a87924b2 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -78,6 +78,58 @@ TAILQ_HEAD(sfc_flow_list, rte_flow);
extern const struct rte_flow_ops sfc_flow_ops;
+enum sfc_flow_item_layers {
+ SFC_FLOW_ITEM_ANY_LAYER,
+ SFC_FLOW_ITEM_START_LAYER,
+ SFC_FLOW_ITEM_L2,
+ SFC_FLOW_ITEM_L3,
+ SFC_FLOW_ITEM_L4,
+};
+
+/* Flow parse context types */
+enum sfc_flow_parse_ctx_type {
+ SFC_FLOW_PARSE_CTX_FILTER = 0,
+
+ SFC_FLOW_PARSE_CTX_NTYPES
+};
+
+/* Flow parse context */
+struct sfc_flow_parse_ctx {
+ enum sfc_flow_parse_ctx_type type;
+
+ RTE_STD_C11
+ union {
+ /* Context pointer valid for filter-based (VNIC) flows */
+ efx_filter_spec_t *filter;
+ };
+};
+
+typedef int (sfc_flow_item_parse)(const struct rte_flow_item *item,
+ struct sfc_flow_parse_ctx *parse_ctx,
+ struct rte_flow_error *error);
+
+struct sfc_flow_item {
+ enum rte_flow_item_type type; /* Type of item */
+ enum sfc_flow_item_layers layer; /* Layer of item */
+ enum sfc_flow_item_layers prev_layer; /* Previous layer of item */
+ enum sfc_flow_parse_ctx_type ctx_type; /* Parse context type */
+ sfc_flow_item_parse *parse; /* Parsing function */
+};
+
+int sfc_flow_parse_pattern(const struct sfc_flow_item *flow_items,
+ unsigned int nb_flow_items,
+ const struct rte_flow_item pattern[],
+ struct sfc_flow_parse_ctx *parse_ctx,
+ struct rte_flow_error *error);
+
+int sfc_flow_parse_init(const struct rte_flow_item *item,
+ const void **spec_ptr,
+ const void **mask_ptr,
+ const void *supp_mask,
+ const void *def_mask,
+ unsigned int size,
+ struct rte_flow_error *error);
+
struct sfc_adapter;
void sfc_flow_init(struct sfc_adapter *sa);
--
2.17.1
* Re: [dpdk-dev] [PATCH 0/7] net/sfc: prepare rte_flow to have one more backend
From: Ferruh Yigit @ 2020-03-06 16:51 UTC (permalink / raw)
To: Andrew Rybchenko, dev
On 3/5/2020 10:47 AM, Andrew Rybchenko wrote:
> Prepare the rte_flow API support code for the addition of one more
> backend for flow rule handling.
>
> Ivan Malov (7):
> net/sfc: make flow RSS details VNIC-specific
> net/sfc: make the flow list engine-agnostic
> net/sfc: generalise the flow specification structure
> net/sfc: introduce flow allocation and free path
> net/sfc: generalise flow parsing
> net/sfc: generalise flow start and stop path
> net/sfc: generalise flow pattern item processing
>
Series applied to dpdk-next-net/master, thanks.