From: Ivan Malov
To: dev@dpdk.org
Cc: Andrew Rybchenko
Date: Wed, 29 Sep 2021 23:57:25 +0300
Message-Id: <20210929205730.775-6-ivan.malov@oktetlabs.ru>
In-Reply-To: <20210929205730.775-1-ivan.malov@oktetlabs.ru>
References: <20210929205730.775-1-ivan.malov@oktetlabs.ru>
Subject: [dpdk-dev] [PATCH 05/10] net/sfc: support GROUP flows in tunnel offload

GROUP is an in-house term for so-called "tunnel_match" flows. On parsing,
such a flow is detected by the presence of the PMD-internal item MARK,
which associates the flow with its tunnel context. The flow is represented
by an MAE action rule that is chained with the outer rule of the
corresponding JUMP rule by matching on its recirculation ID. GROUP flows
match on a narrower set of fields than JUMP flows do and decapsulate
matching packets (full offload).
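For context, a GROUP flow is what an application creates through the generic
rte_flow tunnel offload API once the JUMP flow is in place; the PMD items
returned by rte_flow_tunnel_match() carry the PMD-internal item MARK mentioned
above. Below is a minimal application-side sketch (not part of the patch).
The port and group numbers, the VXLAN VNI, the forwarding action and the
helper name install_tunnel_match_flow() are illustrative assumptions, and
error handling is trimmed.

#include <string.h>

#include <rte_errno.h>
#include <rte_flow.h>

/* Install a "tunnel_match" (GROUP) flow; numbers below are examples only. */
static int
install_tunnel_match_flow(uint16_t port_id, struct rte_flow **flow_out)
{
	struct rte_flow_tunnel tunnel = {
		.type = RTE_FLOW_ITEM_TYPE_VXLAN,
		.tun_id = 42,			/* example VNI */
	};
	struct rte_flow_attr attr = {
		.group = 1,		/* the group the JUMP flow jumps to */
		.transfer = 1,
	};
	struct rte_flow_item app_pattern[] = {
		/* Outer and inner headers the application matches on. */
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN },
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{
			.type = RTE_FLOW_ACTION_TYPE_PORT_ID,
			.conf = &(struct rte_flow_action_port_id){ .id = 1 },
		},
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_item pattern[16];
	struct rte_flow_item *pmd_items;
	uint32_t nb_pmd_items;
	struct rte_flow_error error;
	uint32_t i;
	int rc;

	/* Ask the PMD for the items binding the flow to its tunnel context. */
	rc = rte_flow_tunnel_match(port_id, &tunnel, &pmd_items,
				   &nb_pmd_items, &error);
	if (rc != 0)
		return rc;

	/* PMD items go first, before any network header items. */
	for (i = 0; i < nb_pmd_items && i < 8; i++)
		pattern[i] = pmd_items[i];
	memcpy(&pattern[i], app_pattern, sizeof(app_pattern));

	*flow_out = rte_flow_create(port_id, &attr, pattern, actions, &error);

	/* PMD items may be released once the flow has been created. */
	(void)rte_flow_tunnel_item_release(port_id, pmd_items, nb_pmd_items,
					   &error);

	return (*flow_out != NULL) ? 0 : -rte_errno;
}

Putting the PMD items ahead of the network header items is exactly the
ordering that sfc_mae_rule_preparse_item_mark() relies on in this patch.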
Signed-off-by: Ivan Malov
Reviewed-by: Andrew Rybchenko
---
 drivers/net/sfc/sfc_flow.h        |   2 +
 drivers/net/sfc/sfc_flow_tunnel.h |   6 ++
 drivers/net/sfc/sfc_mae.c         | 151 ++++++++++++++++++++++++++++++
 3 files changed, 159 insertions(+)

diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index ada3d563ad..efdecc97ab 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -69,6 +69,8 @@ enum sfc_flow_tunnel_rule_type {
 	SFC_FT_RULE_NONE = 0,
 	/* The flow represents a JUMP rule */
 	SFC_FT_RULE_JUMP,
+	/* The flow represents a GROUP rule */
+	SFC_FT_RULE_GROUP,
 };
 
 /* MAE-specific flow specification */
diff --git a/drivers/net/sfc/sfc_flow_tunnel.h b/drivers/net/sfc/sfc_flow_tunnel.h
index 6a81b29438..27a8fa5ae7 100644
--- a/drivers/net/sfc/sfc_flow_tunnel.h
+++ b/drivers/net/sfc/sfc_flow_tunnel.h
@@ -39,6 +39,12 @@ typedef uint8_t sfc_ft_id_t;
 #define SFC_FT_ID_TO_TUNNEL_MARK(_id) \
 	((_id) + 1)
 
+#define SFC_FT_ID_TO_MARK(_id) \
+	(SFC_FT_ID_TO_TUNNEL_MARK(_id) << SFC_FT_USER_MARK_BITS)
+
+#define SFC_FT_GET_USER_MARK(_mark) \
+	((_mark) & SFC_FT_USER_MARK_MASK)
+
 #define SFC_FT_MAX_NTUNNELS \
 	(RTE_LEN2MASK(SFC_FT_TUNNEL_MARK_BITS, uint8_t) - 1)
 
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 57a999d895..63ec2b02b3 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -1048,6 +1048,36 @@ sfc_mae_rule_process_pattern_data(struct sfc_mae_parse_ctx *ctx,
 				  "Failed to process pattern data");
 }
 
+static int
+sfc_mae_rule_parse_item_mark(const struct rte_flow_item *item,
+			     struct sfc_flow_parse_ctx *ctx,
+			     struct rte_flow_error *error)
+{
+	const struct rte_flow_item_mark *spec = item->spec;
+	struct sfc_mae_parse_ctx *ctx_mae = ctx->mae;
+
+	if (spec == NULL) {
+		return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, item,
+				"NULL spec in item MARK");
+	}
+
+	/*
+	 * This item is used in tunnel offload support only.
+	 * It must go before any network header items. This
+	 * way, sfc_mae_rule_preparse_item_mark() must have
+	 * already parsed it. Only one item MARK is allowed.
+	 */
+	if (ctx_mae->ft_rule_type != SFC_FT_RULE_GROUP ||
+	    spec->id != (uint32_t)SFC_FT_ID_TO_MARK(ctx_mae->ft->id)) {
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  item, "invalid item MARK");
+	}
+
+	return 0;
+}
+
 static int
 sfc_mae_rule_parse_item_port_id(const struct rte_flow_item *item,
 				struct sfc_flow_parse_ctx *ctx,
@@ -1996,6 +2026,14 @@ sfc_mae_rule_parse_item_tunnel(const struct rte_flow_item *item,
 }
 
 static const struct sfc_flow_item sfc_flow_items[] = {
+	{
+		.type = RTE_FLOW_ITEM_TYPE_MARK,
+		.name = "MARK",
+		.prev_layer = SFC_FLOW_ITEM_ANY_LAYER,
+		.layer = SFC_FLOW_ITEM_ANY_LAYER,
+		.ctx_type = SFC_FLOW_PARSE_CTX_MAE,
+		.parse = sfc_mae_rule_parse_item_mark,
+	},
 	{
 		.type = RTE_FLOW_ITEM_TYPE_PORT_ID,
 		.name = "PORT_ID",
@@ -2164,6 +2202,19 @@ sfc_mae_rule_process_outer(struct sfc_adapter *sa,
 	case SFC_FT_RULE_JUMP:
 		/* No action rule */
 		return 0;
+	case SFC_FT_RULE_GROUP:
+		/*
+		 * Match on recirculation ID rather than
+		 * on the outer rule allocation handle.
+		 */
+		rc = efx_mae_match_spec_recirc_id_set(ctx->match_spec_action,
+					SFC_FT_ID_TO_TUNNEL_MARK(ctx->ft->id));
+		if (rc != 0) {
+			return rte_flow_error_set(error, rc,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				"tunnel offload: GROUP: AR: failed to request match on RECIRC_ID");
+		}
+		return 0;
 	default:
 		SFC_ASSERT(B_FALSE);
 	}
@@ -2198,6 +2249,44 @@ sfc_mae_rule_process_outer(struct sfc_adapter *sa,
 	return 0;
 }
 
+static int
+sfc_mae_rule_preparse_item_mark(const struct rte_flow_item_mark *spec,
+				struct sfc_mae_parse_ctx *ctx)
+{
+	struct sfc_flow_tunnel *ft;
+	uint32_t user_mark;
+
+	if (spec == NULL) {
+		sfc_err(ctx->sa, "tunnel offload: GROUP: NULL spec in item MARK");
+		return EINVAL;
+	}
+
+	ft = sfc_flow_tunnel_pick(ctx->sa, spec->id);
+	if (ft == NULL) {
+		sfc_err(ctx->sa, "tunnel offload: GROUP: invalid tunnel");
+		return EINVAL;
+	}
+
+	if (ft->refcnt == 0) {
+		sfc_err(ctx->sa, "tunnel offload: GROUP: tunnel=%u does not exist",
+			ft->id);
+		return ENOENT;
+	}
+
+	user_mark = SFC_FT_GET_USER_MARK(spec->id);
+	if (user_mark != 0) {
+		sfc_err(ctx->sa, "tunnel offload: GROUP: invalid item MARK");
+		return EINVAL;
+	}
+
+	sfc_dbg(ctx->sa, "tunnel offload: GROUP: detected");
+
+	ctx->ft_rule_type = SFC_FT_RULE_GROUP;
+	ctx->ft = ft;
+
+	return 0;
+}
+
 static int
 sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa,
 			      const struct rte_flow_item pattern[],
@@ -2217,6 +2306,16 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa,
 
 	for (;;) {
 		switch (pattern->type) {
+		case RTE_FLOW_ITEM_TYPE_MARK:
+			rc = sfc_mae_rule_preparse_item_mark(pattern->spec,
+							     ctx);
+			if (rc != 0) {
+				return rte_flow_error_set(error, rc,
+						RTE_FLOW_ERROR_TYPE_ITEM,
+						pattern, "tunnel offload: GROUP: invalid item MARK");
+			}
+			++pattern;
+			continue;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
 			ctx->encap_type = EFX_TUNNEL_PROTOCOL_VXLAN;
 			ctx->tunnel_def_mask = &rte_flow_item_vxlan_mask;
@@ -2258,6 +2357,17 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa,
 		}
 		ctx->encap_type = ctx->ft->encap_type;
 		break;
+	case SFC_FT_RULE_GROUP:
+		if (pattern->type == RTE_FLOW_ITEM_TYPE_END) {
+			return rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					NULL, "tunnel offload: GROUP: missing tunnel item");
+		} else if (ctx->encap_type != ctx->ft->encap_type) {
+			return rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM,
+					pattern, "tunnel offload: GROUP: tunnel type mismatch");
+		}
+		break;
 	default:
 		SFC_ASSERT(B_FALSE);
 		break;
@@ -2306,6 +2416,14 @@ sfc_mae_rule_encap_parse_init(struct sfc_adapter *sa,
 					"OR: failed to initialise RECIRC_ID");
 		}
 		break;
+	case SFC_FT_RULE_GROUP:
+		/* Outermost items -> "ENC" match fields in the action rule. */
+		ctx->field_ids_remap = field_ids_remap_to_encap;
+		ctx->match_spec = ctx->match_spec_action;
+
+		/* No own outer rule; match on JUMP OR's RECIRC_ID is used. */
+		ctx->encap_type = EFX_TUNNEL_PROTOCOL_NONE;
+		break;
 	default:
 		SFC_ASSERT(B_FALSE);
 		break;
@@ -2345,6 +2463,8 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa,
 	case SFC_FT_RULE_JUMP:
 		/* No action rule */
 		break;
+	case SFC_FT_RULE_GROUP:
+		/* FALLTHROUGH */
 	case SFC_FT_RULE_NONE:
 		rc = efx_mae_match_spec_init(sa->nic, EFX_MAE_RULE_ACTION,
 					     spec->priority,
@@ -2379,6 +2499,13 @@ sfc_mae_rule_parse_pattern(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_encap_parse_init;
 
+	/*
+	 * sfc_mae_rule_encap_parse_init() may have detected tunnel offload
+	 * GROUP rule. Remember its properties for later use.
+	 */
+	spec->ft_rule_type = ctx_mae.ft_rule_type;
+	spec->ft = ctx_mae.ft;
+
 	rc = sfc_flow_parse_pattern(sa, sfc_flow_items, RTE_DIM(sfc_flow_items),
 				    pattern, &ctx, error);
 	if (rc != 0)
@@ -3215,6 +3342,13 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_action_set_spec_init;
 
+	if (spec_mae->ft_rule_type == SFC_FT_RULE_GROUP) {
+		/* JUMP rules don't decapsulate packets. GROUP rules do. */
+		rc = efx_mae_action_set_populate_decap(spec);
+		if (rc != 0)
+			goto fail_enforce_ft_decap;
+	}
+
 	/* Cleanup after previous encap. header bounce buffer usage. */
 	sfc_mae_bounce_eh_invalidate(&mae->bounce_eh);
 
@@ -3245,6 +3379,22 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 		goto fail_nb_count;
 	}
 
+	switch (spec_mae->ft_rule_type) {
+	case SFC_FT_RULE_NONE:
+		break;
+	case SFC_FT_RULE_GROUP:
+		/*
+		 * Packets that go to the rule's AR have FT mark set (from the
+		 * JUMP rule OR's RECIRC_ID). Remove this mark in matching
+		 * packets. The user may have provided their own action
+		 * MARK above, so don't check the return value here.
+		 */
+		(void)efx_mae_action_set_populate_mark(spec, 0);
+		break;
+	default:
+		SFC_ASSERT(B_FALSE);
+	}
+
 	spec_mae->action_set = sfc_mae_action_set_attach(sa, encap_header,
 							 n_count, spec);
 	if (spec_mae->action_set != NULL) {
@@ -3268,6 +3418,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 fail_rule_parse_action:
 	efx_mae_action_set_spec_fini(sa->nic, spec);
 
+fail_enforce_ft_decap:
 fail_action_set_spec_init:
 	if (rc > 0 && rte_errno == 0) {
 		rc = rte_flow_error_set(error, rc,
-- 
2.20.1