From: Adrien Mazarguil
To: Gaetan Rivet, Thomas Monjalon
Cc: dev@dpdk.org
Date: Thu, 6 Jul 2017 18:56:54 +0200
Message-Id: <4695f0b4603308f6e27a0f2518945172989165fd.1499360068.git.adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH v3] ethdev: add flow API rule copy function

From: Gaetan Rivet

This allows PMDs and applications to save flow rules in their generic
format for later processing. This is useful when rules cannot be applied
immediately, such as when the device is not properly initialized.

Signed-off-by: Gaetan Rivet
Signed-off-by: Adrien Mazarguil
---
Gaetan, sorry for taking over and modifying your patch; it took me a while
to review it, and given there is not much time left before RC1, here is my
suggestion in the form of an updated patch, for the reasons described
below.

I'm convinced exposing rte_flow_copy() is useful and necessary, not only
for the fail-safe PMD but also to avoid code duplication later. However,
since you took this code from testpmd, you have inherited its main
drawback: rte_flow_desc_item[] and rte_flow_desc_action[] are a pain to
maintain. Every time a new flow item or action is added, these arrays must
be updated as well. Moreover, this approach forces you to expose and
version a bunch of additional symbols so far only useful to testpmd.

I'm therefore thinking about generating these arrays automatically at
compilation time from the enum definitions, without exposing extra
symbols. I'll try to submit these changes before the next RC.
So in the meantime, here's a v3 in order not to break existing series that
depend on this patch and not to introduce unnecessary symbols.

v2 -> v3:

* Revert testpmd changes.
* Do not expose extra symbols (rte_flow_desc_action, rte_flow_desc_item,
  rte_flow_nb_action and rte_flow_nb_item).
* Do not expose struct rte_flow_desc_data.
* Add missing #include directives to rte_flow.c.

v1 -> v2:

* Fix checkpatch warnings.
---
 lib/librte_ether/rte_ether_version.map |   1 +
 lib/librte_ether/rte_flow.c            | 225 ++++++++++++++++++++++++++++
 lib/librte_ether/rte_flow.h            |  40 +++++
 3 files changed, 266 insertions(+)

diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 019a93d..6f65f83 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -152,6 +152,7 @@ DPDK_17.08 {
 	global:
 
 	_rte_eth_dev_callback_process;
+	rte_flow_copy;
 	rte_flow_isolate;
 
 } DPDK_17.05;
diff --git a/lib/librte_ether/rte_flow.c b/lib/librte_ether/rte_flow.c
index c1de31b..59ca85a 100644
--- a/lib/librte_ether/rte_flow.c
+++ b/lib/librte_ether/rte_flow.c
@@ -31,14 +31,79 @@
  * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#include <errno.h>
+#include <stddef.h>
 #include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
 
 #include <rte_errno.h>
 #include <rte_branch_prediction.h>
 #include "rte_ethdev.h"
 #include "rte_flow_driver.h"
 #include "rte_flow.h"
 
+/**
+ * Flow elements description tables.
+ */
+struct rte_flow_desc_data {
+	const char *name;
+	size_t size;
+};
+
+/** Generate flow_item[] entry. */
+#define MK_FLOW_ITEM(t, s) \
+	[RTE_FLOW_ITEM_TYPE_ ## t] = { \
+		.name = # t, \
+		.size = s, \
+	}
+
+/** Information about known flow pattern items.
+ */
+static const struct rte_flow_desc_data rte_flow_desc_item[] = {
+	MK_FLOW_ITEM(END, 0),
+	MK_FLOW_ITEM(VOID, 0),
+	MK_FLOW_ITEM(INVERT, 0),
+	MK_FLOW_ITEM(ANY, sizeof(struct rte_flow_item_any)),
+	MK_FLOW_ITEM(PF, 0),
+	MK_FLOW_ITEM(VF, sizeof(struct rte_flow_item_vf)),
+	MK_FLOW_ITEM(PORT, sizeof(struct rte_flow_item_port)),
+	MK_FLOW_ITEM(RAW, sizeof(struct rte_flow_item_raw)), /* +pattern[] */
+	MK_FLOW_ITEM(ETH, sizeof(struct rte_flow_item_eth)),
+	MK_FLOW_ITEM(VLAN, sizeof(struct rte_flow_item_vlan)),
+	MK_FLOW_ITEM(IPV4, sizeof(struct rte_flow_item_ipv4)),
+	MK_FLOW_ITEM(IPV6, sizeof(struct rte_flow_item_ipv6)),
+	MK_FLOW_ITEM(ICMP, sizeof(struct rte_flow_item_icmp)),
+	MK_FLOW_ITEM(UDP, sizeof(struct rte_flow_item_udp)),
+	MK_FLOW_ITEM(TCP, sizeof(struct rte_flow_item_tcp)),
+	MK_FLOW_ITEM(SCTP, sizeof(struct rte_flow_item_sctp)),
+	MK_FLOW_ITEM(VXLAN, sizeof(struct rte_flow_item_vxlan)),
+	MK_FLOW_ITEM(MPLS, sizeof(struct rte_flow_item_mpls)),
+	MK_FLOW_ITEM(GRE, sizeof(struct rte_flow_item_gre)),
+};
+
+/** Generate flow_action[] entry. */
+#define MK_FLOW_ACTION(t, s) \
+	[RTE_FLOW_ACTION_TYPE_ ## t] = { \
+		.name = # t, \
+		.size = s, \
+	}
+
+/** Information about known flow actions. */
+static const struct rte_flow_desc_data rte_flow_desc_action[] = {
+	MK_FLOW_ACTION(END, 0),
+	MK_FLOW_ACTION(VOID, 0),
+	MK_FLOW_ACTION(PASSTHRU, 0),
+	MK_FLOW_ACTION(MARK, sizeof(struct rte_flow_action_mark)),
+	MK_FLOW_ACTION(FLAG, 0),
+	MK_FLOW_ACTION(QUEUE, sizeof(struct rte_flow_action_queue)),
+	MK_FLOW_ACTION(DROP, 0),
+	MK_FLOW_ACTION(COUNT, 0),
+	MK_FLOW_ACTION(DUP, sizeof(struct rte_flow_action_dup)),
+	MK_FLOW_ACTION(RSS, sizeof(struct rte_flow_action_rss)), /* +queue[] */
+	MK_FLOW_ACTION(PF, 0),
+	MK_FLOW_ACTION(VF, sizeof(struct rte_flow_action_vf)),
+};
+
 /* Get generic flow operations structure from a port.
  */
 const struct rte_flow_ops *
 rte_flow_ops_get(uint8_t port_id, struct rte_flow_error *error)
@@ -175,3 +240,163 @@ rte_flow_isolate(uint8_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
 }
+
+/** Compute storage space needed by item specification. */
+static void
+flow_item_spec_size(const struct rte_flow_item *item,
+		    size_t *size, size_t *pad)
+{
+	if (!item->spec)
+		goto empty;
+	switch (item->type) {
+		union {
+			const struct rte_flow_item_raw *raw;
+		} spec;
+
+	/* Not a fall-through */
+	case RTE_FLOW_ITEM_TYPE_RAW:
+		spec.raw = item->spec;
+		*size = offsetof(struct rte_flow_item_raw, pattern) +
+			spec.raw->length * sizeof(*spec.raw->pattern);
+		break;
+	default:
+empty:
+		*size = 0;
+		break;
+	}
+	*pad = RTE_ALIGN_CEIL(*size, sizeof(double)) - *size;
+}
+
+/** Compute storage space needed by action configuration. */
+static void
+flow_action_conf_size(const struct rte_flow_action *action,
+		      size_t *size, size_t *pad)
+{
+	if (!action->conf)
+		goto empty;
+	switch (action->type) {
+		union {
+			const struct rte_flow_action_rss *rss;
+		} conf;
+
+	/* Not a fall-through. */
+	case RTE_FLOW_ACTION_TYPE_RSS:
+		conf.rss = action->conf;
+		*size = offsetof(struct rte_flow_action_rss, queue) +
+			conf.rss->num * sizeof(*conf.rss->queue);
+		break;
+	default:
+empty:
+		*size = 0;
+		break;
+	}
+	*pad = RTE_ALIGN_CEIL(*size, sizeof(double)) - *size;
+}
+
+/** Store a full rte_flow description.
+ */
+size_t
+rte_flow_copy(struct rte_flow_desc *desc, size_t len,
+	      const struct rte_flow_attr *attr,
+	      const struct rte_flow_item *items,
+	      const struct rte_flow_action *actions)
+{
+	struct rte_flow_desc *fd = NULL;
+	size_t tmp;
+	size_t pad;
+	size_t off1 = 0;
+	size_t off2 = 0;
+	size_t size = 0;
+
+store:
+	if (items) {
+		const struct rte_flow_item *item;
+
+		item = items;
+		if (fd)
+			fd->items = (void *)&fd->data[off1];
+		do {
+			struct rte_flow_item *dst = NULL;
+
+			if ((size_t)item->type >=
+			    RTE_DIM(rte_flow_desc_item) ||
+			    !rte_flow_desc_item[item->type].name) {
+				rte_errno = ENOTSUP;
+				return 0;
+			}
+			if (fd)
+				dst = memcpy(fd->data + off1, item,
+					     sizeof(*item));
+			off1 += sizeof(*item);
+			flow_item_spec_size(item, &tmp, &pad);
+			if (item->spec) {
+				if (fd)
+					dst->spec = memcpy(fd->data + off2,
+							   item->spec, tmp);
+				off2 += tmp + pad;
+			}
+			if (item->last) {
+				if (fd)
+					dst->last = memcpy(fd->data + off2,
+							   item->last, tmp);
+				off2 += tmp + pad;
+			}
+			if (item->mask) {
+				if (fd)
+					dst->mask = memcpy(fd->data + off2,
+							   item->mask, tmp);
+				off2 += tmp + pad;
+			}
+			off2 = RTE_ALIGN_CEIL(off2, sizeof(double));
+		} while ((item++)->type != RTE_FLOW_ITEM_TYPE_END);
+		off1 = RTE_ALIGN_CEIL(off1, sizeof(double));
+	}
+	if (actions) {
+		const struct rte_flow_action *action;
+
+		action = actions;
+		if (fd)
+			fd->actions = (void *)&fd->data[off1];
+		do {
+			struct rte_flow_action *dst = NULL;
+
+			if ((size_t)action->type >=
+			    RTE_DIM(rte_flow_desc_action) ||
+			    !rte_flow_desc_action[action->type].name) {
+				rte_errno = ENOTSUP;
+				return 0;
+			}
+			if (fd)
+				dst = memcpy(fd->data + off1, action,
+					     sizeof(*action));
+			off1 += sizeof(*action);
+			flow_action_conf_size(action, &tmp, &pad);
+			if (action->conf) {
+				if (fd)
+					dst->conf = memcpy(fd->data + off2,
+							   action->conf, tmp);
+				off2 += tmp + pad;
+			}
+			off2 = RTE_ALIGN_CEIL(off2, sizeof(double));
+		} while ((action++)->type != RTE_FLOW_ACTION_TYPE_END);
+	}
+	if (fd != NULL)
+		return size;
+	off1 = RTE_ALIGN_CEIL(off1,
+			      sizeof(double));
+	tmp = RTE_ALIGN_CEIL(offsetof(struct rte_flow_desc, data),
+			     sizeof(double));
+	size = tmp + off1 + off2;
+	if (size > len)
+		return size;
+	fd = desc;
+	if (fd != NULL) {
+		*fd = (const struct rte_flow_desc) {
+			.size = size,
+			.attr = *attr,
+		};
+		tmp -= offsetof(struct rte_flow_desc, data);
+		off2 = tmp + off1;
+		off1 = tmp;
+		goto store;
+	}
+	return 0;
+}
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index cfbed30..6ac7cdb 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -1266,6 +1266,46 @@ rte_flow_query(uint8_t port_id,
 int
 rte_flow_isolate(uint8_t port_id, int set, struct rte_flow_error *error);
 
+/**
+ * Generic flow representation.
+ *
+ * This form is sufficient to describe an rte_flow independently from any
+ * PMD implementation and allows for replayability and identification.
+ */
+struct rte_flow_desc {
+	size_t size; /**< Allocated space including data[]. */
+	struct rte_flow_attr attr; /**< Attributes. */
+	struct rte_flow_item *items; /**< Items. */
+	struct rte_flow_action *actions; /**< Actions. */
+	uint8_t data[]; /**< Storage for items/actions. */
+};
+
+/**
+ * Copy an rte_flow rule description.
+ *
+ * @param[in] fd
+ *   Flow rule description.
+ * @param[in] len
+ *   Total size of allocated data for the flow description.
+ * @param[in] attr
+ *   Flow rule attributes.
+ * @param[in] items
+ *   Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *
+ * @return
+ *   If len is greater or equal to the size of the flow, the total size of
+ *   the flow description and its data.
+ *   If len is lower than the size of the flow, the number of bytes that
+ *   would have been written to desc had it been sufficient. Nothing is
+ *   written.
+ */
+size_t
+rte_flow_copy(struct rte_flow_desc *fd, size_t len,
+	      const struct rte_flow_attr *attr,
+	      const struct rte_flow_item *items,
+	      const struct rte_flow_action *actions);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.1.4