From mboxrd@z Thu Jan 1 00:00:00 1970
From: Cristian Dumitrescu
To: dev@dpdk.org
Cc: Churchill Khangar
Date: Tue, 16 Feb 2021 20:46:46 +0000
Message-Id: <20210216204646.24196-5-cristian.dumitrescu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210216204646.24196-1-cristian.dumitrescu@intel.com>
References: <20210216202127.22803-1-cristian.dumitrescu@intel.com>
 <20210216204646.24196-1-cristian.dumitrescu@intel.com>
Subject: [dpdk-dev] [PATCH v3 5/5] table: add wildcard match table type
List-Id: DPDK patches and discussions

Add the wildcard match/ACL table type for the SWX pipeline, which is
used under the hood by the table instruction.

Signed-off-by: Cristian Dumitrescu
Signed-off-by: Churchill Khangar
---
 doc/api/doxy-api-index.md           |   1 +
 examples/pipeline/obj.c             |   8 +
 lib/librte_table/meson.build        |   8 +-
 lib/librte_table/rte_swx_table_wm.c | 470 ++++++++++++++++++++++++++++
 lib/librte_table/rte_swx_table_wm.h |  27 ++
 lib/librte_table/version.map        |   3 +
 6 files changed, 515 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_table/rte_swx_table_wm.c
 create mode 100644 lib/librte_table/rte_swx_table_wm.h

diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 748514e24..94e9937be 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -187,6 +187,7 @@ The public API headers are grouped by topics:
   * SWX table:
     [table] (@ref rte_swx_table.h),
     [table_em] (@ref rte_swx_table_em.h)
+    [table_wm] (@ref rte_swx_table_wm.h)
 * [graph] (@ref rte_graph.h):
   [graph_worker] (@ref rte_graph_worker.h)
 * graph_nodes:
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 84bbcf2b2..7be61228b 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -415,6 +416,13 @@ pipeline_create(struct obj *obj, const char *name, int numa_node)
 	if (status)
 		goto error;
 
+	status = rte_swx_pipeline_table_type_register(p,
+		"wildcard",
+		RTE_SWX_TABLE_MATCH_WILDCARD,
+		&rte_swx_table_wildcard_match_ops);
+	if (status)
+		goto error;
+
 	/* Node allocation */
 	pipeline = calloc(1, sizeof(struct pipeline));
 	if (pipeline == NULL)
diff --git a/lib/librte_table/meson.build b/lib/librte_table/meson.build
index aa1e1d038..007ffe013 100644
--- a/lib/librte_table/meson.build
+++ b/lib/librte_table/meson.build
@@ -12,7 +12,9 @@ sources = files('rte_table_acl.c',
 		'rte_table_hash_lru.c',
 		'rte_table_array.c',
 		'rte_table_stub.c',
-		'rte_swx_table_em.c',)
+		'rte_swx_table_em.c',
+		'rte_swx_table_wm.c',
+		)
 headers = files('rte_table.h',
 		'rte_table_acl.h',
 		'rte_table_lpm.h',
@@ -24,7 +26,9 @@ headers = files('rte_table.h',
 		'rte_table_array.h',
 		'rte_table_stub.h',
 		'rte_swx_table.h',
-		'rte_swx_table_em.h',)
+		'rte_swx_table_em.h',
+		'rte_swx_table_wm.h',
+		)
 deps += ['mbuf', 'port', 'lpm', 'hash', 'acl']
 
 indirect_headers += files('rte_lru_x86.h',
diff --git a/lib/librte_table/rte_swx_table_wm.c b/lib/librte_table/rte_swx_table_wm.c
new file mode 100644
index 000000000..9924231b3
--- /dev/null
+++ b/lib/librte_table/rte_swx_table_wm.c
@@ -0,0 +1,470 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Intel Corporation
+ */
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include "rte_swx_table_wm.h"
+
+#ifndef RTE_SWX_TABLE_EM_USE_HUGE_PAGES
+#define RTE_SWX_TABLE_EM_USE_HUGE_PAGES 1
+#endif
+
+#if RTE_SWX_TABLE_EM_USE_HUGE_PAGES
+
+#include
+
+static void *
+env_malloc(size_t size, size_t alignment, int numa_node)
+{
+	return rte_zmalloc_socket(NULL, size, alignment, numa_node);
+}
+
+static void
+env_free(void *start, size_t size __rte_unused)
+{
+	rte_free(start);
+}
+
+#else
+
+#include
+
+static void *
+env_malloc(size_t size, size_t alignment __rte_unused, int numa_node)
+{
+	return numa_alloc_onnode(size, numa_node);
+}
+
+static void
+env_free(void *start, size_t size)
+{
+	numa_free(start, size);
+}
+
+#endif
+
+static char *get_unique_name(void)
+{
+	char *name;
+	uint64_t *tsc;
+
+	name = calloc(9, 1); /* 8 TSC bytes + NUL terminator. */
+	if (!name)
+		return NULL;
+
+	tsc = (uint64_t *) name;
+	*tsc = rte_get_tsc_cycles();
+	return name;
+}
+
+static uint32_t
+count_entries(struct rte_swx_table_entry_list *entries)
+{
+	struct rte_swx_table_entry *entry;
+	uint32_t n_entries = 0;
+
+	if (!entries)
+		return 0;
+
+	TAILQ_FOREACH(entry, entries, node)
+		n_entries++;
+
+	return n_entries;
+}
+
+static int
+acl_table_cfg_get(struct rte_acl_config *cfg, struct rte_swx_table_params *p)
+{
+	uint32_t byte_id = 0, field_id = 0;
+
+	/* cfg->num_categories. */
+	cfg->num_categories = 1;
+
+	/* cfg->defs and cfg->num_fields. */
+	for (byte_id = 0; byte_id < p->key_size; ) {
+		uint32_t field_size = field_id ? 4 : 1;
+		uint8_t byte = p->key_mask0 ? p->key_mask0[byte_id] : 0xFF;
+
+		if (!byte) {
+			byte_id++;
+			continue;
+		}
+
+		if (field_id == RTE_ACL_MAX_FIELDS)
+			return -1;
+
+		cfg->defs[field_id].type = RTE_ACL_FIELD_TYPE_BITMASK;
+		cfg->defs[field_id].size = field_size;
+		cfg->defs[field_id].field_index = field_id;
+		cfg->defs[field_id].input_index = field_id;
+		cfg->defs[field_id].offset = p->key_offset + byte_id;
+
+		field_id++;
+		byte_id += field_size;
+	}
+
+	if (!field_id)
+		return -1;
+
+	cfg->num_fields = field_id;
+
+	/* cfg->max_size. */
+	cfg->max_size = 0;
+
+	return 0;
+}
+
+static void
+acl_table_rule_field8(uint8_t *value,
+	uint8_t *mask,
+	uint8_t *key_mask0,
+	uint8_t *key_mask,
+	uint8_t *key,
+	uint32_t offset)
+{
+	uint8_t km0, km;
+
+	km0 = key_mask0 ? key_mask0[offset] : 0xFF;
+	km = key_mask ? key_mask[offset] : 0xFF;
+
+	*value = key[offset];
+	*mask = km0 & km;
+}
+
+static void
+acl_table_rule_field32(uint32_t *value,
+	uint32_t *mask,
+	uint8_t *key_mask0,
+	uint8_t *key_mask,
+	uint8_t *key,
+	uint32_t key_size,
+	uint32_t offset)
+{
+	uint32_t km0[4], km[4], k[4];
+	uint32_t byte_id;
+
+	/* Byte 0 = MSB, byte 3 = LSB. */
+	for (byte_id = 0; byte_id < 4; byte_id++) {
+		if (offset + byte_id >= key_size) {
+			km0[byte_id] = 0;
+			km[byte_id] = 0;
+			k[byte_id] = 0;
+			continue;
+		}
+
+		km0[byte_id] = key_mask0 ? key_mask0[offset + byte_id] : 0xFF;
+		km[byte_id] = key_mask ? key_mask[offset + byte_id] : 0xFF;
+		k[byte_id] = key[offset + byte_id];
+	}
+
+	*value = (k[0] << 24) |
+		 (k[1] << 16) |
+		 (k[2] << 8) |
+		 k[3];
+
+	*mask = ((km[0] & km0[0]) << 24) |
+		((km[1] & km0[1]) << 16) |
+		((km[2] & km0[2]) << 8) |
+		(km[3] & km0[3]);
+}
+
+RTE_ACL_RULE_DEF(acl_rule, RTE_ACL_MAX_FIELDS);
+
+static struct rte_acl_rule *
+acl_table_rules_get(struct rte_acl_config *acl_cfg,
+	struct rte_swx_table_params *p,
+	struct rte_swx_table_entry_list *entries,
+	uint32_t n_entries)
+{
+	struct rte_swx_table_entry *entry;
+	uint8_t *memory;
+	uint32_t acl_rule_size = RTE_ACL_RULE_SZ(acl_cfg->num_fields);
+	uint32_t n_fields = acl_cfg->num_fields;
+	uint32_t rule_id;
+
+	if (!n_entries)
+		return NULL;
+
+	memory = malloc(n_entries * acl_rule_size);
+	if (!memory)
+		return NULL;
+
+	rule_id = 0;
+	TAILQ_FOREACH(entry, entries, node) {
+		uint8_t *m = &memory[rule_id * acl_rule_size];
+		struct acl_rule *acl_rule = (struct acl_rule *)m;
+		uint32_t field_id;
+
+		acl_rule->data.category_mask = 1;
+		acl_rule->data.priority = RTE_ACL_MAX_PRIORITY -
+			entry->key_priority;
+		acl_rule->data.userdata = rule_id + 1;
+
+		for (field_id = 0; field_id < n_fields; field_id++) {
+			struct rte_acl_field *f = &acl_rule->field[field_id];
+			uint32_t size = acl_cfg->defs[field_id].size;
+			uint32_t offset = acl_cfg->defs[field_id].offset -
+				p->key_offset;
+
+			if (size == 1) {
+				uint8_t value, mask;
+
+				acl_table_rule_field8(&value,
+					&mask,
+					p->key_mask0,
+					entry->key_mask,
+					entry->key,
+					offset);
+
+				f->value.u8 = value;
+				f->mask_range.u8 = mask;
+			} else {
+				uint32_t value, mask;
+
+				acl_table_rule_field32(&value,
+					&mask,
+					p->key_mask0,
+					entry->key_mask,
+					entry->key,
+					p->key_size,
+					offset);
+
+				f->value.u32 = value;
+				f->mask_range.u32 = mask;
+			}
+		}
+
+		rule_id++;
+	}
+
+	return (struct rte_acl_rule *)memory;
+}
+
+/* When the table to be created has no rules, the expected behavior is to always
+ * get lookup miss for any input key. To achieve this, we add a single bogus
+ * rule to the table with the rule user data set to 0, i.e. the value returned
+ * when lookup miss takes place. Whether lookup hit (the bogus rule is hit) or
+ * miss, a user data of 0 is returned, which for the ACL library is equivalent
+ * to lookup miss.
+ */
+static struct rte_acl_rule *
+acl_table_rules_default_get(struct rte_acl_config *acl_cfg)
+{
+	struct rte_acl_rule *acl_rule;
+	uint32_t acl_rule_size = RTE_ACL_RULE_SZ(acl_cfg->num_fields);
+
+	acl_rule = calloc(1, acl_rule_size);
+	if (!acl_rule)
+		return NULL;
+
+	acl_rule->data.category_mask = 1;
+	acl_rule->data.priority = RTE_ACL_MAX_PRIORITY;
+	acl_rule->data.userdata = 0;
+
+	memset(&acl_rule[1], 0xFF, acl_rule_size - sizeof(struct rte_acl_rule));
+
+	return acl_rule;
+}
+
+static struct rte_acl_ctx *
+acl_table_create(struct rte_swx_table_params *params,
+	struct rte_swx_table_entry_list *entries,
+	uint32_t n_entries,
+	int numa_node)
+{
+	struct rte_acl_param acl_params = {0};
+	struct rte_acl_config acl_cfg = {0};
+	struct rte_acl_ctx *acl_ctx = NULL;
+	struct rte_acl_rule *acl_rules = NULL;
+	char *name = NULL;
+	int status = 0;
+
+	/* ACL config data structures. */
+	name = get_unique_name();
+	if (!name) {
+		status = -1;
+		goto free_resources;
+	}
+
+	status = acl_table_cfg_get(&acl_cfg, params);
+	if (status)
+		goto free_resources;
+
+	acl_rules = n_entries ?
+		acl_table_rules_get(&acl_cfg, params, entries, n_entries) :
+		acl_table_rules_default_get(&acl_cfg);
+	if (!acl_rules) {
+		status = -1;
+		goto free_resources;
+	}
+
+	n_entries = n_entries ? n_entries : 1;
+
+	/* ACL create. */
+	acl_params.name = name;
+	acl_params.socket_id = numa_node;
+	acl_params.rule_size = RTE_ACL_RULE_SZ(acl_cfg.num_fields);
+	acl_params.max_rule_num = n_entries;
+
+	acl_ctx = rte_acl_create(&acl_params);
+	if (!acl_ctx) {
+		status = -1;
+		goto free_resources;
+	}
+
+	/* ACL add rules. */
+	status = rte_acl_add_rules(acl_ctx, acl_rules, n_entries);
+	if (status)
+		goto free_resources;
+
+	/* ACL build. */
+	status = rte_acl_build(acl_ctx, &acl_cfg);
+
+free_resources:
+	if (status && acl_ctx)
+		rte_acl_free(acl_ctx);
+
+	free(acl_rules);
+
+	free(name);
+
+	return status ? NULL : acl_ctx;
+}
+
+static void
+entry_data_copy(uint8_t *data,
+	struct rte_swx_table_entry_list *entries,
+	uint32_t n_entries,
+	uint32_t entry_data_size)
+{
+	struct rte_swx_table_entry *entry;
+	uint32_t i = 0;
+
+	if (!n_entries)
+		return;
+
+	TAILQ_FOREACH(entry, entries, node) {
+		uint64_t *d = (uint64_t *)&data[i * entry_data_size];
+
+		d[0] = entry->action_id;
+		memcpy(&d[1], entry->action_data, entry_data_size - 8);
+
+		i++;
+	}
+}
+
+struct table {
+	struct rte_acl_ctx *acl_ctx;
+	uint8_t *data;
+	size_t total_size;
+	uint32_t entry_data_size;
+};
+
+static void
+table_free(void *table)
+{
+	struct table *t = table;
+
+	if (!t)
+		return;
+
+	if (t->acl_ctx)
+		rte_acl_free(t->acl_ctx);
+	env_free(t, t->total_size);
+}
+
+static void *
+table_create(struct rte_swx_table_params *params,
+	struct rte_swx_table_entry_list *entries,
+	const char *args __rte_unused,
+	int numa_node)
+{
+	struct table *t = NULL;
+	size_t meta_sz, data_sz, total_size;
+	uint32_t entry_data_size;
+	uint32_t n_entries = count_entries(entries);
+
+	/* Check input arguments. */
+	if (!params || !params->key_size)
+		goto error;
+
+	/* Memory allocation and initialization. */
+	entry_data_size = 8 + params->action_data_size;
+	meta_sz = sizeof(struct table);
+	data_sz = n_entries * entry_data_size;
+	total_size = meta_sz + data_sz;
+
+	t = env_malloc(total_size, RTE_CACHE_LINE_SIZE, numa_node);
+	if (!t)
+		goto error;
+
+	memset(t, 0, total_size);
+	t->entry_data_size = entry_data_size;
+	t->total_size = total_size;
+	t->data = (uint8_t *)&t[1];
+
+	t->acl_ctx = acl_table_create(params, entries, n_entries, numa_node);
+	if (!t->acl_ctx)
+		goto error;
+
+	entry_data_copy(t->data, entries, n_entries, entry_data_size);
+
+	return t;
+
+error:
+	table_free(t);
+	return NULL;
+}
+
+struct mailbox {
+
+};
+
+static uint64_t
+table_mailbox_size_get(void)
+{
+	return sizeof(struct mailbox);
+}
+
+static int
+table_lookup(void *table,
+	void *mailbox __rte_unused,
+	const uint8_t **key,
+	uint64_t *action_id,
+	uint8_t **action_data,
+	int *hit)
+{
+	struct table *t = table;
+	uint8_t *data;
+	uint32_t user_data;
+
+	rte_acl_classify(t->acl_ctx, key, &user_data, 1, 1);
+	if (!user_data) {
+		*hit = 0;
+		return 1;
+	}
+
+	data = &t->data[(user_data - 1) * t->entry_data_size];
+	*action_id = ((uint64_t *)data)[0];
+	*action_data = &data[8];
+	*hit = 1;
+	return 1;
+}
+
+struct rte_swx_table_ops rte_swx_table_wildcard_match_ops = {
+	.footprint_get = NULL,
+	.mailbox_size_get = table_mailbox_size_get,
+	.create = table_create,
+	.add = NULL,
+	.del = NULL,
+	.lkp = (rte_swx_table_lookup_t)table_lookup,
+	.free = table_free,
+};
diff --git a/lib/librte_table/rte_swx_table_wm.h b/lib/librte_table/rte_swx_table_wm.h
new file mode 100644
index 000000000..a716536ca
--- /dev/null
+++ b/lib/librte_table/rte_swx_table_wm.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+#ifndef __INCLUDE_RTE_SWX_TABLE_WM_H__
+#define __INCLUDE_RTE_SWX_TABLE_WM_H__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE SWX Wildcard Match Table
+ */
+
+#include
+
+#include
+
+/** Wildcard match table operations. */
+extern struct rte_swx_table_ops rte_swx_table_wildcard_match_ops;
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
diff --git a/lib/librte_table/version.map b/lib/librte_table/version.map
index bea2252a4..eb0291ac4 100644
--- a/lib/librte_table/version.map
+++ b/lib/librte_table/version.map
@@ -25,4 +25,7 @@ EXPERIMENTAL {
 	# added in 20.11
 	rte_swx_table_exact_match_ops;
 	rte_swx_table_exact_match_unoptimized_ops;
+
+	# added in 21.05
+	rte_swx_table_wildcard_match_ops;
 };
-- 
2.17.1