* [dpdk-dev] [RFC 17.08] Flow classification library
@ 2017-04-20 18:54 Ferruh Yigit
  2017-04-20 18:54 ` [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library Ferruh Yigit
                   ` (2 more replies)
  0 siblings, 3 replies; 145+ messages in thread
From: Ferruh Yigit @ 2017-04-20 18:54 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, John McNamara, Maryam Tahhan

DPDK works with packets, but some network administration tools work with
flow information instead. This library is proposed to provide helper APIs
that convert packet-based information into flow records.

The library header file contains more detailed comments on how the library
works and on the provided APIs.

Converting packets to flows causes a performance drop, which is why the
conversion can be enabled and disabled dynamically by the application.

The initial goal is to provide support for an IPFIX metering process, but
the library is planned to be as generic as possible. The flow information
provided by this library is not yet enough to implement the full set of
IPFIX features; this is intended as a first step.

It is possible to define flows with various flow keys, but currently only
one, fairly generic, flow type is defined in the library; fine-grained flow
analysis is offloaded to the application. The library can be extended with
other flow types.

It would be beneficial to shape this library to cover more use cases, so
please feel free to comment on other possible use cases and desired
functionality.
Thanks,
ferruh

cc: John McNamara <john.mcnamara@intel.com>
cc: Maryam Tahhan <maryam.tahhan@intel.com>

Ferruh Yigit (1):
  flow_classify: add librte_flow_classify library

 config/common_base                           |   5 +
 doc/api/doxy-api-index.md                    |   1 +
 doc/api/doxy-api.conf                        |   1 +
 doc/guides/rel_notes/release_17_05.rst       |   1 +
 lib/Makefile                                 |   2 +
 lib/librte_flow_classify/Makefile            |  50 +++++
 lib/librte_flow_classify/rte_flow_classify.c |  34 ++++
 lib/librte_flow_classify/rte_flow_classify.h | 202 +++++++++++++++++++++
 .../rte_flow_classify_version.map            |  10 +
 9 files changed, 306 insertions(+)
 create mode 100644 lib/librte_flow_classify/Makefile
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.c
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.h
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map

--
2.9.3

^ permalink raw reply	[flat|nested] 145+ messages in thread
* [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library
  2017-04-20 18:54 [dpdk-dev] [RFC 17.08] Flow classification library Ferruh Yigit
@ 2017-04-20 18:54 ` Ferruh Yigit
  2017-05-04 11:35   ` Mcnamara, John
  2017-05-17 14:54   ` Ananyev, Konstantin
  2017-04-21 10:38 ` [dpdk-dev] [RFC 17.08] Flow classification library Gaëtan Rivet
  2017-05-18 18:12 ` [dpdk-dev] [RFC v2] " Ferruh Yigit
  2 siblings, 2 replies; 145+ messages in thread
From: Ferruh Yigit @ 2017-04-20 18:54 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit, John McNamara, Maryam Tahhan

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
 config/common_base                           |   5 +
 doc/api/doxy-api-index.md                    |   1 +
 doc/api/doxy-api.conf                        |   1 +
 doc/guides/rel_notes/release_17_05.rst       |   1 +
 lib/Makefile                                 |   2 +
 lib/librte_flow_classify/Makefile            |  50 +++++
 lib/librte_flow_classify/rte_flow_classify.c |  34 ++++
 lib/librte_flow_classify/rte_flow_classify.h | 202 +++++++++++++++++++++
 .../rte_flow_classify_version.map            |  10 +
 9 files changed, 306 insertions(+)
 create mode 100644 lib/librte_flow_classify/Makefile
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.c
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.h
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map

diff --git a/config/common_base b/config/common_base
index 412ec3f..c05a411 100644
--- a/config/common_base
+++ b/config/common_base
@@ -634,6 +634,11 @@ CONFIG_RTE_LIBRTE_IP_FRAG_TBL_STAT=n
 CONFIG_RTE_LIBRTE_METER=y
 
 #
+# Compile librte_classify
+#
+CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y
+
+#
 # Compile librte_sched
 #
 CONFIG_RTE_LIBRTE_SCHED=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a26846a..7f0be03 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -97,6 +97,7 @@ There are many libraries, so their headers may be grouped by topics:
   [LPM IPv4 route] (@ref rte_lpm.h),
   [LPM IPv6 route] (@ref rte_lpm6.h),
   [ACL]            (@ref rte_acl.h),
+  [flow_classify]  (@ref rte_flow_classify.h),
   [EFD]            (@ref rte_efd.h)
 
 - **QoS**:
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index 97fb551..9eec10c 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -45,6 +45,7 @@ INPUT = doc/api/doxy-api-index.md \
                           lib/librte_efd \
                           lib/librte_ether \
                           lib/librte_eventdev \
+                          lib/librte_flow_classify \
                           lib/librte_hash \
                           lib/librte_ip_frag \
                           lib/librte_jobstats \
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 25e7144..89520e4 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -507,6 +507,7 @@ The libraries prepended with a plus sign were incremented in this version.
    librte_distributor.so.1
  + librte_eal.so.4
    librte_ethdev.so.6
+   librte_flow_classify.so.1
    librte_hash.so.2
    librte_ip_frag.so.1
    librte_jobstats.so.1
diff --git a/lib/Makefile b/lib/Makefile
index 07e1fd0..e63cd61 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -80,6 +80,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power
 DEPDIRS-librte_power := librte_eal
 DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter
 DEPDIRS-librte_meter := librte_eal
+DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify
+DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net
 DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched
 DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net
 DEPDIRS-librte_sched += librte_timer
diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile
new file mode 100644
index 0000000..c57e9a3
--- /dev/null
+++ b/lib/librte_flow_classify/Makefile
@@ -0,0 +1,50 @@
+# BSD LICENSE
+#
+#   Copyright(c) 2017 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_flow_classify.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+
+EXPORT_MAP := rte_flow_classify_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) := rte_flow_classify.c
+
+# install this header file
+SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c
new file mode 100644
index 0000000..e6f724e
--- /dev/null
+++ b/lib/librte_flow_classify/rte_flow_classify.c
@@ -0,0 +1,34 @@
+/*-
+ * BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "rte_flow_classify.h"
diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h
new file mode 100644
index 0000000..a52394f
--- /dev/null
+++ b/lib/librte_flow_classify/rte_flow_classify.h
@@ -0,0 +1,202 @@
+/*-
+ * BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_FLOW_CLASSIFY_H_
+#define _RTE_FLOW_CLASSIFY_H_
+
+/**
+ * @file
+ *
+ * RTE Flow Classify Library
+ *
+ * This library provides flow record information with some measured properties.
+ *
+ * Application can select variety of flow types based on various flow keys.
+ *
+ * Library only maintains flow records between rte_flow_classify_stats_get()
+ * calls and with a maximum limit.
+ *
+ * Provided flow record will be linked list rte_flow_classify_stat_xxx
+ * structure.
+ *
+ * Library is responsible from allocating and freeing memory for flow record
+ * table. Previous table freed with next rte_flow_classify_stats_get() call and
+ * all tables are freed with rte_flow_classify_type_reset() or
+ * rte_flow_classify_type_set(x, 0). Memory for table allocated on the fly while
+ * creating records.
+ *
+ * A rte_flow_classify_type_set() with a valid type will register Rx/Tx
+ * callbacks and start filling flow record table.
+ * With rte_flow_classify_stats_get(), pointer sent to caller and meanwhile
+ * library continues collecting records.
+ *
+ * Usage:
+ * - application calls rte_flow_classify_type_set() for a device
+ * - library creates Rx/Tx callbacks for packets and start filling flow table
+ *   for that type of flow (currently only one flow type supported)
+ * - application calls rte_flow_classify_stats_get() to get pointer to linked
+ *   listed flow table. Library assigns this pointer to another value and keeps
+ *   collecting flow data. In next rte_flow_classify_stats_get(), library first
+ *   free the previous table, and pass current table to the application, keep
+ *   collecting data.
+ * - application calls rte_flow_classify_type_reset(), library unregisters the
+ *   callbacks and free all flow table data.
+ *
+ */
+
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_ip.h>
+#include <rte_tcp.h>
+#include <rte_udp.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Types of classification supported.
+ */
+enum rte_flow_classify_type {
+	RTE_FLOW_CLASSIFY_TYPE_GENERIC = (1 << 0),
+	RTE_FLOW_CLASSIFY_TYPE_MAX,
+};
+
+#define RTE_FLOW_CLASSIFY_TYPE_MASK (((RTE_FLOW_CLASSIFY_TYPE_MAX - 1) << 1) - 1)
+
+/**
+ * Global configuration struct
+ */
+struct rte_flow_classify_config {
+	uint32_t type; /* bitwise enum rte_flow_classify_type values */
+	void *flow_table_prev;
+	uint32_t flow_table_prev_item_count;
+	void *flow_table_current;
+	uint32_t flow_table_current_item_count;
+} rte_flow_classify_config[RTE_MAX_ETHPORTS];
+
+#define RTE_FLOW_CLASSIFY_STAT_MAX UINT16_MAX
+
+/**
+ * Classification stats data struct
+ */
+struct rte_flow_classify_stat_generic {
+	struct rte_flow_classify_stat_generic *next;
+	uint32_t id;
+	uint64_t timestamp;
+
+	struct ether_addr src_mac;
+	struct ether_addr dst_mac;
+	uint32_t src_ipv4;
+	uint32_t dst_ipv4;
+	uint8_t l3_protocol_id;
+	uint16_t src_port;
+	uint16_t dst_port;
+
+	uint64_t packet_count;
+	uint64_t packet_size; /* bytes */
+};
+
+/**
+* Get flow types to flow_classify
+*
+* @param port_id
+*   Ethernet device port id to get classification.
+* @param type
+*   bitmap of enum rte_flow_classify_type values enabled for classification
+* @return
+*   - (0) if successful.
+*   - (-EINVAL) on failure.
+*/
+int
+rte_flow_classify_type_get(uint8_t port_id, uint32_t *type);
+
+/**
+* Set flow types to flow_classify
+*
+* If the type list is zero, no classification done.
+*
+* @param port_id
+*   Ethernet device port_id to set classification.
+* @param type
+*   bitmap of enum rte_flow_classify_type values to enable classification
+* @return
+*   - (0) if successful.
+*   - (-EINVAL) on failure.
+*/
+int
+rte_flow_classify_type_set(uint8_t port_id, uint32_t type);
+
+/**
+* Disable flow classification for device
+*
+* @param port_id
+*   Ethernet device port id to reset classification.
+* @return
+*   - (0) if successful.
+*   - (-EINVAL) on failure.
+*/
+int
+rte_flow_classify_type_reset(uint8_t port_id);
+
+/**
+* Get classified results
+*
+* @param port_id
+*   Ethernet device port id to get flow stats
+* @param stats
+*   void * to linked list flow data
+* @return
+*   - (0) if successful.
+*   - (-EINVAL) on failure.
+*/
+int
+rte_flow_classify_stats_get(uint8_t port_id, void *stats);
+
+/**
+* Reset classified results
+*
+* @param port_id
+*   Ethernet device port id to reset flow stats
+* @return
+*   - (0) if successful.
+*   - (-EINVAL) on failure.
+*/
+int
+rte_flow_classify_stats_reset(uint8_t port_id);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_FLOW_CLASSIFY_H_ */
diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map
new file mode 100644
index 0000000..0f396ae
--- /dev/null
+++ b/lib/librte_flow_classify/rte_flow_classify_version.map
@@ -0,0 +1,10 @@
+DPDK_17.08 {
+	global:
+
+	rte_flow_classify_stats_get;
+	rte_flow_classify_type_get;
+	rte_flow_classify_type_reset;
+	rte_flow_classify_type_set;
+
+	local: *;
+};
--
2.9.3

^ permalink raw reply	[flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library
  2017-04-20 18:54 ` [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library Ferruh Yigit
@ 2017-05-04 11:35   ` Mcnamara, John
  2017-05-16 22:19     ` Thomas Monjalon
  2017-05-17 14:54   ` Ananyev, Konstantin
  1 sibling, 1 reply; 145+ messages in thread
From: Mcnamara, John @ 2017-05-04 11:35 UTC (permalink / raw)
To: Yigit, Ferruh, dev; +Cc: Tahhan, Maryam, techboard

> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Thursday, April 20, 2017 7:55 PM
> To: dev@dpdk.org
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Tahhan, Maryam <maryam.tahhan@intel.com>
> Subject: [RFC 17.08] flow_classify: add librte_flow_classify library

CCing techboard@dpdk.org since we would like this RFC added to the agenda
for discussion at the next Tech Board meeting.

John

^ permalink raw reply	[flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library
  2017-05-04 11:35   ` Mcnamara, John
@ 2017-05-16 22:19     ` Thomas Monjalon
  0 siblings, 0 replies; 145+ messages in thread
From: Thomas Monjalon @ 2017-05-16 22:19 UTC (permalink / raw)
To: jblunck; +Cc: dev, Mcnamara, John, Yigit, Ferruh, Tahhan, Maryam, techboard

04/05/2017 13:35, Mcnamara, John:
>
> > -----Original Message-----
> > From: Yigit, Ferruh
> > Sent: Thursday, April 20, 2017 7:55 PM
> > To: dev@dpdk.org
> > Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John
> > <john.mcnamara@intel.com>; Tahhan, Maryam <maryam.tahhan@intel.com>
> > Subject: [RFC 17.08] flow_classify: add librte_flow_classify library
>
> CCing techboard@dpdk.org since we would like this RFC added to the agenda
> for discussion at the next Tech Board meeting.

Please Jan, could you add it to the agenda?

^ permalink raw reply	[flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library
  2017-04-20 18:54 ` [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library Ferruh Yigit
  2017-05-04 11:35   ` Mcnamara, John
@ 2017-05-17 14:54   ` Ananyev, Konstantin
  2017-05-17 15:37     ` Ferruh Yigit
  2017-05-17 16:02     ` Ferruh Yigit
  1 sibling, 2 replies; 145+ messages in thread
From: Ananyev, Konstantin @ 2017-05-17 14:54 UTC (permalink / raw)
To: Yigit, Ferruh, dev; +Cc: Yigit, Ferruh, Mcnamara, John, Tahhan, Maryam

Hi Ferruh,

Please see my comments/questions below.
Thanks
Konstantin

> +
> +/**
> + * @file
> + *
> + * RTE Flow Classify Library
> + *
> + * This library provides flow record information with some measured properties.
> + *
> + * Application can select variety of flow types based on various flow keys.
> + *
> + * Library only maintains flow records between rte_flow_classify_stats_get()
> + * calls and with a maximum limit.
> + *
> + * Provided flow record will be linked list rte_flow_classify_stat_xxx
> + * structure.
> + *
> + * Library is responsible from allocating and freeing memory for flow record
> + * table. Previous table freed with next rte_flow_classify_stats_get() call and
> + * all tables are freed with rte_flow_classify_type_reset() or
> + * rte_flow_classify_type_set(x, 0). Memory for table allocated on the fly while
> + * creating records.
> + *
> + * A rte_flow_classify_type_set() with a valid type will register Rx/Tx
> + * callbacks and start filling flow record table.
> + * With rte_flow_classify_stats_get(), pointer sent to caller and meanwhile
> + * library continues collecting records.
> + *
> + * Usage:
> + * - application calls rte_flow_classify_type_set() for a device
> + * - library creates Rx/Tx callbacks for packets and start filling flow table

Is it necessary to use an RX callback here?
Can the library provide an API like collect(port_id, input_mbuf[], pkt_num)
instead? So the user would have a choice: either set up a callback or call
collect() directly.

> + *   for that type of flow (currently only one flow type supported)
> + * - application calls rte_flow_classify_stats_get() to get pointer to linked
> + *   listed flow table. Library assigns this pointer to another value and keeps
> + *   collecting flow data. In next rte_flow_classify_stats_get(), library first
> + *   free the previous table, and pass current table to the application, keep
> + *   collecting data.

Ok, but that means that you can't use stats_get() for the same type
from 2 different threads without explicit synchronization?

> + * - application calls rte_flow_classify_type_reset(), library unregisters the
> + *   callbacks and free all flow table data.
> + *
> + */
> +
> +enum rte_flow_classify_type {
> +	RTE_FLOW_CLASSIFY_TYPE_GENERIC = (1 << 0),
> +	RTE_FLOW_CLASSIFY_TYPE_MAX,
> +};
> +
> +#define RTE_FLOW_CLASSIFY_TYPE_MASK (((RTE_FLOW_CLASSIFY_TYPE_MAX - 1) << 1) - 1)
> +
> +/**
> + * Global configuration struct
> + */
> +struct rte_flow_classify_config {
> +	uint32_t type; /* bitwise enum rte_flow_classify_type values */
> +	void *flow_table_prev;
> +	uint32_t flow_table_prev_item_count;
> +	void *flow_table_current;
> +	uint32_t flow_table_current_item_count;
> +} rte_flow_classify_config[RTE_MAX_ETHPORTS];
> +
> +#define RTE_FLOW_CLASSIFY_STAT_MAX UINT16_MAX
> +
> +/**
> + * Classification stats data struct
> + */
> +struct rte_flow_classify_stat_generic {
> +	struct rte_flow_classify_stat_generic *next;
> +	uint32_t id;
> +	uint64_t timestamp;
> +
> +	struct ether_addr src_mac;
> +	struct ether_addr dst_mac;
> +	uint32_t src_ipv4;
> +	uint32_t dst_ipv4;
> +	uint8_t l3_protocol_id;
> +	uint16_t src_port;
> +	uint16_t dst_port;
> +
> +	uint64_t packet_count;
> +	uint64_t packet_size; /* bytes */
> +};

Ok, so if I understood things right, for the generic type it will always
classify all incoming packets by:
<src_mac, dst_mac, src_ipv4, dst_ipv4, l3_protocol_id, src_port, dst_port>
all by absolute values, and represent results as a linked list.
Is that correct, or did I misunderstand your intentions here?

If so, then I see several disadvantages here:

1) It is really hard to predict what kind of stats is required for a
particular case. Let's say some people would like to collect stats by
<dst_mac, vlan>, others by <dst_ipv4, subnet_mask>, third ones by
<l4 dst_port> and so on. Having just one hardcoded filter doesn't seem very
flexible/usable. I think you need to find a way to allow the user to define
what type of filter they want to apply.
I think it was discussed already, but I still wonder why rte_flow_item
can't be used for that approach?

2) Even one 10G port can produce ~14M rte_flow_classify_stat_generic
entries in one second (if all packets have different ipv4/ports or so).
Accessing/retrieving items over a linked list with 14M entries doesn't
sound like a good idea. I'd say we need some better way to retrieve/present
the collected data.

^ permalink raw reply	[flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library
  2017-05-17 14:54   ` Ananyev, Konstantin
@ 2017-05-17 15:37     ` Ferruh Yigit
  2017-05-17 16:10       ` Ananyev, Konstantin
  2017-05-17 16:02     ` Ferruh Yigit
  1 sibling, 1 reply; 145+ messages in thread
From: Ferruh Yigit @ 2017-05-17 15:37 UTC (permalink / raw)
To: Ananyev, Konstantin, dev; +Cc: Mcnamara, John, Tahhan, Maryam

On 5/17/2017 3:54 PM, Ananyev, Konstantin wrote:
> Hi Ferruh,
> Please see my comments/questions below.
> Thanks
> Konstantin
>
>> +
>> +/**
>> + * @file
>> + *
>> + * RTE Flow Classify Library
>> + *
>> + * This library provides flow record information with some measured properties.
>> + *
>> + * Application can select variety of flow types based on various flow keys.
>> + *
>> + * Library only maintains flow records between rte_flow_classify_stats_get()
>> + * calls and with a maximum limit.
>> + *
>> + * Provided flow record will be linked list rte_flow_classify_stat_xxx
>> + * structure.
>> + *
>> + * Library is responsible from allocating and freeing memory for flow record
>> + * table. Previous table freed with next rte_flow_classify_stats_get() call and
>> + * all tables are freed with rte_flow_classify_type_reset() or
>> + * rte_flow_classify_type_set(x, 0). Memory for table allocated on the fly while
>> + * creating records.
>> + *
>> + * A rte_flow_classify_type_set() with a valid type will register Rx/Tx
>> + * callbacks and start filling flow record table.
>> + * With rte_flow_classify_stats_get(), pointer sent to caller and meanwhile
>> + * library continues collecting records.
>> + *
>> + * Usage:
>> + * - application calls rte_flow_classify_type_set() for a device
>> + * - library creates Rx/Tx callbacks for packets and start filling flow table
>
> Is it necessary to use an RX callback here?
> Can the library provide an API like collect(port_id, input_mbuf[], pkt_num)
> instead? So the user would have a choice: either set up a callback or call
> collect() directly.

This was also a comment from Morten, I will update the RFC to use a direct
API call.

>
>> + *   for that type of flow (currently only one flow type supported)
>> + * - application calls rte_flow_classify_stats_get() to get pointer to linked
>> + *   listed flow table. Library assigns this pointer to another value and keeps
>> + *   collecting flow data. In next rte_flow_classify_stats_get(), library first
>> + *   free the previous table, and pass current table to the application, keep
>> + *   collecting data.
>
> Ok, but that means that you can't use stats_get() for the same type
> from 2 different threads without explicit synchronization?

Correct.
And multiple threads shouldn't be calling this API. It doesn't store
previous flow data, so multiple threads calling it can each get only a
piece of the information. Do you see any use case where multiple threads
would call this API?

>
>> + * - application calls rte_flow_classify_type_reset(), library unregisters the
>> + *   callbacks and free all flow table data.
>> + *
>> + */
>> +
>> +enum rte_flow_classify_type {
>> +	RTE_FLOW_CLASSIFY_TYPE_GENERIC = (1 << 0),
>> +	RTE_FLOW_CLASSIFY_TYPE_MAX,
>> +};
>> +
>> +#define RTE_FLOW_CLASSIFY_TYPE_MASK (((RTE_FLOW_CLASSIFY_TYPE_MAX - 1) << 1) - 1)
>> +
>> +/**
>> + * Global configuration struct
>> + */
>> +struct rte_flow_classify_config {
>> +	uint32_t type; /* bitwise enum rte_flow_classify_type values */
>> +	void *flow_table_prev;
>> +	uint32_t flow_table_prev_item_count;
>> +	void *flow_table_current;
>> +	uint32_t flow_table_current_item_count;
>> +} rte_flow_classify_config[RTE_MAX_ETHPORTS];
>> +
>> +#define RTE_FLOW_CLASSIFY_STAT_MAX UINT16_MAX
>> +
>> +/**
>> + * Classification stats data struct
>> + */
>> +struct rte_flow_classify_stat_generic {
>> +	struct rte_flow_classify_stat_generic *next;
>> +	uint32_t id;
>> +	uint64_t timestamp;
>> +
>> +	struct ether_addr src_mac;
>> +	struct ether_addr dst_mac;
>> +	uint32_t src_ipv4;
>> +	uint32_t dst_ipv4;
>> +	uint8_t l3_protocol_id;
>> +	uint16_t src_port;
>> +	uint16_t dst_port;
>> +
>> +	uint64_t packet_count;
>> +	uint64_t packet_size; /* bytes */
>> +};
>
> Ok, so if I understood things right, for the generic type it will always
> classify all incoming packets by:
> <src_mac, dst_mac, src_ipv4, dst_ipv4, l3_protocol_id, src_port, dst_port>
> all by absolute values, and represent results as a linked list.
> Is that correct, or did I misunderstand your intentions here?

Correct.

> If so, then I see several disadvantages here:
> 1) It is really hard to predict what kind of stats is required for a
> particular case. Let's say some people would like to collect stats by
> <dst_mac, vlan>, others by <dst_ipv4, subnet_mask>, third ones by
> <l4 dst_port> and so on. Having just one hardcoded filter doesn't seem
> very flexible/usable. I think you need to find a way to allow the user to
> define what type of filter they want to apply.

The flow type should be provided by applications according to their needs,
and needs to be implemented in this library. The generic one will be the
only one implemented in the first version:

enum rte_flow_classify_type {
	RTE_FLOW_CLASSIFY_TYPE_GENERIC = (1 << 0),
	RTE_FLOW_CLASSIFY_TYPE_MAX,
};

The app should set the type first via the API:

rte_flow_classify_type_set(uint8_t port_id, uint32_t type);

And the stats for this type will be returned; because the returned data can
be a different type of struct per flow type, it is returned as void *:

rte_flow_classify_stats_get(uint8_t port_id, void *stats);

> I think it was discussed already, but I still wonder why rte_flow_item
> can't be used for that approach?
>
> 2) Even one 10G port can produce ~14M rte_flow_classify_stat_generic
> entries in one second (if all packets have different ipv4/ports or so).
> Accessing/retrieving items over a linked list with 14M entries doesn't
> sound like a good idea. I'd say we need some better way to
> retrieve/present the collected data.

This is to keep flows, so I expect the numbers to be smaller than the
packet numbers.
It is possible to use fixed-size arrays for this. But I think it is easy to
make this switch later; I would like to see the performance effect before
doing so. Do you think it is OK to start like this and make that decision
during implementation?

^ permalink raw reply	[flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library
  2017-05-17 15:37     ` Ferruh Yigit
@ 2017-05-17 16:10       ` Ananyev, Konstantin
  2017-05-18 12:12         ` Ferruh Yigit
  0 siblings, 1 reply; 145+ messages in thread
From: Ananyev, Konstantin @ 2017-05-17 16:10 UTC (permalink / raw)
To: Yigit, Ferruh, dev; +Cc: Mcnamara, John, Tahhan, Maryam

> > Hi Ferruh,
> > Please see my comments/questions below.
> > Thanks
> > Konstantin
> >
> >> +
> >> +/**
> >> + * @file
> >> + *
> >> + * RTE Flow Classify Library
> >> + *
> >> + * This library provides flow record information with some measured properties.
> >> + *
> >> + * Application can select variety of flow types based on various flow keys.
> >> + *
> >> + * Library only maintains flow records between rte_flow_classify_stats_get()
> >> + * calls and with a maximum limit.
> >> + *
> >> + * Provided flow record will be linked list rte_flow_classify_stat_xxx
> >> + * structure.
> >> + *
> >> + * Library is responsible from allocating and freeing memory for flow record
> >> + * table. Previous table freed with next rte_flow_classify_stats_get() call and
> >> + * all tables are freed with rte_flow_classify_type_reset() or
> >> + * rte_flow_classify_type_set(x, 0). Memory for table allocated on the fly while
> >> + * creating records.
> >> + *
> >> + * A rte_flow_classify_type_set() with a valid type will register Rx/Tx
> >> + * callbacks and start filling flow record table.
> >> + * With rte_flow_classify_stats_get(), pointer sent to caller and meanwhile
> >> + * library continues collecting records.
> >> + *
> >> + * Usage:
> >> + * - application calls rte_flow_classify_type_set() for a device
> >> + * - library creates Rx/Tx callbacks for packets and start filling flow table
> >
> > Is it necessary to use an RX callback here?
> > Can the library provide an API like collect(port_id, input_mbuf[], pkt_num)
> > instead? So the user would have a choice: either set up a callback or call
> > collect() directly.
>
> This was also a comment from Morten, I will update the RFC to use a direct
> API call.
>
> >
> >> + *   for that type of flow (currently only one flow type supported)
> >> + * - application calls rte_flow_classify_stats_get() to get pointer to linked
> >> + *   listed flow table. Library assigns this pointer to another value and keeps
> >> + *   collecting flow data. In next rte_flow_classify_stats_get(), library first
> >> + *   free the previous table, and pass current table to the application, keep
> >> + *   collecting data.
> >
> > Ok, but that means that you can't use stats_get() for the same type
> > from 2 different threads without explicit synchronization?
>
> Correct.
> And multiple threads shouldn't be calling this API. It doesn't store
> previous flow data, so multiple threads calling it can each get only a
> piece of the information. Do you see any use case where multiple threads
> would call this API?

One example would be when you have multiple queues per port,
managed/monitored by different cores.
BTW, how are you going to collect the stats in that way?

> >
> >> + * - application calls rte_flow_classify_type_reset(), library unregisters the
> >> + *   callbacks and free all flow table data.
> >> + *
> >> + */
> >> +
> >> +enum rte_flow_classify_type {
> >> +	RTE_FLOW_CLASSIFY_TYPE_GENERIC = (1 << 0),
> >> +	RTE_FLOW_CLASSIFY_TYPE_MAX,
> >> +};
> >> +
> >> +#define RTE_FLOW_CLASSIFY_TYPE_MASK (((RTE_FLOW_CLASSIFY_TYPE_MAX - 1) << 1) - 1)
> >> +
> >> +/**
> >> + * Global configuration struct
> >> + */
> >> +struct rte_flow_classify_config {
> >> +	uint32_t type; /* bitwise enum rte_flow_classify_type values */
> >> +	void *flow_table_prev;
> >> +	uint32_t flow_table_prev_item_count;
> >> +	void *flow_table_current;
> >> +	uint32_t flow_table_current_item_count;
> >> +} rte_flow_classify_config[RTE_MAX_ETHPORTS];
> >> +
> >> +#define RTE_FLOW_CLASSIFY_STAT_MAX UINT16_MAX
> >> +
> >> +/**
> >> + * Classification stats data struct
> >> + */
> >> +struct rte_flow_classify_stat_generic {
> >> +	struct rte_flow_classify_stat_generic *next;
> >> +	uint32_t id;
> >> +	uint64_t timestamp;
> >> +
> >> +	struct ether_addr src_mac;
> >> +	struct ether_addr dst_mac;
> >> +	uint32_t src_ipv4;
> >> +	uint32_t dst_ipv4;
> >> +	uint8_t l3_protocol_id;
> >> +	uint16_t src_port;
> >> +	uint16_t dst_port;
> >> +
> >> +	uint64_t packet_count;
> >> +	uint64_t packet_size; /* bytes */
> >> +};
> >
> > Ok, so if I understood things right, for the generic type it will always
> > classify all incoming packets by:
> > <src_mac, dst_mac, src_ipv4, dst_ipv4, l3_protocol_id, src_port, dst_port>
> > all by absolute values, and represent results as a linked list.
> > Is that correct, or did I misunderstand your intentions here?
>
> Correct.
>
> > If so, then I see several disadvantages here:
> > 1) It is really hard to predict what kind of stats is required for a
> > particular case. Let's say some people would like to collect stats by
> > <dst_mac, vlan>, others by <dst_ipv4, subnet_mask>, third ones by
> > <l4 dst_port> and so on. Having just one hardcoded filter doesn't seem
> > very flexible/usable. I think you need to find a way to allow the user to
> > define what type of filter they want to apply.
>
> The flow type should be provided by applications according to their needs,
> and needs to be implemented in this library. The generic one will be the
> only one implemented in the first version:
> enum rte_flow_classify_type {
> 	RTE_FLOW_CLASSIFY_TYPE_GENERIC = (1 << 0),
> 	RTE_FLOW_CLASSIFY_TYPE_MAX,
> };
>
> The app should set the type first via the API:
> rte_flow_classify_type_set(uint8_t port_id, uint32_t type);
>
> And the stats for this type will be returned; because the returned data can
> be a different type of struct per flow type, it is returned as void *:
> rte_flow_classify_stats_get(uint8_t port_id, void *stats);

I understand that, but it means that for every different filter the user
wants to use, someone has to update the library: define a new type and
write a new piece of code to handle it. That seems inflexible and totally
un-extendable from the user perspective. Even HW allows some flexibility
with RX filters. Why not allow the user to specify the classification
filter he/she wants for that particular case, in the way both rte_flow and
rte_acl work?

>
> > I think it was discussed already, but I still wonder why rte_flow_item
> > can't be used for that approach?
> >
> > 2) Even one 10G port can produce ~14M rte_flow_classify_stat_generic
> > entries in one second (if all packets have different ipv4/ports or so).
> > Accessing/retrieving items over a linked list with 14M entries doesn't
> > sound like a good idea. I'd say we need some better way to
> > retrieve/present the collected data.
>
> This is to keep flows, so I expect the numbers to be smaller than the
> packet numbers.

That was an extreme example to show how badly the selected approach could
behave. What I am trying to say: we need a way to collect and retrieve
stats in a quick and easy way. Let's say right now the user invoked
stats_get(port=0, type=generic). Now, he is interested in getting stats for
a particular dst_ip only.
The only way to get it: walk over whole list stats_get() returned and examine each entry one by one. I think would be much better to have something like: struct rte_flow_stats {timestamp; packet_count; packet_bytes; ..}; <fill rte_flow_item (or something else) to define desired filter> filter_id = rte_flow_stats_register(.., &rte_flow_item); .... struct rte_flow_stats stats; rte_flow_stats_get(..., filter_id, &stats); That allows user to define flows to collect stats for. Again in that case you don't need to worry about when/where to destroy the previous version of your stats. Of course the open question is how to treat packets that would match more than one flow (priority/insertion order/something else?), but I suppose we'll need to deal with that question anyway. Konstantin > It is possible to use fixed size arrays for this. But I think it is easy > to make this switch later, I would like to see the performance effect > before doing this switch. Do you think is it OK to start like this and > give that decision during implementation? ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-17 16:10 ` Ananyev, Konstantin @ 2017-05-18 12:12 ` Ferruh Yigit 0 siblings, 0 replies; 145+ messages in thread From: Ferruh Yigit @ 2017-05-18 12:12 UTC (permalink / raw) To: Ananyev, Konstantin, dev; +Cc: Mcnamara, John, Tahhan, Maryam On 5/17/2017 5:10 PM, Ananyev, Konstantin wrote: >>> Hi Ferruh, >>> Please see my comments/questions below. >>> Thanks >>> Konstantin >>> >>>> + >>>> +/** >>>> + * @file >>>> + * >>>> + * RTE Flow Classify Library >>>> + * >>>> + * This library provides flow record information with some measured properties. >>>> + * >>>> + * Application can select variety of flow types based on various flow keys. >>>> + * >>>> + * Library only maintains flow records between rte_flow_classify_stats_get() >>>> + * calls and with a maximum limit. >>>> + * >>>> + * Provided flow record will be linked list rte_flow_classify_stat_xxx >>>> + * structure. >>>> + * >>>> + * Library is responsible from allocating and freeing memory for flow record >>>> + * table. Previous table freed with next rte_flow_classify_stats_get() call and >>>> + * all tables are freed with rte_flow_classify_type_reset() or >>>> + * rte_flow_classify_type_set(x, 0). Memory for table allocated on the fly while >>>> + * creating records. >>>> + * >>>> + * A rte_flow_classify_type_set() with a valid type will register Rx/Tx >>>> + * callbacks and start filling flow record table. >>>> + * With rte_flow_classify_stats_get(), pointer sent to caller and meanwhile >>>> + * library continues collecting records. >>>> + * >>>> + * Usage: >>>> + * - application calls rte_flow_classify_type_set() for a device >>>> + * - library creates Rx/Tx callbacks for packets and start filling flow table >>> >>> Does it necessary to use an RX callback here? >>> Can library provide an API like collect(port_id, input_mbuf[], pkt_num) instead? 
>>> So the user would have a choice either setup a callback or call collect() directly. >> >> This was also comment from Morten, I will update RFC to use direct API call. >> >>> >>>> + * for that type of flow (currently only one flow type supported) >>>> + * - application calls rte_flow_classify_stats_get() to get pointer to linked >>>> + * listed flow table. Library assigns this pointer to another value and keeps >>>> + * collecting flow data. In next rte_flow_classify_stats_get(), library first >>>> + * free the previous table, and pass current table to the application, keep >>>> + * collecting data. >>> >>> Ok, but that means that you can't use stats_get() for the same type >>> from 2 different threads without explicit synchronization? >> >> Correct. >> And multiple threads shouldn't be calling this API. It doesn't store >> previous flow data, so multiple threads calling this only can have piece >> of information. Do you see any use case that multiple threads can call >> this API? > > One example would be when you have multiple queues per port, > managed/monitored by different cores. > BTW, how are you going to collect the stats in that way? > >> >>> >>>> + * - application calls rte_flow_classify_type_reset(), library unregisters the >>>> + * callbacks and free all flow table data. 
>>>> + * >>>> + */ >>>> + >>>> +enum rte_flow_classify_type { >>>> + RTE_FLOW_CLASSIFY_TYPE_GENERIC = (1 << 0), >>>> + RTE_FLOW_CLASSIFY_TYPE_MAX, >>>> +}; >>>> + >>>> +#define RTE_FLOW_CLASSIFY_TYPE_MASK = (((RTE_FLOW_CLASSIFY_TYPE_MAX - 1) << 1) - 1) >>>> + >>>> +/** >>>> + * Global configuration struct >>>> + */ >>>> +struct rte_flow_classify_config { >>>> + uint32_t type; /* bitwise enum rte_flow_classify_type values */ >>>> + void *flow_table_prev; >>>> + uint32_t flow_table_prev_item_count; >>>> + void *flow_table_current; >>>> + uint32_t flow_table_current_item_count; >>>> +} rte_flow_classify_config[RTE_MAX_ETHPORTS]; >>>> + >>>> +#define RTE_FLOW_CLASSIFY_STAT_MAX UINT16_MAX >>>> + >>>> +/** >>>> + * Classification stats data struct >>>> + */ >>>> +struct rte_flow_classify_stat_generic { >>>> + struct rte_flow_classify_stat_generic *next; >>>> + uint32_t id; >>>> + uint64_t timestamp; >>>> + >>>> + struct ether_addr src_mac; >>>> + struct ether_addr dst_mac; >>>> + uint32_t src_ipv4; >>>> + uint32_t dst_ipv4; >>>> + uint8_t l3_protocol_id; >>>> + uint16_t src_port; >>>> + uint16_t dst_port; >>>> + >>>> + uint64_t packet_count; >>>> + uint64_t packet_size; /* bytes */ >>>> +}; >>> >>> Ok, so if I understood things right, for generic type it will always classify all incoming packets by: >>> <src_mac, dst_mac, src_ipv4, dst_ipv4, l3_protocol_id, src_port, dst_port> >>> all by absolute values, and represent results as a linked list. >>> Is that correct, or I misunderstood your intentions here? >> >> Correct. >> >>> If so, then I see several disadvantages here: >>> 1) It is really hard to predict what kind of stats is required for that particular cases. >>> Let say some people would like to collect stat by <dst_mac,, vlan> , >>> another by <dst_ipv4,subnet_mask>, third ones by <l4 dst_port> and so on. >>> Having just one hardcoded filter doesn't seem very felxable/usable. 
>>> I think you need to find a way to allow user to define what type of filter they want to apply. >> >> The flow type should be provided by applications, according their needs, >> and needs to be implemented in this library. The generic one will be the >> only one implemented in first version: >> enum rte_flow_classify_type { >> RTE_FLOW_CLASSIFY_TYPE_GENERIC = (1 << 0), >> RTE_FLOW_CLASSIFY_TYPE_MAX, >> }; >> >> >> App should set the type first via the API: >> rte_flow_classify_type_set(uint8_t port_id, uint32_t type); >> >> >> And the stats for this type will be returned, because returned type can >> be different type of struct, returned as void: >> rte_flow_classify_stats_get(uint8_t port_id, void *stats); > > I understand that, but it means that for every different filter user wants to use, > someone has to update the library: define a new type and write a new piece of code to handle it. > That seems not flexible and totally un-extendable from user perspective. > Even HW allows some flexibility with RX filters. > Why not allow user to specify a classification filter he/she wants for that particular case? > In a way both rte_flow and rte_acl work? > >> >>> I think it was discussed already, but I still wonder why rte_flow_item can't be used for that approach? >> >> >>> 2) Even one 10G port can produce you ~14M rte_flow_classify_stat_generic entries in one second >>> (all packets have different ipv4/ports or so). >>> Accessing/retrieving items over linked list with 14M entries - doesn't sound like a good idea. >>> I'd say we need some better way to retrieve/present collected data. >> >> This is to keep flows, so I expect the numbers will be less comparing to >> the packet numbers. > > That was an extreme example to show how bad the selected approach should behave. > What I am trying to say: we need a way to collect and retrieve stats in a quick and easy way. > Let say right now user invoked stats_get(port=0, type=generic). 
> Now, he is interested to get stats for particular dst_ip only. > The only way to get it: walk over whole list stats_get() returned and examine each entry one by one. > > I think would be much better to have something like: > > struct rte_flow_stats {timestamp; packet_count; packet_bytes; ..}; > > <fill rte_flow_item (or something else) to define desired filter> > > filter_id = rte_flow_stats_register(.., &rte_flow_item); > .... > struct rte_flow_stats stats; > rte_flow_stats_get(..., filter_id, &stats); > > That allows user to define flows to collect stats for. > Again in that case you don't need to worry about when/where to destroy the previous > version of your stats. Setting aside the use of rte_flow itself, the above suggests replacing: - set key/filter - poll collect() - whenever the app wants, call stats_get() with: - poll stats_get(key/filter); Especially after the switch from callbacks to polling this makes sense, because the application will already have to make continuous calls into this library. Merging set filter/collect/stats_get into the same function also saves the library from storing/deleting stats until the app asks for them, as you mentioned above. So, I will update the RFC accordingly. > Of course the open question is how to treat packets that would match more than one flow > (priority/insertion order/something else?), but I suppose we'll need to deal with that question anyway. > > Konstantin > >> It is possible to use fixed size arrays for this. But I think it is easy >> to make this switch later, I would like to see the performance effect >> before doing this switch. Do you think is it OK to start like this and >> give that decision during implementation? ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-17 14:54 ` Ananyev, Konstantin 2017-05-17 15:37 ` Ferruh Yigit @ 2017-05-17 16:02 ` Ferruh Yigit 2017-05-17 16:18 ` Ananyev, Konstantin 2017-05-17 16:38 ` Gaëtan Rivet 1 sibling, 2 replies; 145+ messages in thread From: Ferruh Yigit @ 2017-05-17 16:02 UTC (permalink / raw) To: Ananyev, Konstantin, dev; +Cc: Mcnamara, John, Tahhan, Maryam On 5/17/2017 3:54 PM, Ananyev, Konstantin wrote: > Hi Ferruh, > Please see my comments/questions below. Thanks for review. > Thanks > Konstantin <...> > I think it was discussed already, but I still wonder why rte_flow_item can't be used for that approach? Missed this one: Gaëtan also had same comment, copy-paste from other mail related to my concerns using rte_flow: " rte_flow is to create flow rules in PMD level, but what this library aims to collect flow information, independent from if underlying PMD implemented rte_flow or not. So issues with using rte_flow for this use case: 1- It may not be implemented for all PMDs (including virtual ones). 2- It may conflict with other rte_flow rules created by user. 3- It may not gather all information required. (I mean some actions here, count like ones are easy but rte_flow may not be so flexible to extract different metrics from flows) " ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-17 16:02 ` Ferruh Yigit @ 2017-05-17 16:18 ` Ananyev, Konstantin 2017-05-17 16:38 ` Gaëtan Rivet 1 sibling, 0 replies; 145+ messages in thread From: Ananyev, Konstantin @ 2017-05-17 16:18 UTC (permalink / raw) To: Yigit, Ferruh, dev; +Cc: Mcnamara, John, Tahhan, Maryam > > On 5/17/2017 3:54 PM, Ananyev, Konstantin wrote: > > Hi Ferruh, > > Please see my comments/questions below. > > Thanks for review. > > > Thanks > > Konstantin > > <...> > > > I think it was discussed already, but I still wonder why rte_flow_item can't be used for that approach? > > Missed this one: > > Gaëtan also had same comment, copy-paste from other mail related to my > concerns using rte_flow: > > " > rte_flow is to create flow rules in PMD level, but what this library > aims to collect flow information, independent from if underlying PMD > implemented rte_flow or not. > > So issues with using rte_flow for this use case: > 1- It may not be implemented for all PMDs (including virtual ones). > 2- It may conflict with other rte_flow rules created by user. > 3- It may not gather all information required. (I mean some actions > here, count like ones are easy but rte_flow may not be so flexible to > extract different metrics from flows) > " I am not talking about actions - I am talking about using rte_flow_item (or similar approach) to allow user to define what flow he likes to have. Then the flow_classify library would use that information to generate the internal structures(/code) it will use to classify the incoming packets. I understand that we might not support all define rte_flow_items straightway, we could start with some limited set and add new ones on a iterative basis. Basically what I am talking about - SW implementation for rte_flow. Konstantin ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-17 16:02 ` Ferruh Yigit 2017-05-17 16:18 ` Ananyev, Konstantin @ 2017-05-17 16:38 ` Gaëtan Rivet 2017-05-18 11:33 ` Ferruh Yigit 1 sibling, 1 reply; 145+ messages in thread From: Gaëtan Rivet @ 2017-05-17 16:38 UTC (permalink / raw) To: Ferruh Yigit; +Cc: Ananyev, Konstantin, dev, Mcnamara, John, Tahhan, Maryam Hi Ferruh, On Wed, May 17, 2017 at 05:02:50PM +0100, Ferruh Yigit wrote: >On 5/17/2017 3:54 PM, Ananyev, Konstantin wrote: >> Hi Ferruh, >> Please see my comments/questions below. > >Thanks for review. > >> Thanks >> Konstantin > ><...> > >> I think it was discussed already, but I still wonder why rte_flow_item can't be used for that approach? > >Missed this one: > >Gaëtan also had same comment, copy-paste from other mail related to my >concerns using rte_flow: > >" >rte_flow is to create flow rules in PMD level, but what this library >aims to collect flow information, independent from if underlying PMD >implemented rte_flow or not. > >So issues with using rte_flow for this use case: >1- It may not be implemented for all PMDs (including virtual ones). >2- It may conflict with other rte_flow rules created by user. >3- It may not gather all information required. (I mean some actions >here, count like ones are easy but rte_flow may not be so flexible to >extract different metrics from flows) >" There are two separate elements to using rte_flow in this context I think. One is the use of the existing actions, and as you say, this makes the support of this library dependent on the rte_flow support in PMDs. The other is the expression of flows through a shared syntax. Using flags to propose presets can be simpler, but will probably not be flexible enough. rte_flow_items are a first-class citizen in DPDK and are already a data type that can express flows with flexibility. 
As mentioned, they are however missing a few elements to fully cover IPFIX meters, but nothing that cannot be added I think. So I was probably not clear enough, but I was thinking about supporting rte_flow_items in rte_flow_classify as the possible key applications would use to configure their measurements. This should not require rte_flow supports from the PMDs they would be using, only rte_flow_item parsing from the rte_flow_classify library. Otherwise, DPDK will probably end up with two competing flow representations. Additionally, it may be interesting for applications to bind these data directly to rte_flow actions once the classification has been analyzed. -- Gaëtan Rivet 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-17 16:38 ` Gaëtan Rivet @ 2017-05-18 11:33 ` Ferruh Yigit 2017-05-18 20:31 ` Thomas Monjalon 0 siblings, 1 reply; 145+ messages in thread From: Ferruh Yigit @ 2017-05-18 11:33 UTC (permalink / raw) To: Gaëtan Rivet Cc: Ananyev, Konstantin, dev, Mcnamara, John, Tahhan, Maryam On 5/17/2017 5:38 PM, Gaëtan Rivet wrote: > Hi Ferruh, Hi Gaetan, > > On Wed, May 17, 2017 at 05:02:50PM +0100, Ferruh Yigit wrote: >> On 5/17/2017 3:54 PM, Ananyev, Konstantin wrote: >>> Hi Ferruh, >>> Please see my comments/questions below. >> >> Thanks for review. >> >>> Thanks >>> Konstantin >> >> <...> >> >>> I think it was discussed already, but I still wonder why rte_flow_item can't be used for that approach? >> >> Missed this one: >> >> Gaëtan also had same comment, copy-paste from other mail related to my >> concerns using rte_flow: >> >> " >> rte_flow is to create flow rules in PMD level, but what this library >> aims to collect flow information, independent from if underlying PMD >> implemented rte_flow or not. >> >> So issues with using rte_flow for this use case: >> 1- It may not be implemented for all PMDs (including virtual ones). >> 2- It may conflict with other rte_flow rules created by user. >> 3- It may not gather all information required. (I mean some actions >> here, count like ones are easy but rte_flow may not be so flexible to >> extract different metrics from flows) >> " > > There are two separate elements to using rte_flow in this context I think. > > One is the use of the existing actions, and as you say, this makes the > support of this library dependent on the rte_flow support in PMDs. > > The other is the expression of flows through a shared syntax. Using > flags to propose presets can be simpler, but will probably not be flexible > enough. rte_flow_items are a first-class citizen in DPDK and are > already a data type that can express flows with flexibility. 
As > mentioned, they are however missing a few elements to fully cover IPFIX > meters, but nothing that cannot be added I think. > > So I was probably not clear enough, but I was thinking about > supporting rte_flow_items in rte_flow_classify as the possible key > applications would use to configure their measurements. This should not > require rte_flow supports from the PMDs they would be using, only > rte_flow_item parsing from the rte_flow_classify library. > > Otherwise, DPDK will probably end up with two competing flow > representations. Additionally, it may be interesting for applications > to bind these data directly to rte_flow actions once the > classification has been analyzed. Thanks for clarification, I see now what you and Konstantin is proposing. And yes it makes sense to use rte_flow to define flows in the library, I will update the RFC. Thanks, ferruh ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-18 11:33 ` Ferruh Yigit @ 2017-05-18 20:31 ` Thomas Monjalon 2017-05-19 8:57 ` Ananyev, Konstantin 0 siblings, 1 reply; 145+ messages in thread From: Thomas Monjalon @ 2017-05-18 20:31 UTC (permalink / raw) To: Ferruh Yigit Cc: dev, Gaëtan Rivet, Ananyev, Konstantin, Mcnamara, John, Tahhan, Maryam, adrien.mazarguil 18/05/2017 13:33, Ferruh Yigit: > On 5/17/2017 5:38 PM, Gaëtan Rivet wrote: > > The other is the expression of flows through a shared syntax. Using > > flags to propose presets can be simpler, but will probably not be flexible > > enough. rte_flow_items are a first-class citizen in DPDK and are > > already a data type that can express flows with flexibility. As > > mentioned, they are however missing a few elements to fully cover IPFIX > > meters, but nothing that cannot be added I think. > > > > So I was probably not clear enough, but I was thinking about > > supporting rte_flow_items in rte_flow_classify as the possible key > > applications would use to configure their measurements. This should not > > require rte_flow supports from the PMDs they would be using, only > > rte_flow_item parsing from the rte_flow_classify library. > > > > Otherwise, DPDK will probably end up with two competing flow > > representations. Additionally, it may be interesting for applications > > to bind these data directly to rte_flow actions once the > > classification has been analyzed. > > Thanks for clarification, I see now what you and Konstantin is proposing. > > And yes it makes sense to use rte_flow to define flows in the library, I > will update the RFC. Does it mean that rte_flow.h must be moved from ethdev to this new flow library? Or will it depend of ethdev? ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-18 20:31 ` Thomas Monjalon @ 2017-05-19 8:57 ` Ananyev, Konstantin 2017-05-19 9:11 ` Gaëtan Rivet 0 siblings, 1 reply; 145+ messages in thread From: Ananyev, Konstantin @ 2017-05-19 8:57 UTC (permalink / raw) To: Thomas Monjalon, Yigit, Ferruh Cc: dev, Gaëtan Rivet, Mcnamara, John, Tahhan, Maryam, adrien.mazarguil > -----Original Message----- > From: Thomas Monjalon [mailto:thomas@monjalon.net] > Sent: Thursday, May 18, 2017 9:32 PM > To: Yigit, Ferruh <ferruh.yigit@intel.com> > Cc: dev@dpdk.org; Gaëtan Rivet <gaetan.rivet@6wind.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Mcnamara, John > <john.mcnamara@intel.com>; Tahhan, Maryam <maryam.tahhan@intel.com>; adrien.mazarguil@6wind.com > Subject: Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library > > 18/05/2017 13:33, Ferruh Yigit: > > On 5/17/2017 5:38 PM, Gaëtan Rivet wrote: > > > The other is the expression of flows through a shared syntax. Using > > > flags to propose presets can be simpler, but will probably not be flexible > > > enough. rte_flow_items are a first-class citizen in DPDK and are > > > already a data type that can express flows with flexibility. As > > > mentioned, they are however missing a few elements to fully cover IPFIX > > > meters, but nothing that cannot be added I think. > > > > > > So I was probably not clear enough, but I was thinking about > > > supporting rte_flow_items in rte_flow_classify as the possible key > > > applications would use to configure their measurements. This should not > > > require rte_flow supports from the PMDs they would be using, only > > > rte_flow_item parsing from the rte_flow_classify library. > > > > > > Otherwise, DPDK will probably end up with two competing flow > > > representations. 
Additionally, it may be interesting for applications > > > to bind these data directly to rte_flow actions once the > > > classification has been analyzed. > > > > Thanks for clarification, I see now what you and Konstantin is proposing. > > > > And yes it makes sense to use rte_flow to define flows in the library, I > > will update the RFC. > > Does it mean that rte_flow.h must be moved from ethdev to this > new flow library? Or will it depend of ethdev? Just a thought: probably move rte_flow.h to lib/librte_net? Konstantin ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-19 8:57 ` Ananyev, Konstantin @ 2017-05-19 9:11 ` Gaëtan Rivet 2017-05-19 9:40 ` Ananyev, Konstantin 2017-05-19 10:11 ` Thomas Monjalon 0 siblings, 2 replies; 145+ messages in thread From: Gaëtan Rivet @ 2017-05-19 9:11 UTC (permalink / raw) To: Ananyev, Konstantin Cc: Thomas Monjalon, Yigit, Ferruh, dev, Mcnamara, John, Tahhan, Maryam, adrien.mazarguil On Fri, May 19, 2017 at 08:57:01AM +0000, Ananyev, Konstantin wrote: > > >> -----Original Message----- >> From: Thomas Monjalon [mailto:thomas@monjalon.net] >> Sent: Thursday, May 18, 2017 9:32 PM >> To: Yigit, Ferruh <ferruh.yigit@intel.com> >> Cc: dev@dpdk.org; Gaëtan Rivet <gaetan.rivet@6wind.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Mcnamara, John >> <john.mcnamara@intel.com>; Tahhan, Maryam <maryam.tahhan@intel.com>; adrien.mazarguil@6wind.com >> Subject: Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library >> >> 18/05/2017 13:33, Ferruh Yigit: >> > On 5/17/2017 5:38 PM, Gaëtan Rivet wrote: >> > > The other is the expression of flows through a shared syntax. Using >> > > flags to propose presets can be simpler, but will probably not be flexible >> > > enough. rte_flow_items are a first-class citizen in DPDK and are >> > > already a data type that can express flows with flexibility. As >> > > mentioned, they are however missing a few elements to fully cover IPFIX >> > > meters, but nothing that cannot be added I think. >> > > >> > > So I was probably not clear enough, but I was thinking about >> > > supporting rte_flow_items in rte_flow_classify as the possible key >> > > applications would use to configure their measurements. This should not >> > > require rte_flow supports from the PMDs they would be using, only >> > > rte_flow_item parsing from the rte_flow_classify library. >> > > >> > > Otherwise, DPDK will probably end up with two competing flow >> > > representations. 
Additionally, it may be interesting for applications >> > > to bind these data directly to rte_flow actions once the >> > > classification has been analyzed. >> > >> > Thanks for clarification, I see now what you and Konstantin is proposing. >> > >> > And yes it makes sense to use rte_flow to define flows in the library, I >> > will update the RFC. >> >> Does it mean that rte_flow.h must be moved from ethdev to this >> new flow library? Or will it depend of ethdev? Even outside of lib/librte_ether, wouldn't rte_flow stay dependent on rte_ether? > >Just a thought: probably move rte_flow.h to lib/librte_net? >Konstantin If we are to move rte_flow, why not lib/librte_flow? -- Gaëtan Rivet 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-19 9:11 ` Gaëtan Rivet @ 2017-05-19 9:40 ` Ananyev, Konstantin 2017-05-19 10:11 ` Thomas Monjalon 1 sibling, 0 replies; 145+ messages in thread From: Ananyev, Konstantin @ 2017-05-19 9:40 UTC (permalink / raw) To: Gaëtan Rivet Cc: Thomas Monjalon, Yigit, Ferruh, dev, Mcnamara, John, Tahhan, Maryam, adrien.mazarguil > -----Original Message----- > From: Gaëtan Rivet [mailto:gaetan.rivet@6wind.com] > Sent: Friday, May 19, 2017 10:11 AM > To: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Cc: Thomas Monjalon <thomas@monjalon.net>; Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org; Mcnamara, John > <john.mcnamara@intel.com>; Tahhan, Maryam <maryam.tahhan@intel.com>; adrien.mazarguil@6wind.com > Subject: Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library > > On Fri, May 19, 2017 at 08:57:01AM +0000, Ananyev, Konstantin wrote: > > > > > >> -----Original Message----- > >> From: Thomas Monjalon [mailto:thomas@monjalon.net] > >> Sent: Thursday, May 18, 2017 9:32 PM > >> To: Yigit, Ferruh <ferruh.yigit@intel.com> > >> Cc: dev@dpdk.org; Gaëtan Rivet <gaetan.rivet@6wind.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Mcnamara, John > >> <john.mcnamara@intel.com>; Tahhan, Maryam <maryam.tahhan@intel.com>; adrien.mazarguil@6wind.com > >> Subject: Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library > >> > >> 18/05/2017 13:33, Ferruh Yigit: > >> > On 5/17/2017 5:38 PM, Gaëtan Rivet wrote: > >> > > The other is the expression of flows through a shared syntax. Using > >> > > flags to propose presets can be simpler, but will probably not be flexible > >> > > enough. rte_flow_items are a first-class citizen in DPDK and are > >> > > already a data type that can express flows with flexibility. As > >> > > mentioned, they are however missing a few elements to fully cover IPFIX > >> > > meters, but nothing that cannot be added I think. 
> >> > > > >> > > So I was probably not clear enough, but I was thinking about > >> > > supporting rte_flow_items in rte_flow_classify as the possible key > >> > > applications would use to configure their measurements. This should not > >> > > require rte_flow supports from the PMDs they would be using, only > >> > > rte_flow_item parsing from the rte_flow_classify library. > >> > > > >> > > Otherwise, DPDK will probably end up with two competing flow > >> > > representations. Additionally, it may be interesting for applications > >> > > to bind these data directly to rte_flow actions once the > >> > > classification has been analyzed. > >> > > >> > Thanks for clarification, I see now what you and Konstantin is proposing. > >> > > >> > And yes it makes sense to use rte_flow to define flows in the library, I > >> > will update the RFC. > >> > >> Does it mean that rte_flow.h must be moved from ethdev to this > >> new flow library? Or will it depend of ethdev? > > Even outside of lib/librte_ether, wouldn't rte_flow stay dependent on > rte_ether? > > > > >Just a thought: probably move rte_flow.h to lib/librte_net? > >Konstantin > > If we are to move rte_flow, why not lib/librte_flow? To avoid new dependency for lib/lirte_ethdev? > > -- > Gaëtan Rivet > 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-19 9:11 ` Gaëtan Rivet 2017-05-19 9:40 ` Ananyev, Konstantin @ 2017-05-19 10:11 ` Thomas Monjalon 2017-05-22 9:13 ` Adrien Mazarguil 1 sibling, 1 reply; 145+ messages in thread From: Thomas Monjalon @ 2017-05-19 10:11 UTC (permalink / raw) To: Gaëtan Rivet, Ananyev, Konstantin, Yigit, Ferruh Cc: dev, Mcnamara, John, Tahhan, Maryam, adrien.mazarguil 19/05/2017 11:11, Gaëtan Rivet: > On Fri, May 19, 2017 at 08:57:01AM +0000, Ananyev, Konstantin wrote: > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > >> 18/05/2017 13:33, Ferruh Yigit: > >> > On 5/17/2017 5:38 PM, Gaëtan Rivet wrote: > >> > > The other is the expression of flows through a shared syntax. Using > >> > > flags to propose presets can be simpler, but will probably not be flexible > >> > > enough. rte_flow_items are a first-class citizen in DPDK and are > >> > > already a data type that can express flows with flexibility. As > >> > > mentioned, they are however missing a few elements to fully cover IPFIX > >> > > meters, but nothing that cannot be added I think. > >> > > > >> > > So I was probably not clear enough, but I was thinking about > >> > > supporting rte_flow_items in rte_flow_classify as the possible key > >> > > applications would use to configure their measurements. This should not > >> > > require rte_flow supports from the PMDs they would be using, only > >> > > rte_flow_item parsing from the rte_flow_classify library. > >> > > > >> > > Otherwise, DPDK will probably end up with two competing flow > >> > > representations. Additionally, it may be interesting for applications > >> > > to bind these data directly to rte_flow actions once the > >> > > classification has been analyzed. > >> > > >> > Thanks for clarification, I see now what you and Konstantin is proposing. > >> > > >> > And yes it makes sense to use rte_flow to define flows in the library, I > >> > will update the RFC. 
> >> > >> Does it mean that rte_flow.h must be moved from ethdev to this > >> new flow library? Or will it depend of ethdev? > > Even outside of lib/librte_ether, wouldn't rte_flow stay dependent on > rte_ether? > > > > >Just a thought: probably move rte_flow.h to lib/librte_net? > >Konstantin > > If we are to move rte_flow, why not lib/librte_flow? There are 3 different things: 1/ rte_flow.h for flow description 2/ rte_flow API in ethdev for HW offloading 3/ SW flow table (this new lib) 2 and 3 will depend on 1. I think moving rte_flow.h to librte_net is a good idea. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library 2017-05-19 10:11 ` Thomas Monjalon @ 2017-05-22 9:13 ` Adrien Mazarguil 0 siblings, 0 replies; 145+ messages in thread From: Adrien Mazarguil @ 2017-05-22 9:13 UTC (permalink / raw) To: Thomas Monjalon Cc: Gaëtan Rivet, Ananyev, Konstantin, Yigit, Ferruh, dev, Mcnamara, John, Tahhan, Maryam On Fri, May 19, 2017 at 12:11:53PM +0200, Thomas Monjalon wrote: > 19/05/2017 11:11, Gaëtan Rivet: > > On Fri, May 19, 2017 at 08:57:01AM +0000, Ananyev, Konstantin wrote: > > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > >> 18/05/2017 13:33, Ferruh Yigit: > > >> > On 5/17/2017 5:38 PM, Gaëtan Rivet wrote: > > >> > > The other is the expression of flows through a shared syntax. Using > > >> > > flags to propose presets can be simpler, but will probably not be flexible > > >> > > enough. rte_flow_items are a first-class citizen in DPDK and are > > >> > > already a data type that can express flows with flexibility. As > > >> > > mentioned, they are however missing a few elements to fully cover IPFIX > > >> > > meters, but nothing that cannot be added I think. > > >> > > > > >> > > So I was probably not clear enough, but I was thinking about > > >> > > supporting rte_flow_items in rte_flow_classify as the possible key > > >> > > applications would use to configure their measurements. This should not > > >> > > require rte_flow supports from the PMDs they would be using, only > > >> > > rte_flow_item parsing from the rte_flow_classify library. > > >> > > > > >> > > Otherwise, DPDK will probably end up with two competing flow > > >> > > representations. Additionally, it may be interesting for applications > > >> > > to bind these data directly to rte_flow actions once the > > >> > > classification has been analyzed. > > >> > > > >> > Thanks for clarification, I see now what you and Konstantin is proposing. 
> > >> > > > >> > And yes it makes sense to use rte_flow to define flows in the library, I > > >> > will update the RFC. > > >> > > >> Does it mean that rte_flow.h must be moved from ethdev to this > > >> new flow library? Or will it depend of ethdev? > > > > Even outside of lib/librte_ether, wouldn't rte_flow stay dependent on > > rte_ether? > > > > > > > >Just a thought: probably move rte_flow.h to lib/librte_net? > > >Konstantin > > > > If we are to move rte_flow, why not lib/librte_flow? > > There are 3 different things: > 1/ rte_flow.h for flow description > 2/ rte_flow API in ethdev for HW offloading > 3/ SW flow table (this new lib) > > 2 and 3 will depends on 1. > I think moving rte_flow.h in librte_net is a good idea. If I had to choose, it would be librte_flow over librte_net because rte_flow is not necessarily about matching protocol headers (e.g. you can match meta properties like physical ports or the fact traffic comes from a specific VF). However, I am not sure a separate library is actually necessary; I think the requirements can be addressed by rte_flow (in its current directory) directly. One assumption is that the COUNT action as currently described by rte_flow satisfies the counter requirements from this proposal; new actions could be added later to return other flow-related properties. In short there is no need to return info from individual packets, only from the flows themselves. If the above is true, then as pointed earlier by Gaetan this proposal can be summarized as a software implementation for rte_flow_query() and related actions. To determine if a packet is part of a given flow in software and update the associated flow counters, it must be parsed and compared against patterns of all existing rte_flow rules until one of them matches. For accurate results, this must be done on all TX/RX traffic.
RFCv1 does so by automatically hooking burst functions while RFCv2 does so by expecting the application to call rte_flow_classify_stats_get(). One issue I would like to raise before going on is the CPU cost of doing all this. Parsing all incoming/outgoing traffic without missing any, looking up related flows and updating counters in software seems like a performance killer. Applications will likely request assistance from HW to minimize this cost as much as possible (e.g. using the rte_flow MARK action if COUNT is not supported directly). Assuming a flow is identified by HW, parsing it once again in software with the proposed API to update the related stats seems counterproductive; a hybrid HW/SW solution with the SW part automatically used as a fallback when hardware is not capable enough would be better and easier to use. The topic of software fallbacks for rte_flow was brought up some time ago (can't find the exact thread). The intent was to expose a common set of features between PMDs so that applications do not have to implement their own fallbacks. They would request it on a rule basis by setting a kind of "sw_fallback" bit in flow rule attributes (struct rte_flow_attr). This bit would be checked by PMDs and/or the rte_flow_* wrapper functions after the underlying PMD refuses to validate/create a rule. Basically I think rte_flow_classify could be entirely implemented in rte_flow through this "sw_fallback" approach in order for applications to automatically benefit from HW acceleration when PMDs can handle it. It then makes sense for the underlying implementation to use RX/TX hooks if necessary (as in RFCv1). These hooks would be removed by destroying the related flow rule(s). This would also open the door to a full SW implementation for rte_flow given that once the packet parser is there, most actions can be implemented rather easily (well, that's still a lot of work.) 
Bottom line is I'm not against a separate SW implementation not tied to a port_id for rte_flow_classify, but I would like to hear the community's thoughts about the above first. -- Adrien Mazarguil 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
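The software path Adrien describes — parse each packet, compare it against the patterns of all existing rte_flow rules until one matches, then update that rule's counters — can be sketched as below. This is a simplified, self-contained illustration: `pkt_5tuple`, `sw_rule` and `sw_classify` are invented stand-ins, not the DPDK API; a real implementation would match full `rte_flow_item` lists (spec/mask pairs per protocol layer) rather than a fixed 5-tuple.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for an rte_flow pattern: a masked 5-tuple
 * plus COUNT-style counters kept in software. */
struct pkt_5tuple {
	uint32_t src_ip, dst_ip;
	uint16_t src_port, dst_port;
	uint8_t  proto;
};

struct sw_rule {
	struct pkt_5tuple spec;	/* values to match (assumed pre-masked) */
	struct pkt_5tuple mask;	/* which bits are significant */
	uint64_t pkts;		/* software COUNT action state */
	uint64_t bytes;
};

/* Linear scan, as described above: compare the packet against every
 * rule's pattern until one matches, then update that rule's counters. */
static struct sw_rule *
sw_classify(struct sw_rule *rules, int n_rules,
	    const struct pkt_5tuple *pkt, uint32_t pkt_len)
{
	for (int i = 0; i < n_rules; i++) {
		struct sw_rule *r = &rules[i];

		if ((pkt->src_ip & r->mask.src_ip) == r->spec.src_ip &&
		    (pkt->dst_ip & r->mask.dst_ip) == r->spec.dst_ip &&
		    (pkt->src_port & r->mask.src_port) == r->spec.src_port &&
		    (pkt->dst_port & r->mask.dst_port) == r->spec.dst_port &&
		    (pkt->proto & r->mask.proto) == r->spec.proto) {
			r->pkts++;
			r->bytes += pkt_len;
			return r;
		}
	}
	return NULL;	/* no rule matched */
}
```

The per-packet linear scan over all rules is exactly the CPU cost Adrien raises; the "sw_fallback" idea would confine it to rules the hardware could not accept.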
* Re: [dpdk-dev] [RFC 17.08] Flow classification library 2017-04-20 18:54 [dpdk-dev] [RFC 17.08] Flow classification library Ferruh Yigit 2017-04-20 18:54 ` [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library Ferruh Yigit @ 2017-04-21 10:38 ` Gaëtan Rivet 2017-05-03 9:15 ` Mcnamara, John 2017-05-09 13:26 ` Ferruh Yigit 2017-05-18 18:12 ` [dpdk-dev] [RFC v2] " Ferruh Yigit 2 siblings, 2 replies; 145+ messages in thread From: Gaëtan Rivet @ 2017-04-21 10:38 UTC (permalink / raw) To: Ferruh Yigit; +Cc: dev, John McNamara, Maryam Tahhan Hi Ferruh, On Thu, Apr 20, 2017 at 07:54:47PM +0100, Ferruh Yigit wrote: >DPDK works with packets, but some network administration tools works based on >flow information. > >This library is suggested to provide helper APIs to convert packet based >information to the flow records. Library header file has more comments on >how library works and provided APIs. > >Packets to flow conversion will cause performance drop, that is why this >conversion can be enabled and disabled dynamically by application. > >Initial implementation in mind is to provide support for IPFIX metering process >but library planned to be as generic as possible. And flow information provided >by this library is missing to implement full IPFIX features, but this is planned >to be initial step. > In order to be generic, would it not be interesting to specify the flow as a generic rte_flow_item list? Some specific IPFIX items are not expressed currently in rte_flow (e.g. packet size), but they could be added. This library could consist of an rte_flow_item to IPFIX translation. The inverse approach could be used, but seems backward to me. It makes more sense to support DPDK idioms and open them to standards by proper APIs than including standards in internals and introducing translation layers between DPDK components.
>It is possible to define flow with various flow keys, but currently only one >type of flow defined in the library, which is more generic, and it offloads >fine grained flow analysis to the application. Library enables expanding for >other flow types. > I'm not sure I understand the purpose of this flow key; generic is too general a hint to define the possible cases. However, my intuition is that the flow type describes a filter to restrict the flow classification to specific patterns instead of all supported ones. This library thus resembles using the action RTE_FLOW_ACTION_TYPE_COUNT, with results then retrieved using rte_flow_query_count. The rte_flow_item aggregated with the rte_flow_query_count structure could be sufficient to derive IPFIX meters? An application could then use this data for its IPFIX support. >It will be more beneficial to shape this library to cover more use cases, please >feel free to comment on possible other use case and desired functionalities. > >Thanks, >ferruh > >cc: John McNamara <john.mcnamara@intel.com> >cc: Maryam Tahhan <maryam.tahhan@intel.com> > >Ferruh Yigit (1): > flow_classify: add librte_flow_classify library > > config/common_base | 5 + > doc/api/doxy-api-index.md | 1 + > doc/api/doxy-api.conf | 1 + > doc/guides/rel_notes/release_17_05.rst | 1 + > lib/Makefile | 2 + > lib/librte_flow_classify/Makefile | 50 +++++ > lib/librte_flow_classify/rte_flow_classify.c | 34 ++++ > lib/librte_flow_classify/rte_flow_classify.h | 202 +++++++++++++++++++++ > .../rte_flow_classify_version.map | 10 + > 9 files changed, 306 insertions(+) > create mode 100644 lib/librte_flow_classify/Makefile > create mode 100644 lib/librte_flow_classify/rte_flow_classify.c > create mode 100644 lib/librte_flow_classify/rte_flow_classify.h > create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map > >-- >2.9.3 > -- Gaëtan Rivet 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] Flow classification library 2017-04-21 10:38 ` [dpdk-dev] [RFC 17.08] Flow classification library Gaëtan Rivet @ 2017-05-03 9:15 ` Mcnamara, John 2017-05-06 14:04 ` Morten Brørup 2017-05-09 13:26 ` Ferruh Yigit 1 sibling, 1 reply; 145+ messages in thread From: Mcnamara, John @ 2017-05-03 9:15 UTC (permalink / raw) To: dev; +Cc: Tahhan, Maryam, Gaëtan Rivet, Yigit, Ferruh > -----Original Message----- > From: Gaëtan Rivet [mailto:gaetan.rivet@6wind.com] > Sent: Friday, April 21, 2017 11:38 AM > To: Yigit, Ferruh <ferruh.yigit@intel.com> > Cc: dev@dpdk.org; Mcnamara, John <john.mcnamara@intel.com>; Tahhan, Maryam > <maryam.tahhan@intel.com> > Subject: Re: [dpdk-dev] [RFC 17.08] Flow classification library > Any other opinions on this proposal? (Original email: http://dpdk.org/ml/archives/dev/2017-April/064443.html) John ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] Flow classification library 2017-05-03 9:15 ` Mcnamara, John @ 2017-05-06 14:04 ` Morten Brørup 2017-05-09 13:37 ` Ferruh Yigit 0 siblings, 1 reply; 145+ messages in thread From: Morten Brørup @ 2017-05-06 14:04 UTC (permalink / raw) To: Mcnamara, John, dev, Yigit, Ferruh; +Cc: Tahhan, Maryam, Gaëtan Rivet Carthago delenda est: Again with the callbacks... why not just let the application call the library's processing functions where appropriate? The hook+callback design pattern tends to impose a specific framework (or order of execution) on the DPDK user, rather than just being a stand-alone library offering some functions. DPDK is not a stack; and one of the reasons we are moving our firmware away from Linux is to avoid being forced into a specific order of processing the packets (through a whole bunch of hooks everywhere in the stack). Maybe I missed the point of this library, so bear with me if my example is stupid: Consider a NAT router application. Does this library support processing ingress packets in the outside->inside direction after they have been processed by the NAT engine? Or how about IP fragments after passing the reassembly engine? Generally, a generic flow processing library would be great; but such a library would need to support flow processing applications, not just byte counting. Four key functions would be required: 1. Identify which flow a packet belongs to (or "not found"), 2. Create a flow, 3. Destroy a flow, and 4. Iterate through flows (e.g. for aging or listing purposes).
Med venlig hilsen / kind regards - Morten Brørup > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mcnamara, John > Sent: Wednesday, May 3, 2017 11:16 AM > To: dev@dpdk.org > Cc: Tahhan, Maryam; Gaëtan Rivet; Yigit, Ferruh > Subject: Re: [dpdk-dev] [RFC 17.08] Flow classification library > > > > > -----Original Message----- > > From: Gaëtan Rivet [mailto:gaetan.rivet@6wind.com] > > Sent: Friday, April 21, 2017 11:38 AM > > To: Yigit, Ferruh <ferruh.yigit@intel.com> > > Cc: dev@dpdk.org; Mcnamara, John <john.mcnamara@intel.com>; Tahhan, > > Maryam <maryam.tahhan@intel.com> > > Subject: Re: [dpdk-dev] [RFC 17.08] Flow classification library > > > > > Any other opinions on this proposal? > > (Original email: http://dpdk.org/ml/archives/dev/2017-April/064443.html) > > John ^ permalink raw reply [flat|nested] 145+ messages in thread
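Morten's four key functions map naturally onto a small flow-table API. The sketch below is illustrative only: a fixed-size table with linear search, where a real implementation would use a proper hash (e.g. DPDK's rte_hash); all names are hypothetical. Keys must be zero-initialized before filling in fields so struct padding compares equal under memcmp.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_FLOWS 1024

struct flow_key {
	uint32_t src_ip, dst_ip;
	uint16_t src_port, dst_port;
	uint8_t  proto;
};

struct flow_entry {
	struct flow_key key;
	uint64_t pkts, bytes;
	int in_use;
};

static struct flow_entry flow_tbl[MAX_FLOWS];

/* 1. Identify which flow a packet belongs to (NULL = "not found"). */
static struct flow_entry *flow_lookup(const struct flow_key *k)
{
	for (int i = 0; i < MAX_FLOWS; i++)
		if (flow_tbl[i].in_use &&
		    memcmp(&flow_tbl[i].key, k, sizeof(*k)) == 0)
			return &flow_tbl[i];
	return NULL;
}

/* 2. Create a flow (NULL = table full). */
static struct flow_entry *flow_create(const struct flow_key *k)
{
	for (int i = 0; i < MAX_FLOWS; i++)
		if (!flow_tbl[i].in_use) {
			flow_tbl[i].in_use = 1;
			flow_tbl[i].key = *k;
			flow_tbl[i].pkts = flow_tbl[i].bytes = 0;
			return &flow_tbl[i];
		}
	return NULL;
}

/* 3. Destroy a flow. */
static void flow_destroy(struct flow_entry *f)
{
	f->in_use = 0;
}

/* 4. Iterate through flows, e.g. for aging or listing. */
static void flow_walk(void (*cb)(struct flow_entry *))
{
	for (int i = 0; i < MAX_FLOWS; i++)
		if (flow_tbl[i].in_use)
			cb(&flow_tbl[i]);
}
```

The receive path then becomes: lookup, create on miss, update counters on hit; an aging thread uses the walk function to expire idle entries.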
* Re: [dpdk-dev] [RFC 17.08] Flow classification library 2017-05-06 14:04 ` Morten Brørup @ 2017-05-09 13:37 ` Ferruh Yigit 2017-05-09 19:24 ` Morten Brørup 0 siblings, 1 reply; 145+ messages in thread From: Ferruh Yigit @ 2017-05-09 13:37 UTC (permalink / raw) To: Morten Brørup, Mcnamara, John, dev; +Cc: Tahhan, Maryam, Gaëtan Rivet On 5/6/2017 3:04 PM, Morten Brørup wrote: > Carthago delenda est: Again with the callbacks... why not just let the application call the library's processing functions where appropriate. The hook+callback design pattern tends to impose a specific framework (or order of execution) on the DPDK user, rather than just being a stand-alone library offering some functions. DPDK is not a stack; and one of the reasons we are moving our firmware away from Linux is to avoid being enforced a specific order of processing the packets (through a whole bunch of hooks everywhere in the stack). > I agree on callbacks usage, but I can't see the other option for this case. This is for additional functionality to get flow information, while packet processing happens. So we don't want this functionality to be always available or to be part of the processing. And since this data requires each packet to be processed, what could the "library's processing function" alternative be? > Maybe I missed the point of this library, so bear with me if my example is stupid: > > Consider a NAT router application. Does this library support processing ingress packets in the outside->inside direction after they have been processed by the NAT engine? Or how about IP fragments after passing the reassembly engine? Implementation is not there, we have packet information, and I guess with more processing of packets, the proper flow information can be created for various cases. But my concern is whether this should be in DPDK.
I was thinking of providing an API to give the application the flow information for a specific key; the rest of the processing can be done in the upper layer that calls these APIs. > > > Generally, a generic flow processing library would be great; but such a library would need to support flow processing applications, not just byte counting. Four key functions would be required: 1. Identify which flow a packet belongs to (or "not found"), 2. Create a flow, 3. Destroy a flow, and 4. Iterate through flows (e.g. for aging or listing purposes). Agreed, and where should this be? Part of DPDK, or DPDK providing some APIs to enable this kind of library on top of DPDK? > > > Med venlig hilsen / kind regards > - Morten Brørup > > >> -----Original Message----- >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mcnamara, John >> Sent: Wednesday, May 3, 2017 11:16 AM >> To: dev@dpdk.org >> Cc: Tahhan, Maryam; Gaëtan Rivet; Yigit, Ferruh >> Subject: Re: [dpdk-dev] [RFC 17.08] Flow classification library >> >> >> >>> -----Original Message----- >>> From: Gaëtan Rivet [mailto:gaetan.rivet@6wind.com] >>> Sent: Friday, April 21, 2017 11:38 AM >>> To: Yigit, Ferruh <ferruh.yigit@intel.com> >>> Cc: dev@dpdk.org; Mcnamara, John <john.mcnamara@intel.com>; Tahhan, >>> Maryam <maryam.tahhan@intel.com> >>> Subject: Re: [dpdk-dev] [RFC 17.08] Flow classification library >>> >> >> >> Any other opinions on this proposal? >> >> (Original email: http://dpdk.org/ml/archives/dev/2017-April/064443.html) >> >> John > ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] Flow classification library 2017-05-09 13:37 ` Ferruh Yigit @ 2017-05-09 19:24 ` Morten Brørup 2017-05-17 11:26 ` Ferruh Yigit 0 siblings, 1 reply; 145+ messages in thread From: Morten Brørup @ 2017-05-09 19:24 UTC (permalink / raw) To: Ferruh Yigit, Mcnamara, John, dev; +Cc: Tahhan, Maryam, Gaëtan Rivet > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit > Sent: Tuesday, May 9, 2017 3:38 PM > To: Morten Brørup; Mcnamara, John; dev@dpdk.org > Cc: Tahhan, Maryam; Gaëtan Rivet > Subject: Re: [dpdk-dev] [RFC 17.08] Flow classification library > > On 5/6/2017 3:04 PM, Morten Brørup wrote: > > Carthago delenda est: Again with the callbacks... why not just let the > application call the library's processing functions where appropriate. The > hook+callback design pattern tends to impose a specific framework (or order > of execution) on the DPDK user, rather than just being a stand-alone > library offering some functions. DPDK is not a stack; and one of the > reasons we are moving our firmware away from Linux is to avoid being > enforced a specific order of processing the packets (through a whole bunch > of hooks everywhere in the stack). > > > > I agree on callbacks usage, but I can't see the other option for this case. > > This is for additional functionality to get flow information, while > packet processing happens. So with don't want this functionality to be > available always or to be part of the processing. And this data requires > each packet to be processed, what can be the "library's processing > function" alternative can be? > As I understand it, your library (and other libraries using the same hook) calls a function for each packet via the PMD RX hook. Why not just let the application call this function (i.e. the callback function) wherever the application developer thinks it is appropriate? 
If the application calls it as the first thing after rte_eth_rx_burst(), the result will probably be the same as the current hook+callback design. > > Maybe I missed the point of this library, so bear with me if my example > is stupid: > > > > Consider a NAT router application. Does this library support processing > ingress packets in the outside->inside direction after they have been > processed by the NAT engine? Or how about IP fragments after passing the > reassembly engine? > > Implementation is not there, we have packet information, and I guess > with more processing of packets, the proper flow information can be > created for various cases. But my concern is if this should be in DPDK? > > I was thinking to provide API to the application to give the flow > information with a specific key, and rest of the processing can be done > in upper layer, who calls these APIs. > > > > > > > Generally, a generic flow processing library would be great; but such a > library would need to support flow processing applications, not just byte > counting. Four key functions would be required: 1. Identify which flow a > packet belongs to (or "not found"), 2. Create a flow, 3. Destroy a flow, > and 4. Iterate through flows (e.g. for aging or listing purposes). > > Agreed, and where should this be? > Part of DPDK, or DPDK providing some APIs to enable this kind of library > on top of DPDK? > Part of DPDK, so it will take advantage of any offload features provided by the advanced NICs. Most network security appliances are flow based, not packet based, so I thought your RFC intended to add flow support beyond RSS hashing to DPDK 17.08. Our StraightShaper product is flow based and stateful for each flow. As a simplified example, consider a web server implemented using DPDK... It must get all the packets related to the HTTP request, regardless how these packets arrive (possibly fragmented, possibly via multiple interfaces through multipath routing or link aggregation, etc.). 
Your current library does not support this, so a flow based product like ours cannot use your library. But it might still be perfectly viable for IPFIX for simple L2/L3 forwarding products. Med venlig hilsen / kind regards - Morten Brørup ^ permalink raw reply [flat|nested] 145+ messages in thread
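The poll-mode alternative Morten proposes — the application calls the classification function explicitly after rte_eth_rx_burst(), instead of the library hooking the PMD RX path — looks roughly like the sketch below. To keep it self-contained, rte_eth_rx_burst() and struct rte_mbuf are replaced by trivial stubs; only the call pattern is the point.

```c
#include <assert.h>
#include <stdint.h>

struct mbuf { uint32_t pkt_len; };	/* stand-in for struct rte_mbuf */

/* Stub for rte_eth_rx_burst(): hands out a fixed burst of 4 packets. */
static uint16_t stub_rx_burst(struct mbuf **pkts, uint16_t n)
{
	static struct mbuf pool[4] = { {64}, {128}, {256}, {512} };
	uint16_t got = n < 4 ? n : 4;

	for (uint16_t i = 0; i < got; i++)
		pkts[i] = &pool[i];
	return got;
}

/* The library function the application calls explicitly (name is
 * hypothetical; real flow matching would happen here instead of a
 * plain byte count). */
static uint64_t classify_burst(struct mbuf **pkts, uint16_t n)
{
	uint64_t bytes = 0;

	for (uint16_t i = 0; i < n; i++)
		bytes += pkts[i]->pkt_len;
	return bytes;
}

/* Application RX loop: classification is just another function call,
 * placed wherever the application wants it — before or after NAT,
 * reassembly, etc. — with no hidden PMD hook. */
static uint64_t rx_loop_once(void)
{
	struct mbuf *pkts[32];
	uint16_t n = stub_rx_burst(pkts, 32);	/* rte_eth_rx_burst() in DPDK */

	return classify_burst(pkts, n);
}
```

Called immediately after the burst, this matches the hook+callback result, but the application is free to move the call later in its pipeline.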
* Re: [dpdk-dev] [RFC 17.08] Flow classification library 2017-05-09 19:24 ` Morten Brørup @ 2017-05-17 11:26 ` Ferruh Yigit 0 siblings, 0 replies; 145+ messages in thread From: Ferruh Yigit @ 2017-05-17 11:26 UTC (permalink / raw) To: Morten Brørup, Mcnamara, John, dev; +Cc: Tahhan, Maryam, Gaëtan Rivet On 5/9/2017 8:24 PM, Morten Brørup wrote: >> -----Original Message----- >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit >> Sent: Tuesday, May 9, 2017 3:38 PM >> To: Morten Brørup; Mcnamara, John; dev@dpdk.org >> Cc: Tahhan, Maryam; Gaëtan Rivet >> Subject: Re: [dpdk-dev] [RFC 17.08] Flow classification library >> >> On 5/6/2017 3:04 PM, Morten Brørup wrote: >>> Carthago delenda est: Again with the callbacks... why not just let the >> application call the library's processing functions where appropriate. The >> hook+callback design pattern tends to impose a specific framework (or order >> of execution) on the DPDK user, rather than just being a stand-alone >> library offering some functions. DPDK is not a stack; and one of the >> reasons we are moving our firmware away from Linux is to avoid being >> enforced a specific order of processing the packets (through a whole bunch >> of hooks everywhere in the stack). >>> >> >> I agree on callbacks usage, but I can't see the other option for this case. >> >> This is for additional functionality to get flow information, while >> packet processing happens. So with don't want this functionality to be >> available always or to be part of the processing. And this data requires >> each packet to be processed, what can be the "library's processing >> function" alternative can be? >> > > As I understand it, your library (and other libraries using the same hook) calls a function for each packet via the PMD RX hook. Why not just let the application call this function (i.e. the callback function) wherever the application developer thinks it is appropriate? 
If the application calls it as the first thing after rte_eth_rx_burst(), the result will probably be the same as the current hook+callback design. > Agreed, I will send an updated RFC, thanks. > >>> Maybe I missed the point of this library, so bear with me if my example >> is stupid: >>> >>> Consider a NAT router application. Does this library support processing >> ingress packets in the outside->inside direction after they have been >> processed by the NAT engine? Or how about IP fragments after passing the >> reassembly engine? >> >> Implementation is not there, we have packet information, and I guess >> with more processing of packets, the proper flow information can be >> created for various cases. But my concern is if this should be in DPDK? >> >> I was thinking to provide API to the application to give the flow >> information with a specific key, and rest of the processing can be done >> in upper layer, who calls these APIs. >> >>> >>> >>> Generally, a generic flow processing library would be great; but such a >> library would need to support flow processing applications, not just byte >> counting. Four key functions would be required: 1. Identify which flow a >> packet belongs to (or "not found"), 2. Create a flow, 3. Destroy a flow, >> and 4. Iterate through flows (e.g. for aging or listing purposes). >> >> Agreed, and where should this be? >> Part of DPDK, or DPDK providing some APIs to enable this kind of library >> on top of DPDK? >> > > Part of DPDK, so it will take advantage of any offload features provided by the advanced NICs. Most network security appliances are flow based, not packet based, so I thought your RFC intended to add flow support beyond RSS hashing to DPDK 17.08. > > Our StraightShaper product is flow based and stateful for each flow. As a simplified example, consider a web server implemented using DPDK... 
It must get all the packets related to the HTTP request, regardless how these packets arrive (possibly fragmented, possibly via multiple interfaces through multipath routing or link aggregation, etc.). Your current library does not support this, so a flow based product like ours cannot use your library. But it might still be perfectly viable for IPFIX for simple L2/L3 forwarding products. > > > Med venlig hilsen / kind regards > - Morten Brørup > ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC 17.08] Flow classification library 2017-04-21 10:38 ` [dpdk-dev] [RFC 17.08] Flow classification library Gaëtan Rivet 2017-05-03 9:15 ` Mcnamara, John @ 2017-05-09 13:26 ` Ferruh Yigit 1 sibling, 0 replies; 145+ messages in thread From: Ferruh Yigit @ 2017-05-09 13:26 UTC (permalink / raw) To: Gaëtan Rivet; +Cc: dev, John McNamara, Maryam Tahhan On 4/21/2017 11:38 AM, Gaëtan Rivet wrote: > Hi Ferruh, > > On Thu, Apr 20, 2017 at 07:54:47PM +0100, Ferruh Yigit wrote: >> DPDK works with packets, but some network administration tools works based on >> flow information. >> >> This library is suggested to provide helper APIs to convert packet based >> information to the flow records. Library header file has more comments on >> how library works and provided APIs. >> >> Packets to flow conversion will cause performance drop, that is why this >> conversion can be enabled and disabled dynamically by application. >> >> Initial implementation in mind is to provide support for IPFIX metering process >> but library planned to be as generic as possible. And flow information provided >> by this library is missing to implement full IPFIX features, but this is planned >> to be initial step. >> > > In order to be generic, would it not be interesting to specify the flow > as a generic rte_flow_item list? Some specific IPFIX items are not > expressed currently in rte_flow (e.g. packet size), but they could be added. > > This library could consist in an rte_flow_item to IPFIX translation. Agreed, it would be better to be able to use rte_flow, but I am not sure if rte_flow will be enough for this case. rte_flow is for creating flow rules at the PMD level, but what this library aims at is collecting flow information, independent of whether the underlying PMD implements rte_flow or not. So the issues with using rte_flow for this use case are: 1- It may not be implemented for all PMDs (including virtual ones). 2- It may conflict with other rte_flow rules created by the user.
3- It may not gather all information required. > > The inverse approach could be used, but seems backward to me. It makes > more sense to support DPDK idioms and open them to standards by > proper APIs than including standards in internals and introduce > translation layers between DPDK components. > >> It is possible to define flow with various flow keys, but currently only one >> type of flow defined in the library, which is more generic, and it offloads >> fine grained flow analysis to the application. Library enables expanding for >> other flow types. >> > > I'm not sure I understand the purpose of this flow key, generic > is too general of a hint to define the possible cases. > > However, my intuition is that the flow type describe a filter to > restrict the flow classification to specific patterns instead of all > supported ones. Yes, that is the intention. The user can define a flow by key. And IPFIX supports many flow features that are missing right now. > > This library thus resembles using the action RTE_FLOW_ACTION_TYPE_COUNT, then > retrieved using rte_flow_query_count. The rte_flow_item aggregated with > the rte_flow_query_count structure could be sufficient to derive IPFIX > meters? For counting, the COUNT action looks like a good candidate. It looks like it is hard to build flow classification functionality completely on top of rte_flow, but rte_flow can be used where appropriate, as in this case.
>> >> Thanks, >> ferruh >> >> cc: John McNamara <john.mcnamara@intel.com> >> cc: Maryam Tahhan <maryam.tahhan@intel.com> >> >> Ferruh Yigit (1): >> flow_classify: add librte_flow_classify library >> >> config/common_base | 5 + >> doc/api/doxy-api-index.md | 1 + >> doc/api/doxy-api.conf | 1 + >> doc/guides/rel_notes/release_17_05.rst | 1 + >> lib/Makefile | 2 + >> lib/librte_flow_classify/Makefile | 50 +++++ >> lib/librte_flow_classify/rte_flow_classify.c | 34 ++++ >> lib/librte_flow_classify/rte_flow_classify.h | 202 +++++++++++++++++++++ >> .../rte_flow_classify_version.map | 10 + >> 9 files changed, 306 insertions(+) >> create mode 100644 lib/librte_flow_classify/Makefile >> create mode 100644 lib/librte_flow_classify/rte_flow_classify.c >> create mode 100644 lib/librte_flow_classify/rte_flow_classify.h >> create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map >> >> -- >> 2.9.3 >> > ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [RFC v2] Flow classification library 2017-04-20 18:54 [dpdk-dev] [RFC 17.08] Flow classification library Ferruh Yigit 2017-04-20 18:54 ` [dpdk-dev] [RFC 17.08] flow_classify: add librte_flow_classify library Ferruh Yigit 2017-04-21 10:38 ` [dpdk-dev] [RFC 17.08] Flow classification library Gaëtan Rivet @ 2017-05-18 18:12 ` Ferruh Yigit 2017-05-18 18:12 ` [dpdk-dev] [RFC v2] flow_classify: add librte_flow_classify library Ferruh Yigit ` (2 more replies) 2 siblings, 3 replies; 145+ messages in thread From: Ferruh Yigit @ 2017-05-18 18:12 UTC (permalink / raw) To: dev; +Cc: Ferruh Yigit, John McNamara, Maryam Tahhan DPDK works with packets, but some network administration tools work based on flow information. This library is suggested to provide a helper API to convert packet-based information to flow records. Basically, the library consists of a single API that takes packets, a flow definition and an action as parameters and provides flow stats based on the action. The application should call the API for all received packets. The library header file has more comments on how the library works and the provided APIs. Packet-to-flow conversion will cause a performance drop, which is why the conversion is done on demand by an API call provided by this library. The initial implementation in mind is to provide support for the IPFIX metering process, but the library is planned to be as generic as possible. The flow information provided by this library is not sufficient to implement full IPFIX features, but this is planned as an initial step. Flows are defined using rte_flow, and measurements (actions) are also provided by rte_flow. To support more IPFIX measurements, the implementation may require extending rte_flow in addition to implementing this library. Since both flows and actions are defined by rte_flow, it is possible to consider this library as an rte_flow software fallback. 
And in case the underlying hardware supports the provided flow and action, the implementation of this library may prefer to use that hardware support to get the requested stats; for the actions that are not supported by hardware, this library will implement the ways to get the stats. It would be more beneficial to shape this library to cover more use cases, so please feel free to comment on possible other use cases and desired functionalities. Changes to the previous version of the RFC: v2: * library uses rte_flow to define flows and actions. * no more callbacks used; the user should call the API in poll mode for flow stats. * library no longer maintains any flow data; all flow-related stats are returned by the API call and then forgotten. Thanks, ferruh cc: John McNamara <john.mcnamara@intel.com> cc: Maryam Tahhan <maryam.tahhan@intel.com> Ferruh Yigit (1): flow_classify: add librte_flow_classify library config/common_base | 5 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + doc/guides/rel_notes/release_17_08.rst | 1 + lib/Makefile | 2 + lib/librte_flow_classify/Makefile | 50 ++++++ lib/librte_flow_classify/rte_flow_classify.c | 72 ++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 129 +++++++++++++++ .../rte_flow_classify_version.map | 7 ++ mk/rte.app.mk | 1 + 10 files changed, 269 insertions(+) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map -- 2.9.3 ^ permalink raw reply [flat|nested] 145+ messages in thread
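[Editor's note] The v2 contract described above — a single poll-mode call that takes packets plus a flow definition and returns stats without retaining any state — can be mocked in a few lines. Everything here is a hypothetical model, not the proposed DPDK API: `struct pkt`, `struct filter` and `classify_count` merely stand in for mbufs, an rte_flow pattern and the stats-get call.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mock of the v2 contract -- none of these names are the
 * proposed API.  'struct pkt' stands in for an mbuf and 'struct filter'
 * for an rte_flow pattern plus a COUNT-style action. */
struct pkt { uint32_t src_ip, dst_ip; uint8_t proto; };
struct filter { uint32_t src_ip, dst_ip; uint8_t proto; };

/* Called per received burst, poll-mode style.  The result lives only in
 * the return value: nothing is stored between calls, mirroring "all
 * flow-related stats are returned by the API call and then forgotten". */
static uint64_t
classify_count(const struct pkt *pkts, size_t nb_pkts,
	       const struct filter *f)
{
	uint64_t hits = 0;
	size_t i;

	for (i = 0; i < nb_pkts; i++)
		if (pkts[i].src_ip == f->src_ip &&
		    pkts[i].dst_ip == f->dst_ip &&
		    pkts[i].proto == f->proto)
			hits++;
	return hits;
}
```

Because the library keeps no per-flow state, any aggregation across bursts (as IPFIX needs) remains the application's job.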
* [dpdk-dev] [RFC v2] flow_classify: add librte_flow_classify library 2017-05-18 18:12 ` [dpdk-dev] [RFC v2] " Ferruh Yigit @ 2017-05-18 18:12 ` Ferruh Yigit 2017-05-19 16:30 ` [dpdk-dev] [RFC v2] Flow classification library Iremonger, Bernard 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit 2 siblings, 0 replies; 145+ messages in thread From: Ferruh Yigit @ 2017-05-18 18:12 UTC (permalink / raw) To: dev Cc: Ferruh Yigit, John McNamara, Maryam Tahhan, Bernard Iremonger, Adrien Mazarguil Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> --- Cc: Bernard Iremonger <bernard.iremonger@intel.com> Cc: Adrien Mazarguil <adrien.mazarguil@6wind.com> RFC v2: * prefer user called functions to callbacks * use rte_flow to define flows and actions --- config/common_base | 5 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + doc/guides/rel_notes/release_17_08.rst | 1 + lib/Makefile | 2 + lib/librte_flow_classify/Makefile | 50 ++++++++ lib/librte_flow_classify/rte_flow_classify.c | 72 ++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 129 +++++++++++++++++++++ .../rte_flow_classify_version.map | 7 ++ mk/rte.app.mk | 1 + 10 files changed, 269 insertions(+) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/config/common_base b/config/common_base index 8907bea..3a7e73a 100644 --- a/config/common_base +++ b/config/common_base @@ -651,6 +651,11 @@ CONFIG_RTE_LIBRTE_IP_FRAG_TBL_STAT=n CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index f5f1f19..d18c2b6 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -98,6 +98,7 @@ There are many libraries, so their 
headers may be grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index ca9194f..94f3d0f 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf @@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_hash \ lib/librte_ip_frag \ lib/librte_jobstats \ diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst index 74aae10..6362bb2 100644 --- a/doc/guides/rel_notes/release_17_08.rst +++ b/doc/guides/rel_notes/release_17_08.rst @@ -152,6 +152,7 @@ The libraries prepended with a plus sign were incremented in this version. librte_distributor.so.1 librte_eal.so.4 librte_ethdev.so.6 + + librte_flow_classify.so.1 librte_hash.so.2 librte_ip_frag.so.1 librte_jobstats.so.1 diff --git a/lib/Makefile b/lib/Makefile index 07e1fd0..e63cd61 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -80,6 +80,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..c57e9a3 --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,50 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) := rte_flow_classify.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..dfbb84a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,72 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include "rte_flow_classify.h" + +static int +stats_get_one(struct rte_mbuf **rx_pkts, const uint16_t nb_pkts, + struct rte_flow_classify_filter *filter) +{ + + if (filter->stats.available_space == 0) + filter->stats.used_space = 0; + + (void)nb_pkts; + (void)rx_pkts; + + return 0; +} + +int +rte_flow_classify_stats_get(struct rte_mbuf **rx_pkts, const uint16_t nb_pkts, + struct rte_flow_classify_filter filter[]) +{ + struct rte_flow_classify_filter *current_filter; + int ret; + + if (rx_pkts == NULL || filter == NULL) + return -EINVAL; + + if (nb_pkts == 0) + return -EINVAL; + + for (current_filter = filter; current_filter->action; + current_filter = ++filter) { + + ret = stats_get_one(rx_pkts, nb_pkts, current_filter); + if (ret) + return -EFAULT; + } + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..507c91c --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,129 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * Application should define the flow and measurement criteria (action) for it. + * + * Library doesn't maintain any flow records itself, flow information is + * returned to upper layer only for given packets. + * + * It is application's responsibility to call rte_flow_classify_stats_get() + * after every packet reception. Application should provide the flow type + * it is interested in, measurement to apply that flow and storage to put results + * with rte_flow_classify_stats_get() API. 
+ * + * Usage: + * - application calls rte_flow_classify_stats_get() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * convert packet information to flow information with some measurements. + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * Flow stats + * + * For single action an array of stats can be returned by API. Technically each + * packet can return a stat at max. + * + * Storage for stats is provided by application, library should know available + * space, and should return the number of used space. + * + * stats type is based on what measurement (action) requested by application. + * + */ +struct rte_flow_classify_stats { + unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +/** + * Flow filter + * + * Structure defines the flow, the action to apply that flow and status. + * + * Application can provide an array of filters. + * + */ +struct rte_flow_classify_filter { + const struct rte_flow_item *pattern; + const struct rte_flow_attr attr; + enum rte_flow_action_type action; + struct rte_flow_classify_stats stats; +}; + +/** +* Get flow classification stats for given mbufs. +* +* Provided filter parameter includes: +* - flow definition +* - measurement (action) to apply flow +* - output stats type which is defined by action and storage provided by caller +* +* Application can provide an array of filters. +* +* @param rx_pkts +* Pointer to mbufs to process +* @param nb_pkts +* Number of mbufs to process +* @param filter +* The definition of the flow to filter +* @return +* - (0) if successful. +* - (-EINVAL) on failure. 
+*/ +int +rte_flow_classify_stats_get(struct rte_mbuf **rx_pkts, const uint16_t nb_pkts, + struct rte_flow_classify_filter filter[]); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..605a12d --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,7 @@ +DPDK_17.08 { + global: + + rte_flow_classify_stats_get; + + local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index bcaf1b3..adb6be4 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -81,6 +81,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_POWER) += -lrte_power _LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-y += --whole-archive -- 2.9.3 ^ permalink raw reply [flat|nested] 145+ messages in thread
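[Editor's note] The stats-storage contract in the quoted header — the caller supplies `available_space` slots, the library reports `used_space`, and each packet yields at most one stat — can be modeled without linking against DPDK. The sketch below mirrors the `rte_flow_classify_stats` layout with plain types; `stats_fill` and the packet-length stat are illustrative assumptions, not the patch's implementation.

```c
#include <stddef.h>
#include <stdint.h>

/* Self-contained model of the caller-provided storage contract in the
 * quoted header; 'stats_fill' and the packet-length stat are
 * illustrative assumptions, not the patch's implementation. */
struct classify_stats {
	unsigned int available_space;	/* slots the caller provided */
	unsigned int used_space;	/* slots the library filled */
	uint64_t *stats;		/* caller-owned storage */
};

/* At most one stat per packet; never overruns caller storage. */
static int
stats_fill(const uint32_t *pkt_lens, size_t nb_pkts,
	   struct classify_stats *st)
{
	size_t i;

	if (st == NULL || st->stats == NULL)
		return -1;
	st->used_space = 0;
	for (i = 0; i < nb_pkts; i++) {
		if (st->used_space == st->available_space)
			break;
		st->stats[st->used_space++] = pkt_lens[i];
	}
	return 0;
}
```

Since "each packet can return a stat at max", a caller that sizes the array to the burst size (`nb_pkts` slots) can never lose a stat to the `available_space` cutoff.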
* Re: [dpdk-dev] [RFC v2] Flow classification library 2017-05-18 18:12 ` [dpdk-dev] [RFC v2] " Ferruh Yigit 2017-05-18 18:12 ` [dpdk-dev] [RFC v2] flow_classify: add librte_flow_classify library Ferruh Yigit @ 2017-05-19 16:30 ` Iremonger, Bernard 2017-05-22 13:53 ` Ferruh Yigit 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit 2 siblings, 1 reply; 145+ messages in thread From: Iremonger, Bernard @ 2017-05-19 16:30 UTC (permalink / raw) To: Yigit, Ferruh, dev; +Cc: Yigit, Ferruh, Mcnamara, John, Tahhan, Maryam Hi Ferruh, > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit > Sent: Thursday, May 18, 2017 7:12 PM > To: dev@dpdk.org > Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John > <john.mcnamara@intel.com>; Tahhan, Maryam > <maryam.tahhan@intel.com> > Subject: [dpdk-dev] [RFC v2] Flow classification library > > DPDK works with packets, but some network administration tools works > based on flow information. > > This library is suggested to provide helper API to convert packet based > information to the flow records. > > Basically the library consist of a single API that gets packets, flow definition > and action as parameter and provides flow stats based on action. Application > should call the API for all received packets. > > Library header file has more comments on how library works and provided > APIs. > > Packets to flow conversion will cause performance drop, that is why > conversion done on demand by an API call provided by this library. > > Initial implementation in mind is to provide support for IPFIX metering > process but library planned to be as generic as possible. And flow information > provided by this library is missing to implement full IPFIX features, but this is > planned to be initial step. > > Flows are defined using rte_flow, also measurements (actions) are provided > by rte_flow. 
To support more IPFIX measurements, the implementation may > require extending rte_flow addition to implementing this library. Do you know what extensions are needed to the rte_flow code? > > Since both flows and action defined by rte_flow, it is possible to consider this > library as rte_flow software fallback. > > And in case the underlying hardware supports the provided flow and action, > in implementation details this library may prefer to use hardware support to > get the requested stats, for the actions that are not supported by hardware > this library will implement the ways to get the stats. > > It will be more beneficial to shape this library to cover more use cases, please > feel free to comment on possible other use case and desired functionalities. > > > Changes to previous version of the RFC: > v2: > * library uses rte_flow to define flows and action. > * no more callbacks used, user should call API in poll mode for flow stats. > * library no more maintain any flow data, all flow related stats returned > by API call and forgotten. 
> > Thanks, > ferruh > > cc: John McNamara <john.mcnamara@intel.com> > cc: Maryam Tahhan <maryam.tahhan@intel.com> > > Ferruh Yigit (1): > flow_classify: add librte_flow_classify library > > config/common_base | 5 + > doc/api/doxy-api-index.md | 1 + > doc/api/doxy-api.conf | 1 + > doc/guides/rel_notes/release_17_08.rst | 1 + > lib/Makefile | 2 + > lib/librte_flow_classify/Makefile | 50 ++++++++ > lib/librte_flow_classify/rte_flow_classify.c | 72 ++++++++++++ > lib/librte_flow_classify/rte_flow_classify.h | 129 > +++++++++++++++++++++ > .../rte_flow_classify_version.map | 7 ++ > mk/rte.app.mk | 1 + > 10 files changed, 269 insertions(+) > create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 > lib/librte_flow_classify/rte_flow_classify.c > create mode 100644 lib/librte_flow_classify/rte_flow_classify.h > create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map > > -- > 2.9.3 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC v2] Flow classification library 2017-05-19 16:30 ` [dpdk-dev] [RFC v2] Flow classification library Iremonger, Bernard @ 2017-05-22 13:53 ` Ferruh Yigit 2017-05-23 12:26 ` Adrien Mazarguil 0 siblings, 1 reply; 145+ messages in thread From: Ferruh Yigit @ 2017-05-22 13:53 UTC (permalink / raw) To: Iremonger, Bernard, dev; +Cc: Mcnamara, John, Tahhan, Maryam On 5/19/2017 5:30 PM, Iremonger, Bernard wrote: > Hi Ferruh, > >> -----Original Message----- >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit >> Sent: Thursday, May 18, 2017 7:12 PM >> To: dev@dpdk.org >> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John >> <john.mcnamara@intel.com>; Tahhan, Maryam >> <maryam.tahhan@intel.com> >> Subject: [dpdk-dev] [RFC v2] Flow classification library >> >> DPDK works with packets, but some network administration tools works >> based on flow information. >> >> This library is suggested to provide helper API to convert packet based >> information to the flow records. >> >> Basically the library consist of a single API that gets packets, flow definition >> and action as parameter and provides flow stats based on action. Application >> should call the API for all received packets. >> >> Library header file has more comments on how library works and provided >> APIs. >> >> Packets to flow conversion will cause performance drop, that is why >> conversion done on demand by an API call provided by this library. >> >> Initial implementation in mind is to provide support for IPFIX metering >> process but library planned to be as generic as possible. And flow information >> provided by this library is missing to implement full IPFIX features, but this is >> planned to be initial step. >> >> Flows are defined using rte_flow, also measurements (actions) are provided >> by rte_flow. To support more IPFIX measurements, the implementation may >> require extending rte_flow addition to implementing this library. 
> > Do you know what extensions are needed to the rte_flow code? Extensions may be required in two areas: 1- Defining the flow 2- Available actions For defining the flow, an update may not be required, especially in the first version of the library. But for actions, there may be some updates. The IPFIX RFC defines the Metering Process as [1] (in [2]). This library should provide helper APIs to the metering process. Currently the only action that can be used in rte_flow is COUNT; more actions can be added to help with the "packet header capturing, timestamping, sampling, classifying" tasks of the metering process. The exact list depends on what will be implemented in this release. [1] Metering Process The Metering Process generates Flow Records. Inputs to the process are packet headers, characteristics, and Packet Treatment observed at one or more Observation Points. The Metering Process consists of a set of functions that includes packet header capturing, timestamping, sampling, classifying, and maintaining Flow Records. The maintenance of Flow Records may include creating new records, updating existing ones, computing Flow statistics, deriving further Flow properties, detecting Flow expiration, passing Flow Records to the Exporting Process, and deleting Flow Records. [2] https://tools.ietf.org/html/rfc7011 > >> >> Since both flows and action defined by rte_flow, it is possible to consider this >> library as rte_flow software fallback. >> >> And in case the underlying hardware supports the provided flow and action, >> in implementation details this library may prefer to use hardware support to >> get the requested stats, for the actions that are not supported by hardware >> this library will implement the ways to get the stats. >> >> It will be more beneficial to shape this library to cover more use cases, please >> feel free to comment on possible other use case and desired functionalities. 
>> >> >> Changes to previous version of the RFC: >> v2: >> * library uses rte_flow to define flows and action. >> * no more callbacks used, user should call API in poll mode for flow stats. >> * library no more maintain any flow data, all flow related stats returned >> by API call and forgotten. >> >> Thanks, >> ferruh >> >> cc: John McNamara <john.mcnamara@intel.com> >> cc: Maryam Tahhan <maryam.tahhan@intel.com> >> >> Ferruh Yigit (1): >> flow_classify: add librte_flow_classify library >> >> config/common_base | 5 + >> doc/api/doxy-api-index.md | 1 + >> doc/api/doxy-api.conf | 1 + >> doc/guides/rel_notes/release_17_08.rst | 1 + >> lib/Makefile | 2 + >> lib/librte_flow_classify/Makefile | 50 ++++++++ >> lib/librte_flow_classify/rte_flow_classify.c | 72 ++++++++++++ >> lib/librte_flow_classify/rte_flow_classify.h | 129 >> +++++++++++++++++++++ >> .../rte_flow_classify_version.map | 7 ++ >> mk/rte.app.mk | 1 + >> 10 files changed, 269 insertions(+) >> create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 >> lib/librte_flow_classify/rte_flow_classify.c >> create mode 100644 lib/librte_flow_classify/rte_flow_classify.h >> create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map >> >> -- >> 2.9.3 > ^ permalink raw reply [flat|nested] 145+ messages in thread
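[Editor's note] One of the metering-process functions quoted from RFC 7011 above — sampling — is easy to sketch in isolation. The code below models simple count-based sampling (take every Nth packet); the `sampler` type and its semantics are assumptions for illustration and do not correspond to any existing or proposed rte_flow action.

```c
#include <stdint.h>

/* Hypothetical sketch of one metering-process building block from the
 * RFC 7011 text quoted above: count-based sampling (take every Nth
 * packet).  Names and semantics are illustrative assumptions and do
 * not correspond to any existing or proposed rte_flow action. */
struct sampler {
	uint32_t interval;	/* sample 1 packet in every 'interval' */
	uint32_t count;		/* packets seen since the last sample */
};

/* Return 1 when the current packet should be sampled, 0 otherwise. */
static int
sampler_take(struct sampler *s)
{
	if (++s->count >= s->interval) {
		s->count = 0;
		return 1;
	}
	return 0;
}
```

A future SAMPLE-style action could wrap this kind of state per flow rule, with timestamping and header capture applied only to the packets the sampler selects.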
* Re: [dpdk-dev] [RFC v2] Flow classification library 2017-05-22 13:53 ` Ferruh Yigit @ 2017-05-23 12:26 ` Adrien Mazarguil 2017-05-23 12:58 ` Ferruh Yigit 0 siblings, 1 reply; 145+ messages in thread From: Adrien Mazarguil @ 2017-05-23 12:26 UTC (permalink / raw) To: Ferruh Yigit; +Cc: Iremonger, Bernard, dev, Mcnamara, John, Tahhan, Maryam On Mon, May 22, 2017 at 02:53:28PM +0100, Ferruh Yigit wrote: > On 5/19/2017 5:30 PM, Iremonger, Bernard wrote: > > Hi Ferruh, > > > >> -----Original Message----- > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit > >> Sent: Thursday, May 18, 2017 7:12 PM > >> To: dev@dpdk.org > >> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John > >> <john.mcnamara@intel.com>; Tahhan, Maryam > >> <maryam.tahhan@intel.com> > >> Subject: [dpdk-dev] [RFC v2] Flow classification library > >> > >> DPDK works with packets, but some network administration tools works > >> based on flow information. > >> > >> This library is suggested to provide helper API to convert packet based > >> information to the flow records. > >> > >> Basically the library consist of a single API that gets packets, flow definition > >> and action as parameter and provides flow stats based on action. Application > >> should call the API for all received packets. > >> > >> Library header file has more comments on how library works and provided > >> APIs. > >> > >> Packets to flow conversion will cause performance drop, that is why > >> conversion done on demand by an API call provided by this library. > >> > >> Initial implementation in mind is to provide support for IPFIX metering > >> process but library planned to be as generic as possible. And flow information > >> provided by this library is missing to implement full IPFIX features, but this is > >> planned to be initial step. > >> > >> Flows are defined using rte_flow, also measurements (actions) are provided > >> by rte_flow. 
To support more IPFIX measurements, the implementation may > >> require extending rte_flow addition to implementing this library. > > > > Do you know what extensions are needed to the rte_flow code? > > The extension may be required on two fields: > 1- Defining the flow > 2- Available actions > > For defining the flow, an update may not be required, specially at first > version of the library. > > But for action, there may be some updates. > > IPFIX RFC defines Metering process as [1], (in [2]). This library should > provide helper APIs to metering process. > > Currently only action can be used in rte_flow is COUNT, more actions can > be added to help "packet header capturing, timestamping, sampling, > classifying" tasks of the metering process. > > The exact list depends on the what will be implemented in this release. > > > [1] > Metering Process > > The Metering Process generates Flow Records. Inputs to the > process are packet headers, characteristics, and Packet Treatment > observed at one or more Observation Points. > > The Metering Process consists of a set of functions that includes > packet header capturing, timestamping, sampling, classifying, and > maintaining Flow Records. > > The maintenance of Flow Records may include creating new records, > updating existing ones, computing Flow statistics, deriving > further Flow properties, detecting Flow expiration, passing Flow > Records to the Exporting Process, and deleting Flow Records. > > [2] > https://tools.ietf.org/html/rfc7011 Since I did not take this into account in my previous answer [3], I now understand several of these requirements cannot be met by hardware (at least in the near future). Therefore I think it makes sense to leave IPFIX and more generally the maintenance of software data associated with flows to separate libraries, instead of adding many new rte_flow actions that can only be implemented in software. 
A hybrid solution as described in [3] is still needed regardless to offload
flow recognition, so that only flows of interest are reprocessed in software
to compute IPFIX and other data.

You suggested at one point to take flow rules in addition to mbufs as input
to handle that. Well, that's actually a nice approach. For this to work,
rte_flow_classify would have to use opaque handles like rte_flow, provided
back by the application when attempting to classify traffic. If the handle
is not known (e.g. MARK is unsupported), a separate API function could take
a mbuf as input and spit the related rte_flow_classify object if any.

To be clear:

1. Create classifier object:

   classify = rte_flow_classify_create([some rte_flow pattern],
       [classify-specific actions list, associated resources]);

2. Create some flow rule with a MARK action to identify it uniquely. This
   step might fail and flow can be NULL, that's not an issue:

   flow = rte_flow_create([the same pattern], MARK with id 42)

3. For each received packet:

   /*
    * Attempt HW and fall back on SW for flow identification in order to
    * update classifier flow-related data.
    */
   if (flow) {
       if (mbuf->ol_flags & PKT_RX_FDIR_ID && mbuf->hash.fdir.hi == 42)
           tmp_classify = classify;
   } else {
       tmp_classify = rte_flow_classify_lookup([classifier candidates], mbuf);
   }
   if (tmp_classify)
       rte_flow_classify_update(tmp_classify, mbuf);

4. At some point, retrieve computed data from the classifier object itself:

   rte_flow_classify_stats_get(classify, [output buffer(s)])

On the RX path, the MARK action can be enough to implement the above. When
not supported, it could also be emulated through the "sw_fallback" bit
described in [3] however if the above approach is fine, no need for that.

It's a bit more complicated to benefit from rte_flow on the TX path since
no MARK data can be returned. There is currently no other solution than
doing it all in software anyway. 
[3] http://dpdk.org/ml/archives/dev/2017-May/066177.html -- Adrien Mazarguil 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
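The per-packet logic sketched in steps 1-4 above can be mocked as a self-contained example. All type and function names below (mock_mbuf, classify_rx, classify_lookup, and so on) are stand-ins for the proposed rte_flow_classify API and for the mbuf fields involved; they are illustrative assumptions, not actual DPDK definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the real PKT_RX_FDIR_ID mbuf offload flag. */
#define PKT_RX_FDIR_ID (1ULL << 13)

/* Simplified mbuf carrying only the fields used by the MARK check. */
struct mock_mbuf {
	uint64_t ol_flags;
	uint32_t fdir_hi;   /* stands in for mbuf->hash.fdir.hi */
	uint32_t pkt_len;
};

struct flow_classify {
	uint32_t mark_id;   /* MARK id programmed via rte_flow_create() */
	uint64_t n_packets; /* COUNT-style statistics */
	uint64_t n_bytes;
};

/* SW fallback lookup: a real implementation would match the mbuf
 * against the rte_flow pattern; here any packet matches the first
 * candidate to keep the mock short. */
static struct flow_classify *
classify_lookup(struct flow_classify *candidates, size_t n,
		const struct mock_mbuf *m)
{
	(void)m;
	return n ? &candidates[0] : NULL;
}

static void
classify_update(struct flow_classify *c, const struct mock_mbuf *m)
{
	c->n_packets++;
	c->n_bytes += m->pkt_len;
}

/* Per-packet step 3 from the thread: try the HW MARK first when the
 * flow rule could be installed (hw_flow_ok), otherwise fall back on
 * the SW lookup. */
static void
classify_rx(struct flow_classify *c, int hw_flow_ok,
	    const struct mock_mbuf *m)
{
	struct flow_classify *tmp = NULL;

	if (hw_flow_ok) {
		if ((m->ol_flags & PKT_RX_FDIR_ID) && m->fdir_hi == c->mark_id)
			tmp = c;
	} else {
		tmp = classify_lookup(c, 1, m);
	}
	if (tmp)
		classify_update(tmp, m);
}
```

The point of the HW path is that a matching MARK id lets the application skip the software lookup entirely; only the cheap update step runs per packet.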
* Re: [dpdk-dev] [RFC v2] Flow classification library 2017-05-23 12:26 ` Adrien Mazarguil @ 2017-05-23 12:58 ` Ferruh Yigit 2017-05-23 13:30 ` Adrien Mazarguil 0 siblings, 1 reply; 145+ messages in thread From: Ferruh Yigit @ 2017-05-23 12:58 UTC (permalink / raw) To: Adrien Mazarguil; +Cc: Iremonger, Bernard, dev, Mcnamara, John, Tahhan, Maryam On 5/23/2017 1:26 PM, Adrien Mazarguil wrote: > On Mon, May 22, 2017 at 02:53:28PM +0100, Ferruh Yigit wrote: >> On 5/19/2017 5:30 PM, Iremonger, Bernard wrote: >>> Hi Ferruh, >>> >>>> -----Original Message----- >>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit >>>> Sent: Thursday, May 18, 2017 7:12 PM >>>> To: dev@dpdk.org >>>> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John >>>> <john.mcnamara@intel.com>; Tahhan, Maryam >>>> <maryam.tahhan@intel.com> >>>> Subject: [dpdk-dev] [RFC v2] Flow classification library >>>> >>>> DPDK works with packets, but some network administration tools works >>>> based on flow information. >>>> >>>> This library is suggested to provide helper API to convert packet based >>>> information to the flow records. >>>> >>>> Basically the library consist of a single API that gets packets, flow definition >>>> and action as parameter and provides flow stats based on action. Application >>>> should call the API for all received packets. >>>> >>>> Library header file has more comments on how library works and provided >>>> APIs. >>>> >>>> Packets to flow conversion will cause performance drop, that is why >>>> conversion done on demand by an API call provided by this library. >>>> >>>> Initial implementation in mind is to provide support for IPFIX metering >>>> process but library planned to be as generic as possible. And flow information >>>> provided by this library is missing to implement full IPFIX features, but this is >>>> planned to be initial step. >>>> >>>> Flows are defined using rte_flow, also measurements (actions) are provided >>>> by rte_flow. 
To support more IPFIX measurements, the implementation may >>>> require extending rte_flow addition to implementing this library. >>> >>> Do you know what extensions are needed to the rte_flow code? >> >> The extension may be required on two fields: >> 1- Defining the flow >> 2- Available actions >> >> For defining the flow, an update may not be required, specially at first >> version of the library. >> >> But for action, there may be some updates. >> >> IPFIX RFC defines Metering process as [1], (in [2]). This library should >> provide helper APIs to metering process. >> >> Currently only action can be used in rte_flow is COUNT, more actions can >> be added to help "packet header capturing, timestamping, sampling, >> classifying" tasks of the metering process. >> >> The exact list depends on the what will be implemented in this release. >> >> >> [1] >> Metering Process >> >> The Metering Process generates Flow Records. Inputs to the >> process are packet headers, characteristics, and Packet Treatment >> observed at one or more Observation Points. >> >> The Metering Process consists of a set of functions that includes >> packet header capturing, timestamping, sampling, classifying, and >> maintaining Flow Records. >> >> The maintenance of Flow Records may include creating new records, >> updating existing ones, computing Flow statistics, deriving >> further Flow properties, detecting Flow expiration, passing Flow >> Records to the Exporting Process, and deleting Flow Records. >> >> [2] >> https://tools.ietf.org/html/rfc7011 > > Since I did not take this into account in my previous answer [3], I now > understand several of these requirements cannot be met by hardware (at least > in the near future). Therefore I think it makes sense to leave IPFIX and > more generally the maintenance of software data associated with flows to > separate libraries, instead of adding many new rte_flow actions that can > only be implemented in software. 
> > A hybrid solution as described in [3] is still needed regardless to offload > flow recognition, so that only flows of interest are reprocessed in software > to compute IPFIX and other data. > > You suggested at one point to take flow rules in addition to mbufs as input > to handle that. Well, that's actually a nice approach. > > For this to work, rte_flow_classify would have to use opaque handles like > rte_flow, provided back by the application when attempting to classify > traffic. If the handle is not known (e.g. MARK is unsupported), a separate > API function could take a mbuf as input and spit the related > rte_flow_classify object if any. > > To be clear: > > 1. Create classifier object: > > classify = rte_flow_classify_create([some rte_flow pattern], > [classify-specific actions list, associated resources]); > > 2. Create some flow rule with a MARK action to identify it uniquely. This > step might fail and flow can be NULL, that's not an issue: > > flow = rte_flow_create([the same pattern], MARK with id 42) > > 3. For each received packet: > > /* > * Attempt HW and fall back on SW for flow identification in order to > * update classifier flow-related data. > */ > if (flow) { > if (mbuf->ol_flags & PKT_RX_FDIR_ID && mbuf->hash.fdir.hi == 42) > tmp_classify = classify; > } else { > tmp_classify = rte_flow_classify_lookup([classifier candidates], mbuf); > } > if (tmp_classify) > rte_flow_classify_update(tmp_classify, mbuf); > > 4. At some point, retrieve computed data from the classifier object itself: > > rte_flow_classify_stats_get(classify, [output buffer(s)]) > > On the RX path, the MARK action can be enough to implement the above. When > not supported, it could also be emulated through the "sw_fallback" bit > described in [3] however if the above approach is fine, no need for that. > > It's a bit more complicated to benefit from rte_flow on the TX path since no > MARK data can be returned. 
There is currently no other solution than doing > it all in software anyway. > > [3] http://dpdk.org/ml/archives/dev/2017-May/066177.html > Using the MARK action is a good idea indeed, but I believe the software version needs to be implemented anyway, so the MARK action can be a next-step optimization. Another thing is, having 3. and 4. as separate steps means rte_flow_classify has to maintain the flow records in the library for an unknown time. The application needs to explicitly make the call for 3., so why not return the output buffers immediately so the application can do the required association of data to the ports? Mainly, I would like to keep the same API logic as in RFC v2, but agree to add two new APIs: rte_flow_classify_create() rte_flow_classify_destroy() Create will return an opaque rte_flow object (as done in rte_flow); meanwhile it can create some masks in the rte_flow object to help parse the mbuf. And rte_flow_classify_stats_get() will take the rte_flow object and stats as parameters. Also, instead of getting an array of filters, to simplify the logic, it will get one rte_flow object at a time. I am planning to send an updated RFC (v3) as above; if there is no major objection, we can continue to discuss on that RFC. Thanks, ferruh ^ permalink raw reply [flat|nested] 145+ messages in thread
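The idea of precomputing "some masks ... to help parsing mbuf" at create time can be sketched as a self-contained mock. The classify_key structure, the field set, and the function names below are hypothetical simplifications of the proposed create()/destroy() pair, not the actual library API:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical match key; a real pattern would be built from
 * rte_flow items rather than a fixed struct. */
struct classify_key {
	uint32_t src_ip;
	uint32_t dst_ip;
	uint8_t proto;
};

struct flow_classify {
	struct classify_key key;  /* spec, pre-masked at create time */
	struct classify_key mask;
	uint64_t n_packets;
};

/* create(): derive the masks once so the per-packet path is a plain
 * masked compare with no pattern walking. */
static struct flow_classify *
classify_create(const struct classify_key *spec,
		const struct classify_key *mask)
{
	struct flow_classify *c = calloc(1, sizeof(*c));

	if (c == NULL)
		return NULL;
	c->mask = *mask;
	c->key.src_ip = spec->src_ip & mask->src_ip;
	c->key.dst_ip = spec->dst_ip & mask->dst_ip;
	c->key.proto = spec->proto & mask->proto;
	return c;
}

/* Per-packet match against the precomputed masks. */
static int
classify_match(const struct flow_classify *c, const struct classify_key *pkt)
{
	return (pkt->src_ip & c->mask.src_ip) == c->key.src_ip &&
	       (pkt->dst_ip & c->mask.dst_ip) == c->key.dst_ip &&
	       (pkt->proto & c->mask.proto) == c->key.proto;
}

static void
classify_destroy(struct flow_classify *c)
{
	free(c);
}
```

The design point is that all pattern interpretation happens once in classify_create(), keeping classify_match() branch-light on the data path.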
* Re: [dpdk-dev] [RFC v2] Flow classification library 2017-05-23 12:58 ` Ferruh Yigit @ 2017-05-23 13:30 ` Adrien Mazarguil 2017-05-23 16:42 ` Ferruh Yigit 0 siblings, 1 reply; 145+ messages in thread From: Adrien Mazarguil @ 2017-05-23 13:30 UTC (permalink / raw) To: Ferruh Yigit; +Cc: Iremonger, Bernard, dev, Mcnamara, John, Tahhan, Maryam On Tue, May 23, 2017 at 01:58:44PM +0100, Ferruh Yigit wrote: > On 5/23/2017 1:26 PM, Adrien Mazarguil wrote: > > On Mon, May 22, 2017 at 02:53:28PM +0100, Ferruh Yigit wrote: > >> On 5/19/2017 5:30 PM, Iremonger, Bernard wrote: > >>> Hi Ferruh, > >>> > >>>> -----Original Message----- > >>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit > >>>> Sent: Thursday, May 18, 2017 7:12 PM > >>>> To: dev@dpdk.org > >>>> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John > >>>> <john.mcnamara@intel.com>; Tahhan, Maryam > >>>> <maryam.tahhan@intel.com> > >>>> Subject: [dpdk-dev] [RFC v2] Flow classification library > >>>> > >>>> DPDK works with packets, but some network administration tools works > >>>> based on flow information. > >>>> > >>>> This library is suggested to provide helper API to convert packet based > >>>> information to the flow records. > >>>> > >>>> Basically the library consist of a single API that gets packets, flow definition > >>>> and action as parameter and provides flow stats based on action. Application > >>>> should call the API for all received packets. > >>>> > >>>> Library header file has more comments on how library works and provided > >>>> APIs. > >>>> > >>>> Packets to flow conversion will cause performance drop, that is why > >>>> conversion done on demand by an API call provided by this library. > >>>> > >>>> Initial implementation in mind is to provide support for IPFIX metering > >>>> process but library planned to be as generic as possible. 
And flow information > >>>> provided by this library is missing to implement full IPFIX features, but this is > >>>> planned to be initial step. > >>>> > >>>> Flows are defined using rte_flow, also measurements (actions) are provided > >>>> by rte_flow. To support more IPFIX measurements, the implementation may > >>>> require extending rte_flow addition to implementing this library. > >>> > >>> Do you know what extensions are needed to the rte_flow code? > >> > >> The extension may be required on two fields: > >> 1- Defining the flow > >> 2- Available actions > >> > >> For defining the flow, an update may not be required, specially at first > >> version of the library. > >> > >> But for action, there may be some updates. > >> > >> IPFIX RFC defines Metering process as [1], (in [2]). This library should > >> provide helper APIs to metering process. > >> > >> Currently only action can be used in rte_flow is COUNT, more actions can > >> be added to help "packet header capturing, timestamping, sampling, > >> classifying" tasks of the metering process. > >> > >> The exact list depends on the what will be implemented in this release. > >> > >> > >> [1] > >> Metering Process > >> > >> The Metering Process generates Flow Records. Inputs to the > >> process are packet headers, characteristics, and Packet Treatment > >> observed at one or more Observation Points. > >> > >> The Metering Process consists of a set of functions that includes > >> packet header capturing, timestamping, sampling, classifying, and > >> maintaining Flow Records. > >> > >> The maintenance of Flow Records may include creating new records, > >> updating existing ones, computing Flow statistics, deriving > >> further Flow properties, detecting Flow expiration, passing Flow > >> Records to the Exporting Process, and deleting Flow Records. 
> >> > >> [2] > >> https://tools.ietf.org/html/rfc7011 > > > > Since I did not take this into account in my previous answer [3], I now > > understand several of these requirements cannot be met by hardware (at least > > in the near future). Therefore I think it makes sense to leave IPFIX and > > more generally the maintenance of software data associated with flows to > > separate libraries, instead of adding many new rte_flow actions that can > > only be implemented in software. > > > > A hybrid solution as described in [3] is still needed regardless to offload > > flow recognition, so that only flows of interest are reprocessed in software > > to compute IPFIX and other data. > > > > You suggested at one point to take flow rules in addition to mbufs as input > > to handle that. Well, that's actually a nice approach. > > > > For this to work, rte_flow_classify would have to use opaque handles like > > rte_flow, provided back by the application when attempting to classify > > traffic. If the handle is not known (e.g. MARK is unsupported), a separate > > API function could take a mbuf as input and spit the related > > rte_flow_classify object if any. > > > > To be clear: > > > > 1. Create classifier object: > > > > classify = rte_flow_classify_create([some rte_flow pattern], > > [classify-specific actions list, associated resources]); > > > > 2. Create some flow rule with a MARK action to identify it uniquely. This > > step might fail and flow can be NULL, that's not an issue: > > > > flow = rte_flow_create([the same pattern], MARK with id 42) > > > > 3. For each received packet: > > > > /* > > * Attempt HW and fall back on SW for flow identification in order to > > * update classifier flow-related data. 
> > */ > > if (flow) { > > if (mbuf->ol_flags & PKT_RX_FDIR_ID && mbuf->hash.fdir.hi == 42) > > tmp_classify = classify; > > } else { > > tmp_classify = rte_flow_classify_lookup([classifier candidates], mbuf); > > } > > if (tmp_classify) > > rte_flow_classify_update(tmp_classify, mbuf); > > > > 4. At some point, retrieve computed data from the classifier object itself: > > > > rte_flow_classify_stats_get(classify, [output buffer(s)]) > > > > On the RX path, the MARK action can be enough to implement the above. When > > not supported, it could also be emulated through the "sw_fallback" bit > > described in [3] however if the above approach is fine, no need for that. > > > > It's a bit more complicated to benefit from rte_flow on the TX path since no > > MARK data can be returned. There is currently no other solution than doing > > it all in software anyway. > > > > [3] http://dpdk.org/ml/archives/dev/2017-May/066177.html > > > > Using MARK action is good idea indeed, but I believe the software > version needs to be implemented anyway, so the MARK action can be next > step optimization. Does it mean you are fine with the separation of RFCv2's rte_flow_classify_stats_get() into a sort of rte_flow_classify_lookup() and rte_flow_classify_update() as described? There is otherwise no change needed to benefit from MARK optimization besides providing the ability to not perform the lookup step when not necessary. > And another thing is, having 3. and 4. as separate steps means > rte_flow_classify to maintain the flow_records in the library for an > unknown time. I think it should be an application's problem. The library does not necessarily allocate anything since the related temporary storage can be provided as arguments to rte_flow_classify_create(). Expiration could be handled by making rte_flow_classify_lookup() return some error code when the flow has expired.
Application would then be free to call rte_flow_classify_destroy() directly or retrieve stats one last time through rte_flow_classify_stats_get() before that. > Application needs to explicitly do the call for 3., why not return the > output buffers immediately so the application do the required > association of data to the ports. 3. is typically done in the data path and must be as fast as possible, right? The way I see it, in many cases if the flow is already identified (classifier object provided) the mbuf does not even need to be parsed for associated counters to be incremented. Also because rte_flow_classify_stats_get() could perform extra computation to convert internal data to an application-friendly format. > Mainly, I would like to keep same API logic in RFC v2, but agree to add > two new APIs: > rte_flow_classify_create() > rte_flow_classify_destroy() > > Create will return and opaque rte_flow object (as done in rte_flow), > meanwhile it can create some masks in rte_flow object to help parsing mbuf. OK, I suggest naming that object "struct rte_flow_classify" to avoid confusion though. > And rte_flow_classify_stats_get() will get rte_flow object and stats as > parameter. Also instead of getting an array of filters, to simplify the > logic, it will get a rte_flow object at a time. Perfect. > I am planning to send an updated RFC (v3) as above, if there is no major > objection, we can continue to discuss on that RFC. Except for my above comments, looks like we're converging. -- Adrien Mazarguil 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
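The expiration scheme discussed above (the lookup/update step returning an error code once a flow has expired, after which the application fetches final stats and destroys the object) could look roughly like the mock below. The timeout field and the -EAGAIN return value are assumptions for illustration only:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

struct flow_classify {
	uint64_t n_packets;
	uint64_t last_seen; /* time of the last matching packet */
	uint64_t timeout;   /* idle time after which the flow expires */
};

/* Per-packet update step: returns 0 on success, -EAGAIN once the flow
 * has idled past its timeout, signalling the application that it may
 * retrieve the stats one last time and then destroy the object. */
static int
classify_update(struct flow_classify *c, uint64_t now)
{
	if (now - c->last_seen > c->timeout)
		return -EAGAIN;
	c->last_seen = now;
	c->n_packets++;
	return 0;
}

/* Final (or periodic) stats retrieval, kept trivial in this mock. */
static uint64_t
classify_stats_get(const struct flow_classify *c)
{
	return c->n_packets;
}
```

This keeps expiration entirely on the application side: the library never frees anything on its own, it only reports that the flow went idle.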
* Re: [dpdk-dev] [RFC v2] Flow classification library 2017-05-23 13:30 ` Adrien Mazarguil @ 2017-05-23 16:42 ` Ferruh Yigit 0 siblings, 0 replies; 145+ messages in thread From: Ferruh Yigit @ 2017-05-23 16:42 UTC (permalink / raw) To: Adrien Mazarguil; +Cc: Iremonger, Bernard, dev, Mcnamara, John, Tahhan, Maryam On 5/23/2017 2:30 PM, Adrien Mazarguil wrote: > On Tue, May 23, 2017 at 01:58:44PM +0100, Ferruh Yigit wrote: >> On 5/23/2017 1:26 PM, Adrien Mazarguil wrote: >>> On Mon, May 22, 2017 at 02:53:28PM +0100, Ferruh Yigit wrote: >>>> On 5/19/2017 5:30 PM, Iremonger, Bernard wrote: >>>>> Hi Ferruh, >>>>> >>>>>> -----Original Message----- >>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit >>>>>> Sent: Thursday, May 18, 2017 7:12 PM >>>>>> To: dev@dpdk.org >>>>>> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Mcnamara, John >>>>>> <john.mcnamara@intel.com>; Tahhan, Maryam >>>>>> <maryam.tahhan@intel.com> >>>>>> Subject: [dpdk-dev] [RFC v2] Flow classification library >>>>>> >>>>>> DPDK works with packets, but some network administration tools works >>>>>> based on flow information. >>>>>> >>>>>> This library is suggested to provide helper API to convert packet based >>>>>> information to the flow records. >>>>>> >>>>>> Basically the library consist of a single API that gets packets, flow definition >>>>>> and action as parameter and provides flow stats based on action. Application >>>>>> should call the API for all received packets. >>>>>> >>>>>> Library header file has more comments on how library works and provided >>>>>> APIs. >>>>>> >>>>>> Packets to flow conversion will cause performance drop, that is why >>>>>> conversion done on demand by an API call provided by this library. >>>>>> >>>>>> Initial implementation in mind is to provide support for IPFIX metering >>>>>> process but library planned to be as generic as possible. 
And flow information >>>>>> provided by this library is missing to implement full IPFIX features, but this is >>>>>> planned to be initial step. >>>>>> >>>>>> Flows are defined using rte_flow, also measurements (actions) are provided >>>>>> by rte_flow. To support more IPFIX measurements, the implementation may >>>>>> require extending rte_flow addition to implementing this library. >>>>> >>>>> Do you know what extensions are needed to the rte_flow code? >>>> >>>> The extension may be required on two fields: >>>> 1- Defining the flow >>>> 2- Available actions >>>> >>>> For defining the flow, an update may not be required, specially at first >>>> version of the library. >>>> >>>> But for action, there may be some updates. >>>> >>>> IPFIX RFC defines Metering process as [1], (in [2]). This library should >>>> provide helper APIs to metering process. >>>> >>>> Currently only action can be used in rte_flow is COUNT, more actions can >>>> be added to help "packet header capturing, timestamping, sampling, >>>> classifying" tasks of the metering process. >>>> >>>> The exact list depends on the what will be implemented in this release. >>>> >>>> >>>> [1] >>>> Metering Process >>>> >>>> The Metering Process generates Flow Records. Inputs to the >>>> process are packet headers, characteristics, and Packet Treatment >>>> observed at one or more Observation Points. >>>> >>>> The Metering Process consists of a set of functions that includes >>>> packet header capturing, timestamping, sampling, classifying, and >>>> maintaining Flow Records. >>>> >>>> The maintenance of Flow Records may include creating new records, >>>> updating existing ones, computing Flow statistics, deriving >>>> further Flow properties, detecting Flow expiration, passing Flow >>>> Records to the Exporting Process, and deleting Flow Records. 
>>>> >>>> [2] >>>> https://tools.ietf.org/html/rfc7011 >>> >>> Since I did not take this into account in my previous answer [3], I now >>> understand several of these requirements cannot be met by hardware (at least >>> in the near future). Therefore I think it makes sense to leave IPFIX and >>> more generally the maintenance of software data associated with flows to >>> separate libraries, instead of adding many new rte_flow actions that can >>> only be implemented in software. >>> >>> A hybrid solution as described in [3] is still needed regardless to offload >>> flow recognition, so that only flows of interest are reprocessed in software >>> to compute IPFIX and other data. >>> >>> You suggested at one point to take flow rules in addition to mbufs as input >>> to handle that. Well, that's actually a nice approach. >>> >>> For this to work, rte_flow_classify would have to use opaque handles like >>> rte_flow, provided back by the application when attempting to classify >>> traffic. If the handle is not known (e.g. MARK is unsupported), a separate >>> API function could take a mbuf as input and spit the related >>> rte_flow_classify object if any. >>> >>> To be clear: >>> >>> 1. Create classifier object: >>> >>> classify = rte_flow_classify_create([some rte_flow pattern], >>> [classify-specific actions list, associated resources]); >>> >>> 2. Create some flow rule with a MARK action to identify it uniquely. This >>> step might fail and flow can be NULL, that's not an issue: >>> >>> flow = rte_flow_create([the same pattern], MARK with id 42) >>> >>> 3. For each received packet: >>> >>> /* >>> * Attempt HW and fall back on SW for flow identification in order to >>> * update classifier flow-related data. 
>>> */ >>> if (flow) { >>> if (mbuf->ol_flags & PKT_RX_FDIR_ID && mbuf->hash.fdir.hi == 42) >>> tmp_classify = classify; >>> } else { >>> tmp_classify = rte_flow_classify_lookup([classifier candidates], mbuf); >>> } >>> if (tmp_classify) >>> rte_flow_classify_update(tmp_classify, mbuf); >>> >>> 4. At some point, retrieve computed data from the classifier object itself: >>> >>> rte_flow_classify_stats_get(classify, [output buffer(s)]) >>> >>> On the RX path, the MARK action can be enough to implement the above. When >>> not supported, it could also be emulated through the "sw_fallback" bit >>> described in [3] however if the above approach is fine, no need for that. >>> >>> It's a bit more complicated to benefit from rte_flow on the TX path since no >>> MARK data can be returned. There is currently no other solution than doing >>> it all in software anyway. >>> >>> [3] http://dpdk.org/ml/archives/dev/2017-May/066177.html >>> >> >> Using MARK action is good idea indeed, but I believe the software >> version needs to be implemented anyway, so the MARK action can be next >> step optimization. > > Does it mean you are you fine with the separation of RFCv2's > rte_flow_classify_stats_get() into a sort of rte_flow_classify_lookup() and > rte_flow_classify_update() as described? No indeed, I was planning to postpone that step and continue with a single rte_flow_classify_stats_get() > > There is otherwise no change needed to benefit from MARK optimization > beside providing the ability to not perform the lookup step when not > necessary. > >> And another thing is, having 3. and 4. as separate steps means >> rte_flow_classify to maintain the flow_records in the library for an >> unknown time. > > I think it should be an application's problem. The library does not > necessarily allocate anything since the related temporary storage can be > provided as arguments to rte_flow_classify_create().
> > Expiration could be handled by making rte_flow_classify_lookup() return some > error code when the flow has expired. Application would then be free to call > rte_flow_classify_destroy() directly or retrieve stats one last time through > rte_flow_classify_stats_get() before that. > >> Application needs to explicitly do the call for 3., why not return the >> output buffers immediately so the application do the required >> association of data to the ports. > > 3. is typically done in the data path and must be as fast as possible, > right? The way I see it, in many cases if the flow is already identified > (classifier object provided) the mbuf does not even need to be parsed for > associated counters to be incremented. > > Also because rte_flow_classify_stats_get() could perform extra computation > to convert internal data to an application-friendly format. > >> Mainly, I would like to keep same API logic in RFC v2, but agree to add >> two new APIs: >> rte_flow_classify_create() >> rte_flow_classify_destroy() >> >> Create will return and opaque rte_flow object (as done in rte_flow), >> meanwhile it can create some masks in rte_flow object to help parsing mbuf. > > OK, I suggest naming that object "struct rte_flow_classify" to avoid > confusion though. OK, makes sense. > >> And rte_flow_classify_stats_get() will get rte_flow object and stats as >> parameter. Also instead of getting an array of filters, to simplify the >> logic, it will get a rte_flow object at a time. > > Perfect. > >> I am planning to send an updated RFC (v3) as above, if there is no major >> objection, we can continue to discuss on that RFC. > > Except for my above comments, looks like we're converging. > ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [RFC v3] Flow classification library 2017-05-18 18:12 ` [dpdk-dev] [RFC v2] " Ferruh Yigit 2017-05-18 18:12 ` [dpdk-dev] [RFC v2] flow_classify: add librte_flow_classify library Ferruh Yigit 2017-05-19 16:30 ` [dpdk-dev] [RFC v2] Flow classification library Iremonger, Bernard @ 2017-05-25 15:46 ` Ferruh Yigit 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] flow_classify: add librte_flow_classify library Ferruh Yigit ` (7 more replies) 2 siblings, 8 replies; 145+ messages in thread From: Ferruh Yigit @ 2017-05-25 15:46 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, John McNamara, Maryam Tahhan DPDK works with packets, but some network administration tools work based on flow information. This library is suggested to provide helper APIs to convert packet-based information to flow records. Basically, the library consists of APIs to create and destroy a rule and to query the stats. The application should call the query API for all received packets. The library header file has more comments on how the library works and the provided APIs. Packet-to-flow conversion will cause a performance drop, which is why the conversion is done on demand by an API call provided by this library. The initial implementation in mind is to provide support for the IPFIX metering process, but the library is planned to be as generic as possible. The flow information provided by this library is not sufficient to implement full IPFIX features, but this is planned as an initial step. Flows are defined using rte_flow, and measurements (actions) are also provided by rte_flow. To support more IPFIX measurements, the implementation may require extending rte_flow in addition to implementing this library. Both flows and actions defined by rte_flow.h are used, so this library has a dependency on rte_flow.h. As a further step, this library can be expanded to benefit from hardware filters for better performance.
It will be more beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionality. Changes to previous versions of the RFC: v3: * add create() / destroy() APIs * query() gets rte_flow_classify object as parameter * query() gets one flow at a time v2: * library uses rte_flow to define flows and actions. * no more callbacks used, user should call API in poll mode for flow stats. * library no longer maintains any flow data; all flow-related stats are returned by the API call and then forgotten. cc: John McNamara <john.mcnamara@intel.com> cc: Maryam Tahhan <maryam.tahhan@intel.com> Ferruh Yigit (1): flow_classify: add librte_flow_classify library config/common_base | 5 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + doc/guides/rel_notes/release_17_08.rst | 1 + lib/Makefile | 2 + lib/librte_flow_classify/Makefile | 50 +++++++ lib/librte_flow_classify/rte_flow_classify.c | 153 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 149 ++++++++++++++++++++ .../rte_flow_classify_version.map | 9 ++ mk/rte.app.mk | 1 + 10 files changed, 372 insertions(+) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map -- 2.9.3 ^ permalink raw reply [flat|nested] 145+ messages in thread
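Under the v3 changes listed above (query() takes one rte_flow_classify object at a time and returns the stats immediately, with the library keeping no flow data afterwards), a minimal mock of the query path might look like this. All names and the trivial protocol-only "pattern" are simplified stand-ins, not the actual library API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified packet: only the fields this mock pattern looks at. */
struct mock_mbuf {
	uint8_t proto;
	uint32_t pkt_len;
};

/* Trivial "pattern": match on IP protocol only. */
struct flow_classify {
	uint8_t proto;
};

struct classify_stats {
	uint64_t n_packets;
	uint64_t n_bytes;
};

/* v3-style query: scans one burst for one flow, fills the
 * caller-supplied stats, and retains no state afterwards. */
static int
classify_query(const struct flow_classify *c,
	       const struct mock_mbuf *pkts, size_t n,
	       struct classify_stats *stats)
{
	size_t i;

	stats->n_packets = 0;
	stats->n_bytes = 0;
	for (i = 0; i < n; i++) {
		if (pkts[i].proto == c->proto) {
			stats->n_packets++;
			stats->n_bytes += pkts[i].pkt_len;
		}
	}
	return 0;
}
```

Because the stats buffer belongs to the caller, the application remains responsible for associating the returned data with ports and flows, matching the "returned by the API call and then forgotten" model of the v2/v3 changelog.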
* [dpdk-dev] [RFC v3] flow_classify: add librte_flow_classify library 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit @ 2017-05-25 15:46 ` Ferruh Yigit 2017-05-30 12:59 ` Iremonger, Bernard 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger ` (6 subsequent siblings) 7 siblings, 1 reply; 145+ messages in thread From: Ferruh Yigit @ 2017-05-25 15:46 UTC (permalink / raw) To: dev Cc: ferruh.yigit, John McNamara, Maryam Tahhan, Bernard Iremonger, Adrien Mazarguil Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> --- Cc: Bernard Iremonger <bernard.iremonger@intel.com> Cc: Adrien Mazarguil <adrien.mazarguil@6wind.com> RFC v3: * add create() / destroy() APIs * query() gets rte_flow_classify object as param * query() gets one flow at a time RFC v2: * prefer user called functions to callbacks * use rte_flow to define flows and actions --- config/common_base | 5 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + doc/guides/rel_notes/release_17_08.rst | 1 + lib/Makefile | 2 + lib/librte_flow_classify/Makefile | 50 +++++++ lib/librte_flow_classify/rte_flow_classify.c | 153 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 149 ++++++++++++++++++++ .../rte_flow_classify_version.map | 9 ++ mk/rte.app.mk | 1 + 10 files changed, 372 insertions(+) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/config/common_base b/config/common_base index 8907bea..3a7e73a 100644 --- a/config/common_base +++ b/config/common_base @@ -651,6 +651,11 @@ CONFIG_RTE_LIBRTE_IP_FRAG_TBL_STAT=n CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 
f5f1f19..d18c2b6 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -98,6 +98,7 @@ There are many libraries, so their headers may be grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index ca9194f..94f3d0f 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf @@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_hash \ lib/librte_ip_frag \ lib/librte_jobstats \ diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst index 74aae10..6362bb2 100644 --- a/doc/guides/rel_notes/release_17_08.rst +++ b/doc/guides/rel_notes/release_17_08.rst @@ -152,6 +152,7 @@ The libraries prepended with a plus sign were incremented in this version. librte_distributor.so.1 librte_eal.so.4 librte_ethdev.so.6 + + librte_flow_classify.so.1 librte_hash.so.2 librte_ip_frag.so.1 librte_jobstats.so.1 diff --git a/lib/Makefile b/lib/Makefile index 07e1fd0..e63cd61 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -80,6 +80,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..c57e9a3 --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,50 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. 
All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) := rte_flow_classify.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..7c99c88 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,153 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> + +LIST_HEAD(rte_flow_classify_list, rte_flow_classify) rte_flow_classify_head = + LIST_HEAD_INITIALIZER(rte_flow_classify_head); + +struct rte_flow_classify { + LIST_ENTRY(rte_flow_classify) next; + struct rte_flow_action action; + uint32_t id; +}; + +static uint32_t unique_id = 1; + +static struct rte_flow_classify * +allocate(const struct rte_flow_action *action) +{ + + struct rte_flow_classify *flow_classify = NULL; + + flow_classify = malloc(sizeof(struct rte_flow_classify)); + + if (!flow_classify) + return flow_classify; + + flow_classify->action = *action; + flow_classify->id = unique_id++; + + return flow_classify; +} + +struct rte_flow_classify * +rte_flow_classify_create(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action *action) +{ + struct rte_flow_classify *flow_classify; + + if (!attr || !pattern || !action) + return NULL; + + flow_classify = allocate(action); + if (!flow_classify) + return NULL; + + /* parse attr, pattern and action, + * create mask or hash values etc to match flow easier + * update flow_classify object to include these data */ + + LIST_INSERT_HEAD(&rte_flow_classify_head, flow_classify, next); + + return flow_classify; +} + +int +rte_flow_classify_destroy(struct rte_flow_classify *flow_classify) +{ + if (!flow_classify) + return -EINVAL; + + LIST_REMOVE(flow_classify, next); + + free(flow_classify); + + 
return 0; +} + +static int +flow_match(const struct rte_flow_classify *flow_classify, + const struct rte_mbuf *m) +{ + (void)flow_classify; + (void)m; + + return 0; +} + +static int +action_apply(const struct rte_flow_classify *flow_classify, + const struct rte_mbuf *m, + struct rte_flow_classify_stats *stats) +{ + switch (flow_classify->action.type) { + default: + return -ENOTSUP; + } + + stats->used_space++; + + (void)m; + + return 0; +} + +int +rte_flow_classify_query(const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats) +{ + const struct rte_mbuf *m; + int ret = 0; + uint16_t i; + + if (stats->available_space == 0) + return -EINVAL; + + stats->used_space = 0; + + for (i = 0; i < nb_pkts; i++) { + m = pkts[i]; + + if (!flow_match(flow_classify, m)) { + ret = action_apply(flow_classify, m, stats); + if (ret) + break; + } + } + + return ret; +} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..5440775 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,149 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * Application should define the flow and measurement criteria (action) for it. + * + * Library doesn't maintain any flow records itself, instead flow information is + * returned to upper layer only for given packets. + * + * It is application's responsibility to call rte_flow_classify_query() + * for group of packets, just after receive them or before transmit them. + * Application should provide the flow type interested in, measurement to apply + * that flow in rte_flow_classify_create() API, and should provide + * rte_flow_classify object and storage to put results in + * rte_flow_classify_query() API. + * + * Usage: + * - application calls rte_flow_classify_create() to create a rte_flow_classify + * object. 
+ * - application calls rte_flow_classify_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * convert packet information to flow information with some measurements. + * - rte_flow_classify object can be destroyed when they are no more needed via + * rte_flow_classify_destroy() + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> + +#ifdef __cplusplus +extern "C" { +#endif + +struct rte_flow_classify; + +/** + * Flow stats + * + * For single action an array of stats can be returned by API. Technically each + * packet can return a stat at max. + * + * Storage for stats is provided by application, library should know available + * space, and should return the number of used space. + * + * stats type is based on what measurement (action) requested by application. + * + */ +struct rte_flow_classify_stats { + const unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +/** + * Create a flow classify rule. + * + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] action + * Associated action + * + * @return + * A valid handle in case of success, NULL otherwise. + */ +struct rte_flow_classify * +rte_flow_classify_create(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action *action); + +/** + * Destroy a flow classify rule. + * + * @param flow_classify + * Flow rule handle to destroy + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_destroy(struct rte_flow_classify *flow_classify); + +/** + * Get flow classification stats for given packets. 
+ * + * @param flow_classify + * Flow rule object + * @param pkts + * Pointer to packets to process + * @param nb_pkts + * Number of packets to process + * @param stats + * To store stats define by action + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_query(const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..5aaf664 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,9 @@ +DPDK_17.08 { + global: + + rte_flow_classify_create; + rte_flow_classify_destroy; + rte_flow_classify_query; + + local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index bcaf1b3..adb6be4 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -81,6 +81,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_POWER) += -lrte_power _LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-y += --whole-archive -- 2.9.3 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [RFC v3] flow_classify: add librte_flow_classify library 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] flow_classify: add librte_flow_classify library Ferruh Yigit @ 2017-05-30 12:59 ` Iremonger, Bernard 0 siblings, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-05-30 12:59 UTC (permalink / raw) To: Yigit, Ferruh, dev; +Cc: Mcnamara, John, Tahhan, Maryam, Adrien Mazarguil Hi Ferruh, > -----Original Message----- > From: Yigit, Ferruh > Sent: Thursday, May 25, 2017 4:47 PM > To: dev@dpdk.org > Subject: [RFC v3] flow_classify: add librte_flow_classify library > > [RFC v3 patch quoted in full above; snipped] It is probably useful to add an rte_flow_classify_validate API. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v1 0/6] Flow classification library 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] flow_classify: add librte_flow_classify library Ferruh Yigit @ 2017-08-23 13:51 ` Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " Bernard Iremonger ` (6 more replies) 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to header file Bernard Iremonger ` (5 subsequent siblings) 7 siblings, 7 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-23 13:51 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger DPDK works with packets, but some network administration tools work based on flow information. This library is proposed to provide a helper API to convert packet-based information to flow records. Basically the library consists of APIs to validate, create and destroy a rule and to query the stats. The application should call the query API for all received packets. The library header file has more comments on how the library works and the provided APIs. Packet-to-flow conversion will cause a performance drop, which is why the conversion is done on demand by an API call provided by this library. The initial implementation provides counting of IPv4 five-tuple packets for UDP, TCP and SCTP, but the library is planned to be as generic as possible. The flow information provided by this library is not sufficient to implement full IPFIX features, but this is planned as an initial step. Flows are defined using rte_flow, and measurements (actions) are also provided by rte_flow. To support more IPFIX measurements, the implementation may require extending rte_flow in addition to extending this library. The library uses both flows and actions defined by rte_flow.h, so this library has a dependency on rte_flow.h. For further steps, this library can be expanded to benefit from hardware filters for better performance.
It will be more beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionalities. Changes since RFC v3: * added the rte_flow_classify_validate API. * librte_table ACL is used for packet matching. * a table_acl parameter has been added to all of the APIs. * an error parameter has been added to all of the APIs. This patchset also contains the following: * some changes to librte_ether and librte_table. * a bug fix to rte_table_acl.c. * the flow_classify sample application. * the flow_classify_autotest unit test program. Bernard Iremonger (5): librte_table: move structure to header file librte_table: fix acl entry add and delete functions librte_ether: initialise IPv4 protocol mask for rte_flow examples/flow_classify: flow classify sample application test: flow classify library unit tests Ferruh Yigit (1): librte_flow_classify: add librte_flow_classify library config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 625 +++++++++++++++++++++ lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_ether/rte_flow.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 559 ++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 204 +++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 ++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + lib/librte_table/rte_table_acl.c | 33 +- lib/librte_table/rte_table_acl.h | 24 + mk/rte.app.mk | 2 +- test/test/Makefile | 1 + test/test/test_flow_classify.c | 487 ++++++++++++++++ test/test/test_flow_classify.h | 184 ++++++ 20 files changed, 2840 insertions(+), 30 deletions(-) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644
lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v2 0/6] flow classification library 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger @ 2017-08-25 16:10 ` Bernard Iremonger 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger ` (5 more replies) 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 1/6] librte_table: fix acl entry add and delete functions Bernard Iremonger ` (5 subsequent siblings) 6 siblings, 6 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-25 16:10 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger
It will be more beneficial to shape this library to cover more use cases, please feel free to comment on possible other use cases and desired functionalities. Changes in v2: Patch 1, librte_table: move structure to header file, has been dropped. The code has been reworked to not access struct rte_table_acl directly. An entry_size parameter has been added to the rte_flow_classify_create function. The f_lookup function is now called instead of the rte_acl_classify function. Patch 2, librte_table: fix acl lookup function, has been added. Changes in v1, since RFC v3: added rte_flow_classify_validate API. librte_table ACL is used for packet matching. a table_acl parameter has been added to all of the API's an error parameter has been been added to all of the API's This patchset also contains the following: some changes to librte_ether and librte_table. a bug fix to rte_table_acl.c the flow_classify sample application. the flow_classify_autotest unit test program. Bernard Iremonger (5): librte_table: fix acl entry add and delete functions librte_table: fix acl lookup function librte_ether: initialise IPv4 protocol mask for rte_flow examples/flow_classify: flow classify sample application test: flow classify library unit tests Ferruh Yigit (1): librte_flow_classify: add librte_flow_classify library config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 634 +++++++++++++++++++++ lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_ether/rte_flow.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 465 +++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 +++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 ++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + lib/librte_table/rte_table_acl.c | 11 +- 
mk/rte.app.mk | 2 +- test/test/Makefile | 1 + test/test/test_flow_classify.c | 494 ++++++++++++++++ test/test/test_flow_classify.h | 186 ++++++ 19 files changed, 2744 insertions(+), 7 deletions(-) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v3 0/5] flow classification library 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " Bernard Iremonger @ 2017-08-31 14:54 ` Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger ` (5 more replies) 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 1/5] librte_table: fix acl entry add and delete functions Bernard Iremonger ` (4 subsequent siblings) 5 siblings, 6 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-31 14:54 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger
It will be more beneficial to shape this library to cover more use cases, please feel free to comment on possible other use cases and desired functionalities. Changes in v3: Patch 3 from the v2 patch set has been dropped,"librte_ether: initialise IPv4 protocol mask for rte_flow". The flow_classify sample application is now using an input file of IPv4 five tuple rules instead of hardcoded values. A minor fix to the rte_flow_classify_create() function. Changes in v2: Patch 1, librte_table: move structure to header file, has been dropped. The code has been reworked to not access struct rte_table_acl directly. An entry_size parameter has been added to the rte_flow_classify_create function. The f_lookup function is now called instead of the rte_acl_classify function. Patch 2, librte_table: fix acl lookup function, has been added. Changes in v1, since RFC v3: added rte_flow_classify_validate API. librte_table ACL is used for packet matching. a table_acl parameter has been added to all of the API's an error parameter has been been added to all of the API's Bernard Iremonger (4): librte_table: fix acl entry add and delete functions librte_table: fix acl lookup function examples/flow_classify: flow classify sample application test: flow classify library unit tests Ferruh Yigit (1): librte_flow_classify: add librte_flow_classify library config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 879 +++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 459 +++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 +++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++ .../rte_flow_classify_version.map | 10 + 
lib/librte_table/rte_table_acl.c | 11 +- mk/rte.app.mk | 2 +- test/test/Makefile | 1 + test/test/test_flow_classify.c | 494 ++++++++++++ test/test/test_flow_classify.h | 186 +++++ 19 files changed, 2996 insertions(+), 7 deletions(-) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v4 0/5] flow classification library 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger @ 2017-09-06 10:27 ` Bernard Iremonger 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 0/6] " Bernard Iremonger ` (6 more replies) 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 1/5] librte_table: fix acl entry add and delete functions Bernard Iremonger ` (4 subsequent siblings) 5 siblings, 7 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-06 10:27 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger
It would be more beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionalities.
Changes in v4:
Replaced the GET_CB_FIELD macro with a get_cb_field function in the flow classify sample application to fix a checkpatch warning.
Fixed checkpatch warnings in test_flow_classify.c.
Changes in v3:
Patch 3 from the v2 patch set, "librte_ether: initialise IPv4 protocol mask for rte_flow", has been dropped.
The flow_classify sample application now uses an input file of IPv4 five-tuple rules instead of hardcoded values.
A minor fix to the rte_flow_classify_create() function.
Changes in v2:
Patch 1, librte_table: move structure to header file, has been dropped. The code has been reworked to not access struct rte_table_acl directly.
An entry_size parameter has been added to the rte_flow_classify_create function.
The f_lookup function is now called instead of the rte_acl_classify function.
Patch 2, librte_table: fix acl lookup function, has been added.
Changes in v1, since RFC v3:
Added the rte_flow_classify_validate API.
librte_table ACL is used for packet matching.
a table_acl parameter has been added to all of the API's an error parameter has been been added to all of the API's Bernard Iremonger (4): librte_table: fix acl entry add and delete functions librte_table: fix acl lookup function examples/flow_classify: flow classify sample application test: flow classify library unit tests Ferruh Yigit (1): librte_flow_classify: add librte_flow_classify library config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 897 +++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 459 +++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 +++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++ .../rte_flow_classify_version.map | 10 + lib/librte_table/rte_table_acl.c | 11 +- mk/rte.app.mk | 2 +- test/test/Makefile | 1 + test/test/test_flow_classify.c | 493 +++++++++++ test/test/test_flow_classify.h | 186 +++++ 19 files changed, 3013 insertions(+), 7 deletions(-) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in 
thread
* [dpdk-dev] [PATCH v5 0/6] flow classification library 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger @ 2017-09-07 16:43 ` Bernard Iremonger 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 0/4] " Bernard Iremonger ` (4 more replies) 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and delete functions Bernard Iremonger ` (5 subsequent siblings) 6 siblings, 5 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-07 16:43 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger
It would be more beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionalities.
Changes in v5:
Added tests for TCP and SCTP traffic to the unit test code.
Added a patch to the packet_burst_generator code to add functions for the TCP and SCTP protocols.
Changes in v4:
Replaced the GET_CB_FIELD macro with a get_cb_field function in the flow classify sample application to fix a checkpatch warning.
Fixed checkpatch warnings in test_flow_classify.c.
Changes in v3:
Patch 3 from the v2 patch set, "librte_ether: initialise IPv4 protocol mask for rte_flow", has been dropped.
The flow_classify sample application now uses an input file of IPv4 five-tuple rules instead of hardcoded values.
A minor fix to the rte_flow_classify_create() function.
Changes in v2:
Patch 1, librte_table: move structure to header file, has been dropped. The code has been reworked to not access struct rte_table_acl directly.
An entry_size parameter has been added to the rte_flow_classify_create function.
The f_lookup function is now called instead of the rte_acl_classify function.
Patch 2, librte_table: fix acl lookup function, has been added.
Changes in v1, since RFC v3:
Added the rte_flow_classify_validate API.
librte_table ACL is used for packet matching.
a table_acl parameter has been added to all of the API's an error parameter has been been added to all of the API's Bernard Iremonger (5): librte_table: fix acl entry add and delete functions librte_table: fix acl lookup function examples/flow_classify: flow classify sample application test: add packet burst generator functions test: flow classify library unit tests Ferruh Yigit (1): librte_flow_classify: add librte_flow_classify library config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 897 +++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 459 +++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 +++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++ .../rte_flow_classify_version.map | 10 + lib/librte_table/rte_table_acl.c | 11 +- mk/rte.app.mk | 2 +- test/test/Makefile | 1 + test/test/packet_burst_generator.c | 191 +++++ test/test/packet_burst_generator.h | 22 +- test/test/test_flow_classify.c | 698 ++++++++++++++++ test/test/test_flow_classify.h | 240 ++++++ 21 files changed, 3483 insertions(+), 9 deletions(-) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map create mode 100644 
test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v6 0/4] flow classification library 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 0/6] " Bernard Iremonger @ 2017-09-29 9:18 ` Bernard Iremonger 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 " Bernard Iremonger ` (4 more replies) 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 1/4] librte_flow_classify: add librte_flow_classify library Bernard Iremonger ` (3 subsequent siblings) 4 siblings, 5 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-29 9:18 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger
It would be more beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionalities.
Changes in v6:
Dropped two librte_table patches (patches 1 and 2 in the v5 patch set).
Revised the librte_flow_classify patch to use the librte_table APIs correctly.
Changes in v5:
Added tests for TCP and SCTP traffic to the unit test code.
Added a patch to the packet_burst_generator code to add functions for the TCP and SCTP protocols.
Changes in v4:
Replaced the GET_CB_FIELD macro with a get_cb_field function in the flow classify sample application to fix a checkpatch warning.
Fixed checkpatch warnings in test_flow_classify.c.
Changes in v3:
Patch 3 from the v2 patch set, "librte_ether: initialise IPv4 protocol mask for rte_flow", has been dropped.
The flow_classify sample application now uses an input file of IPv4 five-tuple rules instead of hardcoded values.
A minor fix to the rte_flow_classify_create() function.
Changes in v2:
Patch 1, librte_table: move structure to header file, has been dropped. The code has been reworked to not access struct rte_table_acl directly.
An entry_size parameter has been added to the rte_flow_classify_create function.
The f_lookup function is now called instead of the rte_acl_classify function.
Patch 2, librte_table: fix acl lookup function, has been added.
Changes in v1, since RFC v3:
Added the rte_flow_classify_validate API.
librte_table ACL is used for packet matching.
a table_acl parameter has been added to all of the API's an error parameter has been been added to all of the API's Bernard Iremonger (3): examples/flow_classify: flow classify sample application test: add packet burst generator functions test: flow classify library unit tests Ferruh Yigit (1): librte_flow_classify: add librte_flow_classify library MAINTAINERS | 7 + config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 897 +++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 460 +++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 +++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- test/test/Makefile | 1 + test/test/packet_burst_generator.c | 191 +++++ test/test/packet_burst_generator.h | 22 +- test/test/test_flow_classify.c | 698 ++++++++++++++++ test/test/test_flow_classify.h | 240 ++++++ 21 files changed, 3486 insertions(+), 3 deletions(-) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h -- 1.9.1 ^ permalink raw reply 
[flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v7 0/4] flow classification library 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 0/4] " Bernard Iremonger @ 2017-10-02 9:31 ` Bernard Iremonger 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 " Bernard Iremonger ` (4 more replies) 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add librte_flow_classify library Bernard Iremonger ` (3 subsequent siblings) 4 siblings, 5 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-02 9:31 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger
It would be more beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionalities.
Changes in v7:
Fixed the rte_flow_classify_version.map file.
Fixed checkpatch warnings.
Changes in v6:
Dropped two librte_table patches (patches 1 and 2 in the v5 patch set).
Revised the librte_flow_classify patch to use the librte_table APIs correctly.
Changes in v5:
Added tests for TCP and SCTP traffic to the unit test code.
Added a patch to the packet_burst_generator code to add functions for the TCP and SCTP protocols.
Changes in v4:
Replaced the GET_CB_FIELD macro with a get_cb_field function in the flow classify sample application to fix a checkpatch warning.
Fixed checkpatch warnings in test_flow_classify.c.
Changes in v3:
Patch 3 from the v2 patch set, "librte_ether: initialise IPv4 protocol mask for rte_flow", has been dropped.
The flow_classify sample application now uses an input file of IPv4 five-tuple rules instead of hardcoded values.
A minor fix to the rte_flow_classify_create() function.
Changes in v2:
Patch 1, librte_table: move structure to header file, has been dropped. The code has been reworked to not access struct rte_table_acl directly.
An entry_size parameter has been added to the rte_flow_classify_create function.
The f_lookup function is now called instead of the rte_acl_classify function.
Patch 2, librte_table: fix acl lookup function, has been added.
Changes in v1, since RFC v3:
Added the rte_flow_classify_validate API.
librte_table ACL is used for packet matching.
a table_acl parameter has been added to all of the API's an error parameter has been been added to all of the API's Bernard Iremonger (3): examples/flow_classify: flow classify sample application test: add packet burst generator functions test: flow classify library unit tests Ferruh Yigit (1): librte_flow_classify: add librte_flow_classify library MAINTAINERS | 7 + config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 897 +++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 460 +++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 +++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- test/test/Makefile | 1 + test/test/packet_burst_generator.c | 191 +++++ test/test/packet_burst_generator.h | 22 +- test/test/test_flow_classify.c | 698 ++++++++++++++++ test/test/test_flow_classify.h | 240 ++++++ 21 files changed, 3486 insertions(+), 3 deletions(-) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h -- 1.9.1 ^ permalink raw reply 
[flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v8 0/4] flow classification library 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 " Bernard Iremonger @ 2017-10-17 20:26 ` Bernard Iremonger 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 " Bernard Iremonger ` (4 more replies) 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 1/4] librte_flow_classify: add flow classify library Bernard Iremonger ` (3 subsequent siblings) 4 siblings, 5 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-17 20:26 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger
DPDK works with packets, but some network administration tools work based on flow information. This library is suggested to provide an API to classify received packets using flow patterns. Basically, the library consists of APIs to validate, add and delete flow rules and to query the stats for a given rule. The application should call the query API for received packet bursts.
The application should use the following sequence of API calls:
Call rte_flow_classifier_create() to create the classifier object.
Call rte_flow_classify_table_create() to add a table to the classifier.
Call rte_flow_classify_validate() to validate a flow pattern.
Call rte_flow_classify_table_entry_add() to add a flow rule to the table.
After a call to rte_eth_rx_burst() to receive a packet burst, call rte_flow_classifier_run() to classify the packets against the rules in the classifier.
Call rte_flow_classifier_query() to return data to the application.
The flow_classify sample application in this patchset uses the ACL table. The library header file has more comments on how the library works and the provided APIs. Packet-to-flow matching will cause a performance drop, which is why classification is done on demand by the rte_flow_classifier_run() API provided by this library.
The initial implementation is to provide counting of IPv4 five tuple packets for UDP, TCP and SCTP, but the library is planned to be as generic as possible.

The flow information provided by this library is not sufficient to implement full IPFIX features, but this is planned to be the initial step.

Flows are defined using rte_flow; measurements (actions) are also provided by rte_flow. To support more IPFIX measurements, the implementation may require extending rte_flow in addition to extending this library.

The library uses both flows and actions defined by rte_flow.h, so this library has a dependency on rte_flow.h.

This patchset also contains a set of unit tests for the Flow Classify library and a patch containing additional functions added to the packet burst generator code.

For further steps, this library may be expanded to benefit from hardware filters for better performance.

It would be beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionalities.

Changes in v8:
The library has been reworked so that it can be used with any of the tables supported by librte_table.
Four new APIs have been added to support this: rte_flow_classifier_create, rte_flow_classifier_free, rte_flow_classify_table_create and rte_flow_classifier_run.
rte_flow_classify_create has been replaced by rte_flow_classify_table_entry_add.
rte_flow_classify_destroy has been replaced by rte_flow_classify_table_entry_delete.
rte_flow_classify_query has been replaced by rte_flow_classifier_run.

Changes in v7:
Fix the rte_flow_classify_version.map file.
Fix checkpatch warnings.

Changes in v6:
Dropped two librte_table patches (patches 1 and 2 in the v5 patch set).
Revised the librte_flow_classify patch to use the librte_table APIs correctly.

Changes in v5:
Added tests for TCP and SCTP traffic to the unit test code.
Added a patch to the packet_burst_generator code to add functions for the TCP and SCTP protocols.
Changes in v4:
Replaced the GET_CB_FIELD macro with a get_cb_field function in the flow classify sample application to fix a checkpatch warning.
Fixed checkpatch warnings in test_flow_classify.c.

Changes in v3:
Patch 3 from the v2 patch set, "librte_ether: initialise IPv4 protocol mask for rte_flow", has been dropped.
The flow_classify sample application now uses an input file of IPv4 five tuple rules instead of hardcoded values.
A minor fix to the rte_flow_classify_create() function.

Changes in v2:
Patch 1, "librte_table: move structure to header file", has been dropped.
The code has been reworked to not access struct rte_table_acl directly.
An entry_size parameter has been added to the rte_flow_classify_create function.
The f_lookup function is now called instead of the rte_acl_classify function.
Patch 2, "librte_table: fix acl lookup function", has been added.

Changes in v1, since RFC v3:
Added the rte_flow_classify_validate API.
librte_table ACL is used for packet matching.
A table_acl parameter has been added to all of the APIs.
An error parameter has been added to all of the APIs.

Bernard Iremonger (3):
  examples/flow_classify: flow classify sample application
  test: add packet burst generator functions
  test: flow classify library unit tests

Ferruh Yigit (1):
  librte_flow_classify: add flow classify library

 MAINTAINERS | 7 +
 config/common_base | 6 +
 doc/api/doxy-api-index.md | 1 +
 doc/api/doxy-api.conf | 1 +
 examples/flow_classify/Makefile | 57 ++
 examples/flow_classify/flow_classify.c | 854 +++++++++++++++++++++
 examples/flow_classify/ipv4_rules_file.txt | 14 +
 lib/Makefile | 3 +
 lib/librte_eal/common/include/rte_log.h | 1 +
 lib/librte_flow_classify/Makefile | 51 ++
 lib/librte_flow_classify/rte_flow_classify.c | 735 ++++++++++++++++++
 lib/librte_flow_classify/rte_flow_classify.h | 321 ++++++++
 lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++
 lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++
 .../rte_flow_classify_version.map | 14 +
 mk/rte.app.mk | 1 +
 test/test/Makefile | 1 +
 test/test/packet_burst_generator.c | 191 +++
 test/test/packet_burst_generator.h | 22 +-
 test/test/test_flow_classify.c | 783 +++++++++++++++
 test/test/test_flow_classify.h | 234 ++++++
 21 files changed, 3915 insertions(+), 2 deletions(-)
 create mode 100644 examples/flow_classify/Makefile
 create mode 100644 examples/flow_classify/flow_classify.c
 create mode 100644 examples/flow_classify/ipv4_rules_file.txt
 create mode 100644 lib/librte_flow_classify/Makefile
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.c
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.h
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map
 create mode 100644 test/test/test_flow_classify.c
 create mode 100644 test/test/test_flow_classify.h
--
1.9.1

^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v9 0/4] flow classification library 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 " Bernard Iremonger @ 2017-10-22 13:32 ` Bernard Iremonger 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger ` (4 more replies) 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 1/4] librte_flow_classify: add flow classify library Bernard Iremonger ` (3 subsequent siblings) 4 siblings, 5 replies; 145+ messages in thread
From: Bernard Iremonger @ 2017-10-22 13:32 UTC (permalink / raw)
To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh
Cc: Bernard Iremonger

DPDK works with packets, but some network administration tools work with flow information. This library is proposed to provide an API to classify received packets using flow patterns.

Basically the library consists of APIs to create the classifier object, add a table to the classifier, add and delete flow rules in the table, and query the stats for a given rule.

The application should use the following sequence of APIs:
call rte_flow_classifier_create() to create the classifier object.
call rte_flow_classify_table_create() to add a table to the classifier.
call rte_flow_classify_table_entry_add() to add a flow rule to the table.
After a call to rte_eth_rx_burst() to receive a packet burst:
call rte_flow_classifier_query() to classify the packets against the rules in the classifier and to return data to the application.

The flow_classify sample application in this patchset uses the ACL table for packet matching. The flow classification library can support other tables, for example EM and LPM tables.

The library header file has more comments on how the library works and the provided APIs.

Matching packets to flow rules causes a performance drop; that is why classification is done on demand via the rte_flow_classifier_query() API provided by this library.
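Compared with v8, the run step is folded into the query call, so the per-burst path is a single call. Again a pseudocode sketch with abridged, assumed parameter lists; the header file in patch 1 of the set holds the authoritative prototypes:

```c
/* Pseudocode sketch of the v9 usage sequence; parameter lists are
 * abridged assumptions, not the real prototypes. */
struct rte_flow_classifier *cls;

/* One-time setup */
cls = rte_flow_classifier_create(/* classifier params */);
rte_flow_classify_table_create(cls /*, e.g. ACL table params */);
rte_flow_classify_table_entry_add(cls /*, pattern, actions, &key_found */);

/* Per received burst: classify and return stats in one call */
nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, MAX_PKT_BURST);
rte_flow_classifier_query(cls, pkts, nb_rx /*, rule, &stats */);
```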
The initial implementation is to provide counting of IPv4 five tuple packets for UDP, TCP and SCTP, but the library is planned to be as generic as possible.

The flow information provided by this library is not sufficient to implement full IPFIX features, but this is planned to be the initial step.

Flows are defined using rte_flow; measurements (actions) are also provided by rte_flow. To support more IPFIX measurements, the implementation may require extending rte_flow in addition to extending this library.

The library uses both flows and actions defined by rte_flow.h, so this library has a dependency on rte_flow.h.

This patch set also contains a set of unit tests for the Flow Classify library (patch 4) and additional functions added to the packet burst generator code (patch 3).

For further steps, this library may be expanded to benefit from hardware filters for better performance.

It would be beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionalities.

Changes in v9:
The library has been reworked following comments on the v8 patchset.
The validate API has been changed to an internal function and renamed.
The run API has been merged with the query API.
The default_entry code has been removed from the library.
A key_found parameter has been added to the rte_flow_classify_table_entry_add API.
Checks on the f_* functions have been added to the library.
The rte_flow_classify_table_entry structure has been made private.
The doxygen API output has been revised.
The flow_classify sample application has been revised for the latest APIs.
The flow_classify_autotest program has been revised for the latest APIs.

Changes in v8:
The library has been reworked so that it can be used with any of the tables supported by librte_table.
Four new APIs have been added to support this: rte_flow_classifier_create, rte_flow_classifier_free, rte_flow_classify_table_create and rte_flow_classifier_run.
rte_flow_classify_create has been replaced by rte_flow_classify_table_entry_add.
rte_flow_classify_destroy has been replaced by rte_flow_classify_table_entry_delete.
rte_flow_classify_query has been replaced by rte_flow_classifier_run.

Changes in v7:
Fix the rte_flow_classify_version.map file.
Fix checkpatch warnings.

Changes in v6:
Dropped two librte_table patches (patches 1 and 2 in the v5 patch set).
Revised the librte_flow_classify patch to use the librte_table APIs correctly.

Changes in v5:
Added tests for TCP and SCTP traffic to the unit test code.
Added a patch to the packet_burst_generator code to add functions for the TCP and SCTP protocols.

Changes in v4:
Replaced the GET_CB_FIELD macro with a get_cb_field function in the flow classify sample application to fix a checkpatch warning.
Fixed checkpatch warnings in test_flow_classify.c.

Changes in v3:
Patch 3 from the v2 patch set, "librte_ether: initialise IPv4 protocol mask for rte_flow", has been dropped.
The flow_classify sample application now uses an input file of IPv4 five tuple rules instead of hardcoded values.
A minor fix to the rte_flow_classify_create() function.

Changes in v2:
Patch 1, "librte_table: move structure to header file", has been dropped.
The code has been reworked to not access struct rte_table_acl directly.
An entry_size parameter has been added to the rte_flow_classify_create function.
The f_lookup function is now called instead of the rte_acl_classify function.
Patch 2, "librte_table: fix acl lookup function", has been added.

Changes in v1, since RFC v3:
Added the rte_flow_classify_validate API.
librte_table ACL is used for packet matching.
A table_acl parameter has been added to all of the APIs.
An error parameter has been added to all of the APIs.

Bernard Iremonger (3):
  examples/flow_classify: flow classify sample application
  test: add packet burst generator functions
  test: flow classify library unit tests

Ferruh Yigit (1):
  librte_flow_classify: add flow classify library

 MAINTAINERS | 7 +
 config/common_base | 6 +
 doc/api/doxy-api-index.md | 1 +
 doc/api/doxy-api.conf | 1 +
 examples/flow_classify/Makefile | 57 ++
 examples/flow_classify/flow_classify.c | 850 +++++++++++++++++++++
 examples/flow_classify/ipv4_rules_file.txt | 14 +
 lib/Makefile | 3 +
 lib/librte_eal/common/include/rte_log.h | 1 +
 lib/librte_flow_classify/Makefile | 51 ++
 lib/librte_flow_classify/rte_flow_classify.c | 685 +++++++++++++++++
 lib/librte_flow_classify/rte_flow_classify.h | 285 +++++++
 lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++
 lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++
 .../rte_flow_classify_version.map | 12 +
 mk/rte.app.mk | 1 +
 test/test/Makefile | 1 +
 test/test/packet_burst_generator.c | 191 +++++
 test/test/packet_burst_generator.h | 22 +-
 test/test/test_flow_classify.c | 673 ++++++++++++++++
 test/test/test_flow_classify.h | 234 ++++++
 21 files changed, 3713 insertions(+), 2 deletions(-)
 create mode 100644 examples/flow_classify/Makefile
 create mode 100644 examples/flow_classify/flow_classify.c
 create mode 100644 examples/flow_classify/ipv4_rules_file.txt
 create mode 100644 lib/librte_flow_classify/Makefile
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.c
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.h
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map
 create mode 100644 test/test/test_flow_classify.c
 create mode 100644 test/test/test_flow_classify.h
--
1.9.1

^ permalink raw reply
[flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v10 0/4] flow classification library 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 " Bernard Iremonger @ 2017-10-23 15:16 ` Bernard Iremonger 2017-10-23 20:59 ` Thomas Monjalon ` (5 more replies) 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow classify library Bernard Iremonger ` (3 subsequent siblings) 4 siblings, 6 replies; 145+ messages in thread
From: Bernard Iremonger @ 2017-10-23 15:16 UTC (permalink / raw)
To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh
Cc: Bernard Iremonger

DPDK works with packets, but some network administration tools work with flow information. This library is proposed to provide an API to classify received packets using flow patterns.

Basically the library consists of APIs to create the classifier object, add a table to the classifier, add and delete flow rules in the table, and query the stats for a given rule.

The application should use the following sequence of APIs:
call rte_flow_classifier_create() to create the classifier object.
call rte_flow_classify_table_create() to add a table to the classifier.
call rte_flow_classify_table_entry_add() to add a flow rule to the table.
After a call to rte_eth_rx_burst() to receive a packet burst:
call rte_flow_classifier_query() to classify the packets against the rules in the classifier and to return data to the application.

The flow_classify sample application in this patchset uses the ACL table for packet matching. The flow classification library can support other tables, for example Hash and LPM tables.

The library header file has more comments on how the library works and the provided APIs.

Matching packets to flow rules causes a performance drop; that is why classification is done on demand via the rte_flow_classifier_query() API provided by this library.
The initial implementation is to provide counting of IPv4 five tuple packets for UDP, TCP and SCTP, but the library is planned to be as generic as possible.

The flow information provided by this library is not sufficient to implement full IPFIX features, but this is planned to be the initial step.

Flows are defined using rte_flow; measurements (actions) are also provided by rte_flow. To support more IPFIX measurements, the implementation may require extending rte_flow in addition to extending this library.

The library uses both flows and actions defined by rte_flow.h, so this library has a dependency on rte_flow.h.

This patch set also contains a set of unit tests for the Flow Classify library (patch 4) and additional functions added to the packet burst generator code (patch 3).

For further steps, this library may be expanded to benefit from hardware filters for better performance.

It would be beneficial to shape this library to cover more use cases; please feel free to comment on possible other use cases and desired functionalities.

Changes in v10:
Rebase to latest master.
The code has been reworked following comments on the v9 patchset.
Fix a compile error on FreeBSD with clang.

Changes in v9:
The library has been reworked following comments on the v8 patchset.
The validate API has been changed to an internal function and renamed.
The run API has been merged with the query API.
The default_entry code has been removed from the library.
A key_found parameter has been added to the rte_flow_classify_table_entry_add API.
Checks on the f_* functions have been added to the library.
The rte_flow_classify_table_entry structure has been made private.
The doxygen API output has been revised.
The flow_classify sample application has been revised for the latest APIs.
The flow_classify_autotest program has been revised for the latest APIs.

Changes in v8:
The library has been reworked so that it can be used with any of the tables supported by librte_table.
Four new APIs have been added to support this: rte_flow_classifier_create, rte_flow_classifier_free, rte_flow_classify_table_create and rte_flow_classifier_run.
rte_flow_classify_create has been replaced by rte_flow_classify_table_entry_add.
rte_flow_classify_destroy has been replaced by rte_flow_classify_table_entry_delete.
rte_flow_classify_query has been replaced by rte_flow_classifier_run.

Changes in v7:
Fix the rte_flow_classify_version.map file.
Fix checkpatch warnings.

Changes in v6:
Dropped two librte_table patches (patches 1 and 2 in the v5 patch set).
Revised the librte_flow_classify patch to use the librte_table APIs correctly.

Changes in v5:
Added tests for TCP and SCTP traffic to the unit test code.
Added a patch to the packet_burst_generator code to add functions for the TCP and SCTP protocols.

Changes in v4:
Replaced the GET_CB_FIELD macro with a get_cb_field function in the flow classify sample application to fix a checkpatch warning.
Fixed checkpatch warnings in test_flow_classify.c.

Changes in v3:
Patch 3 from the v2 patch set, "librte_ether: initialise IPv4 protocol mask for rte_flow", has been dropped.
The flow_classify sample application now uses an input file of IPv4 five tuple rules instead of hardcoded values.
A minor fix to the rte_flow_classify_create() function.

Changes in v2:
Patch 1, "librte_table: move structure to header file", has been dropped.
The code has been reworked to not access struct rte_table_acl directly.
An entry_size parameter has been added to the rte_flow_classify_create function.
The f_lookup function is now called instead of the rte_acl_classify function.
Patch 2, "librte_table: fix acl lookup function", has been added.

Changes in v1, since RFC v3:
Added the rte_flow_classify_validate API.
librte_table ACL is used for packet matching.
A table_acl parameter has been added to all of the APIs.
An error parameter has been added to all of the APIs.

Bernard Iremonger (3):
  examples/flow_classify: flow classify sample application
  test: add packet burst generator functions
  test: flow classify library unit tests

Ferruh Yigit (1):
  librte_flow_classify: add flow classify library

 MAINTAINERS | 7 +
 config/common_base | 6 +
 doc/api/doxy-api-index.md | 1 +
 doc/api/doxy-api.conf | 1 +
 examples/flow_classify/Makefile | 57 ++
 examples/flow_classify/flow_classify.c | 849 +++++++++++++++++++++
 examples/flow_classify/ipv4_rules_file.txt | 14 +
 lib/Makefile | 2 +
 lib/librte_eal/common/include/rte_log.h | 1 +
 lib/librte_flow_classify/Makefile | 51 ++
 lib/librte_flow_classify/rte_flow_classify.c | 676 ++++++++++++++++
 lib/librte_flow_classify/rte_flow_classify.h | 281 +++++++
 lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++
 lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++
 .../rte_flow_classify_version.map | 12 +
 mk/rte.app.mk | 1 +
 test/test/Makefile | 1 +
 test/test/packet_burst_generator.c | 191 +++++
 test/test/packet_burst_generator.h | 22 +-
 test/test/test_flow_classify.c | 672 ++++++++++++++++
 test/test/test_flow_classify.h | 234 ++++++
 21 files changed, 3697 insertions(+), 2 deletions(-)
 create mode 100644 examples/flow_classify/Makefile
 create mode 100644 examples/flow_classify/flow_classify.c
 create mode 100644 examples/flow_classify/ipv4_rules_file.txt
 create mode 100644 lib/librte_flow_classify/Makefile
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.c
 create mode 100644 lib/librte_flow_classify/rte_flow_classify.h
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h
 create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map
 create mode 100644 test/test/test_flow_classify.c
 create mode 100644 test/test/test_flow_classify.h
--
1.9.1

^ permalink raw reply
[flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 0/4] flow classification library 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger @ 2017-10-23 20:59 ` Thomas Monjalon 2017-10-24 8:40 ` Iremonger, Bernard 2017-10-24 17:27 ` [dpdk-dev] [PATCH v11 " Bernard Iremonger ` (4 subsequent siblings) 5 siblings, 1 reply; 145+ messages in thread From: Thomas Monjalon @ 2017-10-23 20:59 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh, john.mcnamara 23/10/2017 17:16, Bernard Iremonger: > The initial implementation is to provide counting of IPv4 five tuple packets for UDP, TCP and SCTP, > but the library is planned to be as generic as possible. > > The flow information provided by this library is missing to implement full IPFIX features, > but this is planned to be the initial step. > > Flows are defined using rte_flow, also measurements (actions) are provided by rte_flow. > To support more IPFIX measurements, the implementation may require extending rte_flow in addition to > extending this library. > > The library uses both flows and actions defined by rte_flow.h so this library has a dependency on > rte_flow.h > > This patch set also contains a set of unit tests for the Flow Classify library, patch(4) and > a patch(3) containing additional functions added to the packet burst generator code. > > For further steps, this library may be expanded to benefit from hardware filters for better performance. > > It will be more beneficial to shape this library to cover more use cases, > please feel free to comment on possible other use cases and desired functionalities. I had some feedbacks that this library won't be ready for 17.11. So I did not review it. I suppose you are OK to wait one more release and call for more reviewers? ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 0/4] flow classification library 2017-10-23 20:59 ` Thomas Monjalon @ 2017-10-24 8:40 ` Iremonger, Bernard 2017-10-24 9:23 ` Mcnamara, John 0 siblings, 1 reply; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-24 8:40 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder, Mcnamara, John, Iremonger, Bernard Hi Thomas, > -----Original Message----- > From: Thomas Monjalon [mailto:thomas@monjalon.net] > Sent: Monday, October 23, 2017 9:59 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com> > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com>; Mcnamara, John > <john.mcnamara@intel.com> > Subject: Re: [dpdk-dev] [PATCH v10 0/4] flow classification library > > 23/10/2017 17:16, Bernard Iremonger: > > The initial implementation is to provide counting of IPv4 five tuple > > packets for UDP, TCP and SCTP, but the library is planned to be as generic > as possible. > > > > The flow information provided by this library is missing to implement > > full IPFIX features, but this is planned to be the initial step. > > > > Flows are defined using rte_flow, also measurements (actions) are > provided by rte_flow. > > To support more IPFIX measurements, the implementation may require > > extending rte_flow in addition to extending this library. > > > > The library uses both flows and actions defined by rte_flow.h so this > > library has a dependency on rte_flow.h > > > > This patch set also contains a set of unit tests for the Flow Classify > > library, patch(4) and a patch(3) containing additional functions added to the > packet burst generator code. > > > > For further steps, this library may be expanded to benefit from hardware > filters for better performance. 
> > > > It will be more beneficial to shape this library to cover more use > > cases, please feel free to comment on possible other use cases and desired > functionalities. > > I had some feedbacks that this library won't be ready for 17.11. > So I did not review it. > > I suppose you are OK to wait one more release and call for more reviewers? This library was not ready for 17.11.RC1 having received some comments just before the RC1 deadline. It was then targeted for RC2 and we have pulled out all the stops to get it ready for RC2. It is now at v10 of the patch set, there have been no review comments from the community (apart from Intel), since RFC v3. I think that there has been ample time for the community to review this patch set, calling for more reviewers at this point is not helpful. The API's of the library are marked as experimental, so there will be no issues with ABI breakage, if there are requests for changes later. I am not OK to wait one more release, I believe we have followed the process correctly. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 0/4] flow classification library 2017-10-24 8:40 ` Iremonger, Bernard @ 2017-10-24 9:23 ` Mcnamara, John 2017-10-24 9:38 ` Thomas Monjalon 0 siblings, 1 reply; 145+ messages in thread From: Mcnamara, John @ 2017-10-24 9:23 UTC (permalink / raw) To: Iremonger, Bernard, Thomas Monjalon Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder > -----Original Message----- > From: Iremonger, Bernard > Sent: Tuesday, October 24, 2017 9:41 AM > To: Thomas Monjalon <thomas@monjalon.net> > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com>; Mcnamara, John > <john.mcnamara@intel.com>; Iremonger, Bernard > <bernard.iremonger@intel.com> > Subject: RE: [dpdk-dev] [PATCH v10 0/4] flow classification library > > Hi Thomas, > > > -----Original Message----- > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > Sent: Monday, October 23, 2017 9:59 PM > > To: Iremonger, Bernard <bernard.iremonger@intel.com> > > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > > Jasvinder <jasvinder.singh@intel.com>; Mcnamara, John > > <john.mcnamara@intel.com> > > Subject: Re: [dpdk-dev] [PATCH v10 0/4] flow classification library > > > I suppose you are OK to wait one more release and call for more > reviewers? > > This library was not ready for 17.11.RC1 having received some comments > just before the RC1 deadline. > It was then targeted for RC2 and we have pulled out all the stops to get > it ready for RC2. > > It is now at v10 of the patch set, there have been no review comments from > the community (apart from Intel), since RFC v3. 
> > I think that there has been ample time for the community to review this > patch set, calling for more reviewers at this point is not helpful. > > The API's of the library are marked as experimental, so there will be no > issues with ABI breakage, if there are requests for changes later. > > I am not OK to wait one more release, I believe we have followed the > process correctly. +1 for inclusion in RC2. John ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 0/4] flow classification library 2017-10-24 9:23 ` Mcnamara, John @ 2017-10-24 9:38 ` Thomas Monjalon 2017-10-24 9:53 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 9:38 UTC (permalink / raw) To: Mcnamara, John, Iremonger, Bernard Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder 24/10/2017 11:23, Mcnamara, John: > From: Iremonger, Bernard > > > > Hi Thomas, > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > > > > I suppose you are OK to wait one more release and call for more > > reviewers? > > > > This library was not ready for 17.11.RC1 having received some comments > > just before the RC1 deadline. > > It was then targeted for RC2 and we have pulled out all the stops to get > > it ready for RC2. > > > > It is now at v10 of the patch set, there have been no review comments from > > the community (apart from Intel), since RFC v3. > > > > I think that there has been ample time for the community to review this > > patch set, calling for more reviewers at this point is not helpful. I have to review some basic things in your series. I did not take time to review it because I thought John told me it would not make 17.11. > > The API's of the library are marked as experimental, so there will be no > > issues with ABI breakage, if there are requests for changes later. It is not marked EXPERIMENTAL in the MAINTAINERS file. > > I am not OK to wait one more release, I believe we have followed the > > process correctly. Yes, you followed the process. > +1 for inclusion in RC2. It is not common to add a new library in RC2. When doing the RC1 announce, I did not mention this library as a possible inclusion exception in RC2, and I had no feedback: http://dpdk.org/ml/archives/announce/2017-October/000153.html I was really sure you were not targetting 17.11. So I did not do the last pass review. Probably my mistake. 
We are having a hard time with 17.11 release, so I would prefer avoiding adding one more new library at this stage. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 0/4] flow classification library 2017-10-24 9:38 ` Thomas Monjalon @ 2017-10-24 9:53 ` Iremonger, Bernard 2017-10-24 10:25 ` Thomas Monjalon 0 siblings, 1 reply; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-24 9:53 UTC (permalink / raw) To: Thomas Monjalon, Mcnamara, John Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder, Iremonger, Bernard Hi Thomas, > -----Original Message----- > From: Thomas Monjalon [mailto:thomas@monjalon.net] > Sent: Tuesday, October 24, 2017 10:39 AM > To: Mcnamara, John <john.mcnamara@intel.com>; Iremonger, Bernard > <bernard.iremonger@intel.com> > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com> > Subject: Re: [dpdk-dev] [PATCH v10 0/4] flow classification library > > 24/10/2017 11:23, Mcnamara, John: > > From: Iremonger, Bernard > > > > > > Hi Thomas, > > > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > > > > > > I suppose you are OK to wait one more release and call for more > > > reviewers? > > > > > > This library was not ready for 17.11.RC1 having received some > > > comments just before the RC1 deadline. > > > It was then targeted for RC2 and we have pulled out all the stops to > > > get it ready for RC2. > > > > > > It is now at v10 of the patch set, there have been no review > > > comments from the community (apart from Intel), since RFC v3. > > > > > > I think that there has been ample time for the community to review > > > this patch set, calling for more reviewers at this point is not helpful. > > I have to review some basic things in your series. > I did not take time to review it because I thought John told me it would not > make 17.11. 
> > > > The API's of the library are marked as experimental, so there will > > > be no issues with ABI breakage, if there are requests for changes later. > > It is not marked EXPERIMENTAL in the MAINTAINERS file. My mistake, it is marked as experimental in rte_flow_classify_version.map I can send a v11 patch set if needed. > > > I am not OK to wait one more release, I believe we have followed the > > > process correctly. > > Yes, you followed the process. > > > +1 for inclusion in RC2. > > It is not common to add a new library in RC2. > > When doing the RC1 announce, I did not mention this library as a possible > inclusion exception in RC2, and I had no feedback: > http://dpdk.org/ml/archives/announce/2017-October/000153.html I probably should have replied to this email. > I was really sure you were not targetting 17.11. We have always been targeting 17.11 > So I did not do the last pass review. Probably my mistake. > > We are having a hard time with 17.11 release, so I would prefer avoiding > adding one more new library at this stage. This is a new library and should not impact anyone. I believe we have followed the process, so I think it should not be deferred to 18.02. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 0/4] flow classification library 2017-10-24 9:53 ` Iremonger, Bernard @ 2017-10-24 10:25 ` Thomas Monjalon 0 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 10:25 UTC (permalink / raw) To: Iremonger, Bernard Cc: Mcnamara, John, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder 24/10/2017 11:53, Iremonger, Bernard: > Hi Thomas, > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > 24/10/2017 11:23, Mcnamara, John: > > > From: Iremonger, Bernard > > > > > > > > Hi Thomas, > > > > > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > > > > > > > > I suppose you are OK to wait one more release and call for more > > > > reviewers? > > > > > > > > This library was not ready for 17.11.RC1 having received some > > > > comments just before the RC1 deadline. > > > > It was then targeted for RC2 and we have pulled out all the stops to > > > > get it ready for RC2. > > > > > > > > It is now at v10 of the patch set, there have been no review > > > > comments from the community (apart from Intel), since RFC v3. > > > > > > > > I think that there has been ample time for the community to review > > > > this patch set, calling for more reviewers at this point is not helpful. > > > > I have to review some basic things in your series. > > I did not take time to review it because I thought John told me it would not > > make 17.11. > > > > > > The API's of the library are marked as experimental, so there will > > > > be no issues with ABI breakage, if there are requests for changes later. > > > > It is not marked EXPERIMENTAL in the MAINTAINERS file. > > My mistake, it is marked as experimental in rte_flow_classify_version.map > I can send a v11 patch set if needed. > > > > > I am not OK to wait one more release, I believe we have followed the > > > > process correctly. > > > > Yes, you followed the process. > > > > > +1 for inclusion in RC2. 
> > > > It is not common to add a new library in RC2. > > > > When doing the RC1 announce, I did not mention this library as a possible > > inclusion exception in RC2, and I had no feedback: > > http://dpdk.org/ml/archives/announce/2017-October/000153.html > > I probably should have replied to this email. > > > I was really sure you were not targetting 17.11. > > We have always been targeting 17.11 > > > So I did not do the last pass review. Probably my mistake. > > > > We are having a hard time with 17.11 release, so I would prefer avoiding > > adding one more new library at this stage. > > This is a new library and should not impact anyone. > > I believe we have followed the process, so I think it should not be deferred to 18.02. OK, let's make a deal: If you can address my comments in v11 and if there is no compilation issue, then I will take it in RC2. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v11 0/4] flow classification library 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger 2017-10-23 20:59 ` Thomas Monjalon @ 2017-10-24 17:27 ` Bernard Iremonger 2017-10-24 20:33 ` Thomas Monjalon 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library Bernard Iremonger ` (3 subsequent siblings) 5 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-10-24 17:27 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger DPDK works with packets, but some network administration tools work with flow information. This library is proposed to provide an API to classify received packets using flow patterns. Basically, the library consists of APIs to create the classifier object, add a table to the classifier, add and delete flow rules in the table, and query the stats for a given rule. The application should use the following sequence of APIs: call rte_flow_classifier_create() to create the classifier object; call rte_flow_classify_table_create() to add a table to the classifier; call rte_flow_classify_table_entry_add() to add a flow rule to the table; then, after a call to rte_eth_rx_burst() to receive a packet burst, call rte_flow_classifier_query() to classify the packets in the burst against the rules in the classifier and return data to the application. The flow_classify sample application in this patch set uses the ACL table for packet matching. The flow classification library can support other tables, for example Hash and LPM tables. The library header file has more comments on how the library works and the provided APIs. Matching packets against flow rules causes a performance drop, which is why classification is done on demand by the rte_flow_classifier_query() API provided by this library. 
The initial implementation provides counting of IPv4 five tuple packets for UDP, TCP and SCTP, but the library is planned to be as generic as possible. The flow information provided by this library is not sufficient to implement full IPFIX features, but this is planned as the initial step. Flows are defined using rte_flow, and measurements (actions) are also provided by rte_flow. To support more IPFIX measurements, the implementation may require extending rte_flow in addition to extending this library. The library uses both flows and actions defined by rte_flow.h, so this library has a dependency on rte_flow.h. This patch set also contains a set of unit tests for the Flow Classify library, patch (4), and a patch (3) containing additional functions added to the packet burst generator code. As a further step, this library may be expanded to benefit from hardware filters for better performance. It will be more beneficial to shape this library to cover more use cases, so please feel free to comment on possible other use cases and desired functionalities. Changes in v11: Rebased to latest master. Updated the MAINTAINERS file. Updated the release notes. Revised the library code to use dynamic logging. Removed RTE_LOGTYPE_CLASSIFY from rte_log.h. Updated LDLIBS in the Makefile. Fixed some compile warnings in the flow_classify sample app. Changes in v10: Rebased to latest master. The code has been reworked following comments on the v9 patch set. Fixed a compile error on FreeBSD with clang. Changes in v9: The library has been reworked following comments on the v8 patch set. The validate API has been changed to an internal function and renamed. The run API has been merged with the query API. The default_entry code has been removed from the library. A key_found parameter has been added to the rte_flow_classify_table_entry_add API. Checks on the f_* functions have been added to the library. The rte_flow_classify_table_entry structure has been made private. The doxygen API output has been revised. 
The flow_classify sample application has been revised for the latest APIs. The flow_classify_autotest program has been revised for the latest APIs. Changes in v8: The library has been reworked so that it can be used with any of the tables supported by librte_table. Four new APIs have been added to support this: rte_flow_classifier_create, rte_flow_classifier_free, rte_flow_classify_table_create and rte_flow_classify_run. rte_flow_classify_create has been replaced by rte_flow_classify_table_entry_add. rte_flow_classify_destroy has been replaced by rte_flow_classify_table_entry_delete. rte_flow_classify_query has been replaced by rte_flow_classifier_run. Changes in v7: Fixed the rte_flow_classify_version.map file. Fixed checkpatch warnings. Changes in v6: Dropped two librte_table patches (patches 1 and 2 in the v5 patch set). Revised the librte_flow_classify patch to use the librte_table APIs correctly. Changes in v5: Added tests for TCP and SCTP traffic to the unit test code. Added a patch to the packet_burst_generator code to add functions for the TCP and SCTP protocols. Changes in v4: Replaced the GET_CB_FIELD macro with a get_cb_field function in the flow classify sample application to fix a checkpatch warning. Fixed checkpatch warnings in test_flow_classify.c. Changes in v3: Patch 3 from the v2 patch set has been dropped, "librte_ether: initialise IPv4 protocol mask for rte_flow". The flow_classify sample application now uses an input file of IPv4 five tuple rules instead of hardcoded values. A minor fix to the rte_flow_classify_create() function. Changes in v2: Patch 1, librte_table: move structure to header file, has been dropped. The code has been reworked to not access struct rte_table_acl directly. An entry_size parameter has been added to the rte_flow_classify_create function. The f_lookup function is now called instead of the rte_acl_classify function. Patch 2, librte_table: fix acl lookup function, has been added. Changes in v1, since RFC v3: Added the rte_flow_classify_validate API. 
librte_table ACL is used for packet matching. A table_acl parameter has been added to all of the APIs. An error parameter has been added to all of the APIs. Bernard Iremonger (3): examples/flow_classify: flow classify sample application test: add packet burst generator functions test: flow classify library unit tests Ferruh Yigit (1): flow_classify: add flow classify library MAINTAINERS | 9 +- config/common_base | 6 + doc/api/doxy-api-index.md | 3 +- doc/api/doxy-api.conf | 1 + doc/guides/rel_notes/release_17_11.rst | 5 + examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 848 +++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + lib/Makefile | 2 + lib/librte_flow_classify/Makefile | 53 ++ lib/librte_flow_classify/rte_flow_classify.c | 690 +++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 287 +++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 ++ .../rte_flow_classify_version.map | 13 + mk/rte.app.mk | 1 + test/test/Makefile | 1 + test/test/packet_burst_generator.c | 191 +++++ test/test/packet_burst_generator.h | 22 +- test/test/test_flow_classify.c | 672 ++++++++++++++++ test/test/test_flow_classify.h | 234 ++++++ 21 files changed, 3725 insertions(+), 4 deletions(-) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map create mode 100644 test/test/test_flow_classify.c create mode 100644 
test/test/test_flow_classify.h -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 0/4] flow classification library 2017-10-24 17:27 ` [dpdk-dev] [PATCH v11 " Bernard Iremonger @ 2017-10-24 20:33 ` Thomas Monjalon 2017-10-25 8:47 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 20:33 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh 24/10/2017 19:27, Bernard Iremonger: > Bernard Iremonger (3): > examples/flow_classify: flow classify sample application > test: add packet burst generator functions > test: flow classify library unit tests > > Ferruh Yigit (1): > flow_classify: add flow classify library The deal was " If you can address my comments in v11 and if there is no compilation issue, then I will take it in RC2. " There are some compilation issues and other details to fix. But I fixed all of them myself. Applied ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 0/4] flow classification library 2017-10-24 20:33 ` Thomas Monjalon @ 2017-10-25 8:47 ` Iremonger, Bernard 2017-10-25 8:56 ` Thomas Monjalon 0 siblings, 1 reply; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-25 8:47 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder, Iremonger, Bernard Hi Thomas, > -----Original Message----- > From: Thomas Monjalon [mailto:thomas@monjalon.net] > Sent: Tuesday, October 24, 2017 9:34 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com> > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com> > Subject: Re: [dpdk-dev] [PATCH v11 0/4] flow classification library > > 24/10/2017 19:27, Bernard Iremonger: > > Bernard Iremonger (3): > > examples/flow_classify: flow classify sample application > > test: add packet burst generator functions > > test: flow classify library unit tests > > > > Ferruh Yigit (1): > > flow_classify: add flow classify library > > The deal was > " > If you can address my comments in v11 and if there is no compilation issue, > then I will take it in RC2. > " > There are some compilation issues and other details to fix. > But I fixed all of them myself. > > Applied Thanks for your help with this patch set. I thought I had everything covered with the v11. Should I reply to your previous emails ? Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 0/4] flow classification library 2017-10-25 8:47 ` Iremonger, Bernard @ 2017-10-25 8:56 ` Thomas Monjalon 0 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-25 8:56 UTC (permalink / raw) To: Iremonger, Bernard Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder 25/10/2017 10:47, Iremonger, Bernard: > Hi Thomas, > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > 24/10/2017 19:27, Bernard Iremonger: > > > Bernard Iremonger (3): > > > examples/flow_classify: flow classify sample application > > > test: add packet burst generator functions > > > test: flow classify library unit tests > > > > > > Ferruh Yigit (1): > > > flow_classify: add flow classify library > > > > The deal was > > " > > If you can address my comments in v11 and if there is no compilation issue, > > then I will take it in RC2. > > " > > There are some compilation issues and other details to fix. > > But I fixed all of them myself. > > > > Applied > > Thanks for your help with this patch set. > I thought I had everything covered with the v11. > Should I reply to your previous emails ? No need to reply to the comments, it's OK. You know, everybody thinks all is covered, until we discover some issues. It takes some time. That's why it is not reasonable to look at new features after RC1. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger 2017-10-23 20:59 ` Thomas Monjalon 2017-10-24 17:27 ` [dpdk-dev] [PATCH v11 " Bernard Iremonger @ 2017-10-24 17:28 ` Bernard Iremonger 2017-10-24 19:39 ` Thomas Monjalon ` (5 more replies) 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger ` (2 subsequent siblings) 5 siblings, 6 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-24 17:28 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following APIs are implemented in the librte_flow_classify library: rte_flow_classifier_create rte_flow_classifier_free rte_flow_classifier_query rte_flow_classify_table_create rte_flow_classify_table_entry_add rte_flow_classify_table_entry_delete The following librte_table APIs are used: f_create to create a table. f_add to add a rule to the table. f_del to delete a rule from the table. f_free to free a table. f_lookup to match packets with the rules. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. Updated the release notes. Updated the MAINTAINERS file. Added library dependencies to LDLIBS in the Makefile. 
Using dynamic logging Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> Acked-by: Jasvinder Singh <jasvinder.singh@intel.com> --- MAINTAINERS | 9 +- config/common_base | 6 + doc/api/doxy-api-index.md | 3 +- doc/api/doxy-api.conf | 1 + doc/guides/rel_notes/release_17_11.rst | 5 + lib/Makefile | 2 + lib/librte_flow_classify/Makefile | 53 ++ lib/librte_flow_classify/rte_flow_classify.c | 690 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 287 +++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 ++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 13 + mk/rte.app.mk | 1 + 13 files changed, 1688 insertions(+), 2 deletions(-) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/MAINTAINERS b/MAINTAINERS index 1f7c745..4eb13d1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -707,6 +707,14 @@ M: Mark Kavanagh <mark.b.kavanagh@intel.com> F: lib/librte_gso/ F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst +Flow Classify - EXPERIMENTAL +M: Bernard Iremonger <bernard.iremonger@intel.com> +F: lib/librte_flow_classify/ +F: test/test/test_flow_classify* +F: examples/flow_classify/ +F: doc/guides/sample_app_ug/flow_classify.rst +F: doc/guides/prog_guide/flow_classify_lib.rst + Distributor M: Bruce Richardson <bruce.richardson@intel.com> M: David Hunt <david.hunt@intel.com> @@ -740,7 +748,6 @@ F: doc/guides/prog_guide/pdump_lib.rst F: app/pdump/ F: doc/guides/tools/pdump.rst - Packet Framework ---------------- M: Cristian Dumitrescu <cristian.dumitrescu@intel.com> 
diff --git a/config/common_base b/config/common_base index d9471e8..e1079aa 100644 --- a/config/common_base +++ b/config/common_base @@ -707,6 +707,12 @@ CONFIG_RTE_LIBRTE_GSO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 97ce416..13bd411 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -127,7 +127,8 @@ The public API headers are grouped by topics: [distributor] (@ref rte_distributor.h), [EFD] (@ref rte_efd.h), [ACL] (@ref rte_acl.h), - [member] (@ref rte_member.h) + [member] (@ref rte_member.h), + [flow classify] (@ref rte_flow_classify.h), - **containers**: [mbuf] (@ref rte_mbuf.h), diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 9e9fa56..9edb6fd 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf @@ -48,6 +48,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_gso \ lib/librte_hash \ diff --git a/doc/guides/rel_notes/release_17_11.rst b/doc/guides/rel_notes/release_17_11.rst index 722d5b0..4b3c3a2 100644 --- a/doc/guides/rel_notes/release_17_11.rst +++ b/doc/guides/rel_notes/release_17_11.rst @@ -165,6 +165,11 @@ New Features checksums, and doesn't update checksums for output packets. Additionally, the GSO library doesn't process IP fragmented packets. +* **Added the Flow Classification Library.** + + Added the Flow Classification library, it provides an API for DPDK + applications to classify an input packet by matching it against a set of flow + rules. It uses the librte_table API to manage the flow rules. 
Resolved Issues --------------- diff --git a/lib/Makefile b/lib/Makefile index 527b95b..6e45700 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -83,6 +83,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_net librte_table librte_acl DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..ea792f5 --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,53 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +LDLIBS += -lrte_eal -lrte_ethdev -lrte_net -lrte_table -lrte_acl + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..f4a95d4 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,690 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +int librte_flow_classify_logtype; + +static struct rte_eth_ntuple_filter ntuple_filter; +static uint32_t unique_id = 1; + + +struct rte_flow_classify_table_entry { + /* meta-data for classify rule */ + uint32_t rule_id; +}; + +struct rte_table { + /* Input parameters */ + struct rte_table_ops ops; + uint32_t entry_size; + enum rte_flow_classify_table_type type; + + /* Handle to the low-level table object */ + void *h_table; +}; + +#define RTE_FLOW_CLASSIFIER_MAX_NAME_SZ 256 + +struct rte_flow_classifier { + /* Input parameters */ + char name[RTE_FLOW_CLASSIFIER_MAX_NAME_SZ]; + int socket_id; + enum rte_flow_classify_table_type type; + + /* Internal tables */ + struct rte_table tables[RTE_FLOW_CLASSIFY_TABLE_MAX]; + uint32_t num_tables; + uint16_t nb_pkts; + struct rte_flow_classify_table_entry + *entries[RTE_PORT_IN_BURST_SIZE_MAX]; +} __rte_cache_aligned; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct acl_keys { + struct rte_table_acl_rule_add_params key_add; /* add key */ + struct rte_table_acl_rule_delete_params key_del; /* delete key */ +}; + +struct classify_rules { + enum rte_flow_classify_rule_type type; + union { + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; + } u; +}; + +struct rte_flow_classify_rule { + uint32_t id; /* unique ID of classify rule */ + struct rte_flow_action action; /* action when match found */ + struct classify_rules rules; /* union of rules */ + union { + struct acl_keys key; + } u; + int key_found; /* rule key found in table */ + void *entry; /* pointer to buffer to hold rule meta data */ + void *entry_ptr; /* handle to the table entry for rule meta data */ +}; + +static int +flow_classify_parse_flow( + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const 
struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + free(items); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_acl_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + printf("%s: 0x%02hhx/0x%hhx ", __func__, + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 
0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_acl_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("%s: 0x%02hhx/0x%hhx ", __func__, + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static int +rte_flow_classifier_check_params(struct rte_flow_classifier_params *params) +{ + if (params == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: Incorrect value for parameter params\n", __func__); + return -EINVAL; + } + + /* name */ + if (params->name == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: Incorrect value for parameter name\n", __func__); + return -EINVAL; + } + + /* socket */ + if ((params->socket_id < 0) || + (params->socket_id >= RTE_MAX_NUMA_NODES)) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: Incorrect value for parameter socket_id\n", + __func__); + return -EINVAL; + } + + return 0; +} + +struct rte_flow_classifier * +rte_flow_classifier_create(struct rte_flow_classifier_params *params) +{ + struct rte_flow_classifier *cls; + int ret; + + /* Check input parameters */ + ret = 
rte_flow_classifier_check_params(params); + if (ret != 0) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: flow classifier params check failed (%d)\n", + __func__, ret); + return NULL; + } + + /* Allocate memory for the flow classifier */ + cls = rte_zmalloc_socket("FLOW_CLASSIFIER", + sizeof(struct rte_flow_classifier), + RTE_CACHE_LINE_SIZE, params->socket_id); + + if (cls == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: flow classifier memory allocation failed\n", + __func__); + return NULL; + } + + /* Save input parameters */ + snprintf(cls->name, RTE_FLOW_CLASSIFIER_MAX_NAME_SZ, "%s", + params->name); + cls->socket_id = params->socket_id; + cls->type = params->type; + + /* Initialize flow classifier internal data structure */ + cls->num_tables = 0; + + return cls; +} + +static void +rte_flow_classify_table_free(struct rte_table *table) +{ + if (table->ops.f_free != NULL) + table->ops.f_free(table->h_table); +} + +int +rte_flow_classifier_free(struct rte_flow_classifier *cls) +{ + uint32_t i; + + /* Check input parameters */ + if (cls == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: rte_flow_classifier parameter is NULL\n", + __func__); + return -EINVAL; + } + + /* Free tables */ + for (i = 0; i < cls->num_tables; i++) { + struct rte_table *table = &cls->tables[i]; + + rte_flow_classify_table_free(table); + } + + /* Free flow classifier memory */ + rte_free(cls); + + return 0; +} + +static int +rte_table_check_params(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id) +{ + if (cls == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: flow classifier parameter is NULL\n", + __func__); + return -EINVAL; + } + if (params == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, "%s: params parameter is NULL\n", + __func__); + return -EINVAL; + } + if (table_id == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, "%s: table_id parameter is NULL\n", + __func__); + return -EINVAL; + } + + /* ops */ + if (params->ops == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, "%s: 
params->ops is NULL\n", + __func__); + return -EINVAL; + } + + if (params->ops->f_create == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: f_create function pointer is NULL\n", __func__); + return -EINVAL; + } + + if (params->ops->f_lookup == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: f_lookup function pointer is NULL\n", __func__); + return -EINVAL; + } + + /* De we have room for one more table? */ + if (cls->num_tables == RTE_FLOW_CLASSIFY_TABLE_MAX) { + RTE_FLOW_CLASSIFY_LOG(ERR, + "%s: Incorrect value for num_tables parameter\n", + __func__); + return -EINVAL; + } + + return 0; +} + +int +rte_flow_classify_table_create(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id) +{ + struct rte_table *table; + void *h_table; + uint32_t entry_size, id; + int ret; + + /* Check input arguments */ + ret = rte_table_check_params(cls, params, table_id); + if (ret != 0) + return ret; + + id = cls->num_tables; + table = &cls->tables[id]; + + /* calculate table entry size */ + entry_size = sizeof(struct rte_flow_classify_table_entry); + + /* Create the table */ + h_table = params->ops->f_create(params->arg_create, cls->socket_id, + entry_size); + if (h_table == NULL) { + RTE_FLOW_CLASSIFY_LOG(ERR, "%s: Table creation failed\n", + __func__); + return -EINVAL; + } + + /* Commit current table to the classifier */ + cls->num_tables++; + *table_id = id; + + /* Save input parameters */ + memcpy(&table->ops, params->ops, sizeof(struct rte_table_ops)); + + /* Initialize table internal data structure */ + table->entry_size = entry_size; + table->h_table = h_table; + + return 0; +} + +static struct rte_flow_classify_rule * +allocate_acl_ipv4_5tuple_rule(void) +{ + struct rte_flow_classify_rule *rule; + + rule = malloc(sizeof(struct rte_flow_classify_rule)); + if (!rule) + return rule; + + memset(rule, 0, sizeof(struct rte_flow_classify_rule)); + rule->id = unique_id++; + rule->rules.type = RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE; + + 
memcpy(&rule->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + /* key add values */ + rule->u.key.key_add.priority = ntuple_filter.priority; + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + rule->rules.u.ipv4_5tuple.proto = ntuple_filter.proto; + rule->rules.u.ipv4_5tuple.proto_mask = ntuple_filter.proto_mask; + + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + rule->rules.u.ipv4_5tuple.src_ip_mask = ntuple_filter.src_ip_mask; + rule->rules.u.ipv4_5tuple.src_ip = ntuple_filter.src_ip; + + rule->u.key.key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + rule->u.key.key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + rule->rules.u.ipv4_5tuple.dst_ip_mask = ntuple_filter.dst_ip_mask; + rule->rules.u.ipv4_5tuple.dst_ip = ntuple_filter.dst_ip; + + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + rule->rules.u.ipv4_5tuple.src_port_mask = ntuple_filter.src_port_mask; + rule->rules.u.ipv4_5tuple.src_port = ntuple_filter.src_port; + + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + rule->rules.u.ipv4_5tuple.dst_port_mask = ntuple_filter.dst_port_mask; + rule->rules.u.ipv4_5tuple.dst_port = ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_acl_ipv4_key_add(&rule->u.key.key_add); +#endif + + /* key delete values */ + memcpy(&rule->u.key.key_del.field_value[PROTO_FIELD_IPV4], + &rule->u.key.key_add.field_value[PROTO_FIELD_IPV4], + 
NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_acl_ipv4_key_delete(&rule->u.key.key_del); +#endif + return rule; +} + +struct rte_flow_classify_rule * +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, + uint32_t table_id, + int *key_found, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_classify_rule *rule; + struct rte_flow_classify_table_entry *table_entry; + int ret; + + if (!error) + return NULL; + + if (!cls) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL classifier."); + return NULL; + } + + if (table_id >= cls->num_tables) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid table_id."); + return NULL; + } + + if (key_found == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL key_found."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = flow_classify_parse_flow(attr, pattern, actions, error); + if (ret < 0) + return NULL; + + switch (cls->type) { + case RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL: + rule = allocate_acl_ipv4_5tuple_rule(); + if (!rule) + return NULL; + break; + default: + return NULL; + } + + rule->entry = malloc(sizeof(struct rte_flow_classify_table_entry)); + if (!rule->entry) { + free(rule); + return NULL; + } + + table_entry = rule->entry; + table_entry->rule_id = rule->id; + + if (cls->tables[table_id].ops.f_add != NULL) { + ret = 
cls->tables[table_id].ops.f_add( + cls->tables[table_id].h_table, + &rule->u.key.key_add, + rule->entry, + &rule->key_found, + &rule->entry_ptr); + if (ret) { + free(rule->entry); + free(rule); + return NULL; + } + *key_found = rule->key_found; + } + return rule; +} + +int +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_flow_classify_rule *rule) +{ + int ret = -EINVAL; + + if (!cls || !rule || table_id >= cls->num_tables) + return ret; + + if (cls->tables[table_id].ops.f_delete != NULL) + ret = cls->tables[table_id].ops.f_delete( + cls->tables[table_id].h_table, + &rule->u.key.key_del, + &rule->key_found, + &rule->entry); + + return ret; +} + +static int +flow_classifier_lookup(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts) +{ + int ret = -EINVAL; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = cls->tables[table_id].ops.f_lookup( + cls->tables[table_id].h_table, + pkts, pkts_mask, &lookup_hit_mask, + (void **)cls->entries); + + if (!ret && lookup_hit_mask) + cls->nb_pkts = nb_pkts; + else + cls->nb_pkts = 0; + + return ret; +} + +static int +action_apply(struct rte_flow_classifier *cls, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats) +{ + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; + uint64_t count = 0; + int i; + int ret = -EINVAL; + + switch (rule->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + for (i = 0; i < cls->nb_pkts; i++) { + if (rule->id == cls->entries[i]->rule_id) + count++; + } + if (count) { + ret = 0; + ntuple_stats = + (struct rte_flow_classify_ipv4_5tuple_stats *) + stats->stats; + ntuple_stats->counter1 = count; + ntuple_stats->ipv4_5tuple = rule->rules.u.ipv4_5tuple; + } + break; + default: + ret = -ENOTSUP; + break; + } + + return ret; +} + +int +rte_flow_classifier_query(struct rte_flow_classifier *cls, + uint32_t 
table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats) +{ + int ret = -EINVAL; + + if (!cls || !rule || !stats || !pkts || nb_pkts == 0 || + table_id >= cls->num_tables) + return ret; + + ret = flow_classifier_lookup(cls, table_id, pkts, nb_pkts); + if (!ret) + ret = action_apply(cls, rule, stats); + return ret; +} + +RTE_INIT(librte_flow_classify_init_log); + +static void +librte_flow_classify_init_log(void) +{ + librte_flow_classify_logtype = + rte_log_register("librte.flow_classify"); + if (librte_flow_classify_logtype >= 0) + rte_log_set_level(librte_flow_classify_logtype, RTE_LOG_DEBUG); +} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..f547788 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,287 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * The application should define the flow and the measurement criteria + * (action) for it. + * + * The library does not maintain any flow records itself; instead, flow + * information is returned to the upper layer only for the given packets. + * + * It is the application's responsibility to call rte_flow_classifier_query() + * for a burst of packets, just after receiving them or before transmitting + * them. + * The application should provide the flow type it is interested in and the + * measurement to apply to that flow via the rte_flow_classify_table_entry_add() + * API, and should provide the rte_flow_classifier object and storage for the + * results to the rte_flow_classifier_query() API. + * + * Usage: + * - application calls rte_flow_classifier_create() to create an + * rte_flow_classifier object. + * - application calls rte_flow_classify_table_create() to create a table + * in the rte_flow_classifier object. 
+ * - application calls rte_flow_classify_table_entry_add() to add a rule to + * the table in the rte_flow_classifier object. + * - application calls rte_flow_classifier_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * match packet information to flow information with some measurements. + * - rte_flow_classifier object can be destroyed when it is no longer needed + * with rte_flow_classifier_free() + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> +#include <rte_table_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +extern int librte_flow_classify_logtype; + +#define RTE_FLOW_CLASSIFY_LOG(level, fmt, args...) \ +rte_log(RTE_LOG_ ## level, librte_flow_classify_logtype, "%s(): " fmt, \ + __func__, ## args) + +/** Opaque data type for flow classifier */ +struct rte_flow_classifier; + +/** Opaque data type for flow classify rule */ +struct rte_flow_classify_rule; + +/** Flow classify rule type */ +enum rte_flow_classify_rule_type { + /** no type */ + RTE_FLOW_CLASSIFY_RULE_TYPE_NONE, + /** IPv4 5tuple type */ + RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE, +}; + +/** Flow classify table type */ +enum rte_flow_classify_table_type { + /** no type */ + RTE_FLOW_CLASSIFY_TABLE_TYPE_NONE, + /** ACL type */ + RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL, +}; + +/** + * Maximum number of tables allowed for any Flow Classifier instance. + * The value of this parameter cannot be changed. 
+ */ +#define RTE_FLOW_CLASSIFY_TABLE_MAX 64 + +/** Parameters for flow classifier creation */ +struct rte_flow_classifier_params { + /** flow classifier name */ + const char *name; + + /** CPU socket ID where memory for the flow classifier and its */ + /** elements (tables) should be allocated */ + int socket_id; + + /** Table type */ + enum rte_flow_classify_table_type type; +}; + +/** Parameters for table creation */ +struct rte_flow_classify_table_params { + /** Table operations (specific to each table type) */ + struct rte_table_ops *ops; + + /** Opaque param to be passed to the table create operation */ + void *arg_create; +}; + +/** IPv4 5-tuple data */ +struct rte_flow_classify_ipv4_5tuple { + uint32_t dst_ip; /**< Destination IP address in big endian. */ + uint32_t dst_ip_mask; /**< Mask of destination IP address. */ + uint32_t src_ip; /**< Source IP address in big endian. */ + uint32_t src_ip_mask; /**< Mask of source IP address. */ + uint16_t dst_port; /**< Destination port in big endian. */ + uint16_t dst_port_mask; /**< Mask of destination port. */ + uint16_t src_port; /**< Source port in big endian. */ + uint16_t src_port_mask; /**< Mask of source port. */ + uint8_t proto; /**< L4 protocol. */ + uint8_t proto_mask; /**< Mask of L4 protocol. */ +}; + +/** + * Flow stats + * + * For the count action, stats can be returned by the query API. + * + * Storage for stats is provided by the application. 
+ */ +struct rte_flow_classify_stats { + void *stats; +}; + +struct rte_flow_classify_ipv4_5tuple_stats { + /** count of packets that match IPv4 5tuple pattern */ + uint64_t counter1; + /** IPv4 5tuple data */ + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; +}; + +/** + * Flow classifier create + * + * @param params + * Parameters for flow classifier creation + * @return + * Handle to flow classifier instance on success or NULL otherwise + */ +struct rte_flow_classifier * +rte_flow_classifier_create(struct rte_flow_classifier_params *params); + +/** + * Flow classifier free + * + * @param cls + * Handle to flow classifier instance + * @return + * 0 on success, error code otherwise + */ +int +rte_flow_classifier_free(struct rte_flow_classifier *cls); + +/** + * Flow classify table create + * + * @param cls + * Handle to flow classifier instance + * @param params + * Parameters for flow_classify table creation + * @param table_id + * Table ID. Valid only within the scope of table IDs of the current + * classifier. Only returned after a successful invocation. + * @return + * 0 on success, error code otherwise + */ +int +rte_flow_classify_table_create(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id); + +/** + * Add a flow classify rule to the flow_classifier table. + * + * @param[in] cls + * Flow classifier handle + * @param[in] table_id + * id of table + * @param[out] key_found + * returns 1 if the key is present already, 0 otherwise. + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. 
+ */ +struct rte_flow_classify_rule * +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, + uint32_t table_id, + int *key_found, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Delete a flow classify rule from the flow_classifier table. + * + * @param[in] cls + * Flow classifier handle + * @param[in] table_id + * id of table + * @param[in] rule + * Flow classify rule + * @return + * 0 on success, error code otherwise. + */ +int +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_flow_classify_rule *rule); + +/** + * Query flow classifier for given rule. + * + * @param[in] cls + * Flow classifier handle + * @param[in] table_id + * id of table + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] rule + * Flow classify rule + * @param[out] stats + * Flow classify stats + * + * @return + * 0 on success, error code otherwise. + */ +int +rte_flow_classifier_query(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..dbfa111 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern for IPv4 5-tuple UDP filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern for IPv4 5-tuple TCP filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern for IPv4 5-tuple SCTP filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct 
rte_flow_item *pattern) +{ + uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do {\ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++;\ + item = pattern + index;\ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do {\ + act = actions + index;\ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++;\ + act = actions + index;\ + } \ + } while (0) + +/** + * Please 
be aware that there's an assumption for all the parsers. + * rte_flow_item is using big endian, rte_flow_attr and + * rte_flow_action are using CPU order. + * Because the pattern is used to describe the packets, + * normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is an n-tuple rule, + * and extract the n-tuple filter info along the way. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP, TCP or SCTP. + * The next not void item must be END. + * action: + * The first not void action should be COUNT. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * other members in mask and spec should be set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + if (!attr) { + rte_flow_error_set(error, 
EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -EINVAL; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -EINVAL; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -EINVAL; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -EINVAL; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -EINVAL; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -EINVAL; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -EINVAL; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -EINVAL; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -EINVAL; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only ingress is supported."); + return -EINVAL; + } + + /* not supported */ + if (attr->egress) { + memset(filter, 0, 
sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Not support egress."); + return -EINVAL; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Error priority."); + return -EINVAL; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..9a1b6f1 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,13 @@ +EXPERIMENTAL { + global: + + rte_flow_classifier_create; + rte_flow_classifier_free; + rte_flow_classifier_query; + 
rte_flow_classify_table_create; + rte_flow_classify_table_entry_add; + rte_flow_classify_table_entry_delete; + + local: *; + +} DPDK_17.11; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 8192b98..482656c 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library Bernard Iremonger @ 2017-10-24 19:39 ` Thomas Monjalon 2017-10-25 11:10 ` Iremonger, Bernard 2017-10-24 19:41 ` Thomas Monjalon ` (4 subsequent siblings) 5 siblings, 1 reply; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 19:39 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh 24/10/2017 19:28, Bernard Iremonger: > # > +# Compile librte_classify > +# > +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y > +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n The debug option is still there but seems not used. I guess I can remove it? ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-24 19:39 ` Thomas Monjalon @ 2017-10-25 11:10 ` Iremonger, Bernard 2017-10-25 12:13 ` Thomas Monjalon 0 siblings, 1 reply; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-25 11:10 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder, Iremonger, Bernard Hi Thomas, > -----Original Message----- > From: Thomas Monjalon [mailto:thomas@monjalon.net] > Sent: Tuesday, October 24, 2017 8:39 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com> > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com> > Subject: Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify > library > > 24/10/2017 19:28, Bernard Iremonger: > > # > > +# Compile librte_classify > > +# > > +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y > > +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n > > The debug option is still there but seems not used. > I guess I can remove it? The debug option is used in rte_flow_classify.c at line 158. It needs to be restored. Will I send a patch or can you restore it? Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-25 11:10 ` Iremonger, Bernard @ 2017-10-25 12:13 ` Thomas Monjalon 0 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-25 12:13 UTC (permalink / raw) To: Iremonger, Bernard Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder 25/10/2017 13:10, Iremonger, Bernard: > Hi Thomas, > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > 24/10/2017 19:28, Bernard Iremonger: > > > # > > > +# Compile librte_classify > > > +# > > > +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y > > > +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n > > > > The debug option is still there but seems not used. > > I guess I can remove it? > > The debug option is used in rte_flow_classify.c at line 158. > It needs to be restored. > Will I send a patch or can you restore it? No, the intent is to stop adding some debug options in the config. It is used in this lib to dump some data. I think it should be triggered dynamically with the log level. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library Bernard Iremonger 2017-10-24 19:39 ` Thomas Monjalon @ 2017-10-24 19:41 ` Thomas Monjalon 2017-10-24 19:43 ` Thomas Monjalon ` (3 subsequent siblings) 5 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 19:41 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh 24/10/2017 19:28, Bernard Iremonger: > +/** > + * @file > + * > + * RTE Flow Classify Library > + * > + * This library provides flow record information with some measured properties. I would add the EXPERIMENTAL tag here: @b EXPERIMENTAL: this API may change without prior notice ^ permalink raw reply [flat|nested] 145+ messages in thread
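Applied to the file comment quoted above, the resulting header would presumably read:

```c
/**
 * @file
 *
 * RTE Flow Classify Library
 *
 * @b EXPERIMENTAL: this API may change without prior notice
 *
 * This library provides flow record information with some measured properties.
 */
```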
* Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library Bernard Iremonger 2017-10-24 19:39 ` Thomas Monjalon 2017-10-24 19:41 ` Thomas Monjalon @ 2017-10-24 19:43 ` Thomas Monjalon 2017-10-24 20:05 ` Thomas Monjalon ` (2 subsequent siblings) 5 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 19:43 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh 24/10/2017 19:28, Bernard Iremonger: > --- /dev/null > +++ b/lib/librte_flow_classify/rte_flow_classify_version.map > @@ -0,0 +1,13 @@ > +EXPERIMENTAL { > + global: > + > + rte_flow_classifier_create; > + rte_flow_classifier_free; > + rte_flow_classifier_query; > + rte_flow_classify_table_create; > + rte_flow_classify_table_entry_add; > + rte_flow_classify_table_entry_delete; > + > + local: *; > + > +} DPDK_17.11; It does not compile in shared library mode. The reason is that you cannot inherit DPDK_17.11 block because it does not exist in this file. ^ permalink raw reply [flat|nested] 145+ messages in thread
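The trailing `} DPDK_17.11;` is an inheritance clause, and since this brand-new map file defines no `DPDK_17.11` block, the linker rejects it in shared-library mode. A sketch of the presumable fix is a standalone EXPERIMENTAL block with no inheritance:

```
EXPERIMENTAL {
	global:

	rte_flow_classifier_create;
	rte_flow_classifier_free;
	rte_flow_classifier_query;
	rte_flow_classify_table_create;
	rte_flow_classify_table_entry_add;
	rte_flow_classify_table_entry_delete;

	local: *;
};
```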
* Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library Bernard Iremonger ` (2 preceding siblings ...) 2017-10-24 19:43 ` Thomas Monjalon @ 2017-10-24 20:05 ` Thomas Monjalon 2017-10-24 20:16 ` Thomas Monjalon 2017-10-24 20:18 ` Thomas Monjalon 5 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 20:05 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh 24/10/2017 19:28, Bernard Iremonger: > +F: doc/guides/sample_app_ug/flow_classify.rst > +F: doc/guides/prog_guide/flow_classify_lib.rst These files are listed in MAINTAINERS but they are missing. They must be removed from the list. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library Bernard Iremonger ` (3 preceding siblings ...) 2017-10-24 20:05 ` Thomas Monjalon @ 2017-10-24 20:16 ` Thomas Monjalon 2017-10-24 20:18 ` Thomas Monjalon 5 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 20:16 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh 24/10/2017 19:28, Bernard Iremonger: > --- a/doc/guides/rel_notes/release_17_11.rst > +++ b/doc/guides/rel_notes/release_17_11.rst > @@ -165,6 +165,11 @@ New Features > checksums, and doesn't update checksums for output packets. > Additionally, the GSO library doesn't process IP fragmented packets. > > +* **Added the Flow Classification Library.** > + > + Added the Flow Classification library, it provides an API for DPDK > + applications to classify an input packet by matching it against a set of flow > + rules. It uses the librte_table API to manage the flow rules. > > Resolved Issues > --------------- An additional blank line is missing before the section title. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library Bernard Iremonger ` (4 preceding siblings ...) 2017-10-24 20:16 ` Thomas Monjalon @ 2017-10-24 20:18 ` Thomas Monjalon 5 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 20:18 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh 24/10/2017 19:28, Bernard Iremonger: > --- a/doc/guides/rel_notes/release_17_11.rst > +++ b/doc/guides/rel_notes/release_17_11.rst > +* **Added the Flow Classification Library.** > + > + Added the Flow Classification library, it provides an API for DPDK > + applications to classify an input packet by matching it against a set of flow > + rules. It uses the librte_table API to manage the flow rules. The library must be added to the list in the release notes: + librte_flow_classify.so.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
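Folding in both release-notes comments, the new-features entry would presumably end up with the blank line restored before the following section title:

```rst
* **Added the Flow Classification Library.**

  Added the Flow Classification library, it provides an API for DPDK
  applications to classify an input packet by matching it against a set of flow
  rules. It uses the librte_table API to manage the flow rules.


Resolved Issues
---------------
```

and the shared-library list in the same file would gain the entry `librte_flow_classify.so.1`.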
* [dpdk-dev] [PATCH v11 2/4] examples/flow_classify: flow classify sample application 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger ` (2 preceding siblings ...) 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 1/4] flow_classify: add flow classify library Bernard Iremonger @ 2017-10-24 17:28 ` Bernard Iremonger 2017-10-24 20:13 ` Thomas Monjalon 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 3/4] test: add packet burst generator functions Bernard Iremonger 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 4/4] test: flow classify library unit tests Bernard Iremonger 5 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-10-24 17:28 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify API's: rte_flow_classifier_create rte_flow_classifier_query rte_flow_classify_table_create rte_flow_classify_table_entry_add rte_flow_classify_table_entry_delete It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> Acked-by: Jasvinder Singh <jasvinder.singh@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 848 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 919 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..e4cabdb --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,848 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 + +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define FLOW_CLASSIFIER_NAME_SIZE 64 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +struct flow_classifier { + struct rte_flow_classifier *cls; + uint32_t table_id[RTE_FLOW_CLASSIFY_TABLE_MAX]; +}; + +struct flow_classifier_acl { + struct flow_classifier cls; +} __rte_cache_aligned; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static int num_classify_rules; +static struct rte_flow_classify_rule *rules[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify_ipv4_5tuple_stats ntuple_stats; +static struct rte_flow_classify_stats classify_stats = { + .stats = (void **)&ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add functions + */ + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. 
*/ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. */ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port classifying the packets and writing to an output port. 
+ */ +static __attribute__((noreturn)) void +lcore_main(struct flow_classifier *cls_app) +{ + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i = 0; + + ret = rte_flow_classify_table_entry_delete(cls_app->cls, + cls_app->table_id[0], rules[7]); + if (ret) + printf("table_entry_delete failed [7] %d\n\n", ret); + else + printf("table_entry_delete succeeded [7]\n\n"); + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != (int)rte_socket_id()) { + printf("\n\n"); + printf("WARNING: port %u is on remote NUMA node\n", + port); + printf("to polling thread.\n"); + printf("Performance will not be optimal.\n"); + + printf("\nCore %u forwarding packets. ", + rte_lcore_id()); + printf("[Ctrl+C to quit]\n"); + } + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + /* Get burst of RX packets, from first port of pair. */ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (rules[i]) { + ret = rte_flow_classifier_query( + cls_app->cls, + cls_app->table_id[0], + bufs, nb_rx, rules[i], + &classify_stats); + if (ret) + printf( + "rule [%d] query failed ret [%d]\n\n", + i, ret); + else { + printf( + "rule[%d] count=%"PRIu64"\n", + i, ntuple_stats.counter1); + + printf("proto = %d\n", + ntuple_stats.ipv4_5tuple.proto); + } + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. 
*/ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. + * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +get_cb_field(char **in, uint32_t *fd, int base, unsigned long lim, + char dlm) +{ + unsigned long val; + char *end; + + errno = 0; + val = strtoul(*in, &end, base); + if (errno != 0 || end[0] != dlm || val > lim) + return -EINVAL; + *fd = (uint32_t)val; + *in = end + 1; + return 0; +} + +static int +parse_ipv4_net(char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint32_t a, b, c, d, m; + + if (get_cb_field(&in, &a, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &b, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &c, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &d, 0, UINT8_MAX, '/')) + return -EINVAL; + if (get_cb_field(&in, &m, 0, sizeof(uint32_t) * CHAR_BIT, 0)) + return -EINVAL; + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + uint32_t temp; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + 
flow_classify_log("failed to read destination address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + if (get_cb_field(&in[CB_FLD_SRC_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_SRC_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_DST_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_DST_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, '/')) + return -EINVAL; + ntuple_filter->proto = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, 0)) + return -EINVAL; + ntuple_filter->proto_mask = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PRIORITY], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->priority = (uint16_t)temp; + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct rte_eth_ntuple_filter *ntuple_filter, + struct flow_classifier *cls_app) +{ + int ret = -1; + int key_found; + struct rte_flow_error
error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify_rule *rule; + uint8_t ipv4_proto; + + if (num_classify_rules >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: classify rule capacity %d reached\n", + num_classify_rules); + return ret; + } + + /* set up parameters for validate and add */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case IPPROTO_UDP: + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + 
udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + break; + case IPPROTO_TCP: + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case IPPROTO_SCTP: + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + return ret; + } + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add( + cls_app->cls, 
cls_app->table_id[0], &key_found, + &attr, pattern_ipv4_5tuple, actions, &error); + if (rule == NULL) { + printf("table entry add failed ipv4_proto = %u\n", + ipv4_proto); + ret = -1; + return ret; + } + + rules[num_classify_rules] = rule; + num_classify_rules++; + return 0; +} + +static int +add_rules(const char *rule_path, struct flow_classifier *cls_app) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter, cls_app) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s usage:\n", prgname); + printf("[EAL options] -- --"OPTION_RULE_IPV4"=FILE: "); + printf("specify the ipv4 rules file.\n"); + printf("Each rule occupies one line in the file.\n"); +} + +/* Parse the argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + {NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + 
parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* + * The main function, which does initialization and calls the lcore_main + * function. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + struct rte_flow_classify_table_params cls_table_params; + struct flow_classifier *cls_app; + struct rte_flow_classifier_params cls_params; + uint32_t size; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. */ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. 
Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* Memory allocation */ + size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct flow_classifier_acl)); + cls_app = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); + if (cls_app == NULL) + rte_exit(EXIT_FAILURE, "Cannot allocate classifier memory\n"); + + cls_params.name = "flow_classifier"; + cls_params.socket_id = socket_id; + cls_params.type = RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL; + + cls_app->cls = rte_flow_classifier_create(&cls_params); + if (cls_app->cls == NULL) { + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Cannot create classifier\n"); + } + + /* initialise ACL table params */ + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + /* initialise table create params */ + cls_table_params.ops = &rte_table_acl_ops, + cls_table_params.arg_create = &table_acl_params, + + ret = rte_flow_classify_table_create(cls_app->cls, &cls_table_params, + &cls_app->table_id[0]); + if (ret) { + rte_flow_classifier_free(cls_app->cls); + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Failed to create classifier table\n"); + } + + /* read file of IPv4 5 tuple rules and initialize parameters + * for rte_flow_classify_validate and rte_flow_classify_table_entry_add + * API's. + */ + if (add_rules(parm_config.rule_ipv4_name, cls_app)) { + rte_flow_classifier_free(cls_app->cls); + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + } + + /* Call lcore_main on the master core only. 
*/ + lcore_main(cls_app); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..dfa0631 --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0x0000 33 : 0x0000 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v11 2/4] examples/flow_classify: flow classify sample application 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-10-24 20:13 ` Thomas Monjalon 0 siblings, 0 replies; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 20:13 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh 24/10/2017 19:28, Bernard Iremonger: > examples/flow_classify/Makefile | 57 ++ > examples/flow_classify/flow_classify.c | 848 +++++++++++++++++++++++++++++ > examples/flow_classify/ipv4_rules_file.txt | 14 + The example app is not added in examples/Makefile. ^ permalink raw reply [flat|nested] 145+ messages in thread
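For reference, wiring a sample app into the Makefile-based build of that era amounts to a one-line addition. A sketch of the missing hunk (the exact placement, and whether upstream guarded it with a `CONFIG_` variable, may differ):

```makefile
# examples/Makefile (sketch): register the new sample application
DIRS-y += flow_classify
```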
* [dpdk-dev] [PATCH v11 3/4] test: add packet burst generator functions 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger ` (3 preceding siblings ...) 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-10-24 17:28 ` Bernard Iremonger 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 4/4] test: flow classify library unit tests Bernard Iremonger 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-24 17:28 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger add initialize_tcp_header function add initialize_sctp_header function add initialize_ipv4_header_proto function add generate_packet_burst_proto function Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> Acked-by: Jasvinder Singh <jasvinder.singh@intel.com> --- test/test/packet_burst_generator.c | 191 +++++++++++++++++++++++++++++++++++++ test/test/packet_burst_generator.h | 22 ++++- 2 files changed, 211 insertions(+), 2 deletions(-) diff --git a/test/test/packet_burst_generator.c b/test/test/packet_burst_generator.c index a93c3b5..8f4ddcc 100644 --- a/test/test/packet_burst_generator.c +++ b/test/test/packet_burst_generator.c @@ -134,6 +134,36 @@ return pkt_len; } +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct tcp_hdr)); + + memset(tcp_hdr, 0, sizeof(struct tcp_hdr)); + tcp_hdr->src_port = rte_cpu_to_be_16(src_port); + tcp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + + return pkt_len; +} + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct udp_hdr)); + + sctp_hdr->src_port = rte_cpu_to_be_16(src_port); + 
sctp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + sctp_hdr->tag = 0; + sctp_hdr->cksum = 0; /* No SCTP checksum. */ + + return pkt_len; +} uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -198,7 +228,53 @@ return pkt_len; } +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto) +{ + uint16_t pkt_len; + unaligned_uint16_t *ptr16; + uint32_t ip_cksum; + + /* + * Initialize IP header. + */ + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct ipv4_hdr)); + + ip_hdr->version_ihl = IP_VHL_DEF; + ip_hdr->type_of_service = 0; + ip_hdr->fragment_offset = 0; + ip_hdr->time_to_live = IP_DEFTTL; + ip_hdr->next_proto_id = proto; + ip_hdr->packet_id = 0; + ip_hdr->total_length = rte_cpu_to_be_16(pkt_len); + ip_hdr->src_addr = rte_cpu_to_be_32(src_addr); + ip_hdr->dst_addr = rte_cpu_to_be_32(dst_addr); + + /* + * Compute IP header checksum. + */ + ptr16 = (unaligned_uint16_t *)ip_hdr; + ip_cksum = 0; + ip_cksum += ptr16[0]; ip_cksum += ptr16[1]; + ip_cksum += ptr16[2]; ip_cksum += ptr16[3]; + ip_cksum += ptr16[4]; + ip_cksum += ptr16[6]; ip_cksum += ptr16[7]; + ip_cksum += ptr16[8]; ip_cksum += ptr16[9]; + /* + * Reduce 32 bit checksum to 16 bits and complement it. 
+ */ + ip_cksum = ((ip_cksum & 0xFFFF0000) >> 16) + + (ip_cksum & 0x0000FFFF); + ip_cksum %= 65536; + ip_cksum = (~ip_cksum) & 0x0000FFFF; + if (ip_cksum == 0) + ip_cksum = 0xFFFF; + ip_hdr->hdr_checksum = (uint16_t) ip_cksum; + + return pkt_len; +} /* * The maximum number of segments per packet is used when creating @@ -283,3 +359,118 @@ return nb_pkt; } + +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs) +{ + int i, nb_pkt = 0; + size_t eth_hdr_size; + + struct rte_mbuf *pkt_seg; + struct rte_mbuf *pkt; + + for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) { + pkt = rte_pktmbuf_alloc(mp); + if (pkt == NULL) { +nomore_mbuf: + if (nb_pkt == 0) + return -1; + break; + } + + pkt->data_len = pkt_len; + pkt_seg = pkt; + for (i = 1; i < nb_pkt_segs; i++) { + pkt_seg->next = rte_pktmbuf_alloc(mp); + if (pkt_seg->next == NULL) { + pkt->nb_segs = i; + rte_pktmbuf_free(pkt); + goto nomore_mbuf; + } + pkt_seg = pkt_seg->next; + pkt_seg->data_len = pkt_len; + } + pkt_seg->next = NULL; /* Last segment of packet. */ + + /* + * Copy headers in first packet segment(s). 
+ */ + if (vlan_enabled) + eth_hdr_size = sizeof(struct ether_hdr) + + sizeof(struct vlan_hdr); + else + eth_hdr_size = sizeof(struct ether_hdr); + + copy_buf_to_pkt(eth_hdr, eth_hdr_size, pkt, 0); + + if (ipv4) { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv4_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + default: + break; + } + } else { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv6_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + default: + break; + } + } + + /* + * Complete first mbuf of packet and append it to the + * burst of packets to be transmitted. 
+ */ + pkt->nb_segs = nb_pkt_segs; + pkt->pkt_len = pkt_len; + pkt->l2_len = eth_hdr_size; + + if (ipv4) { + pkt->vlan_tci = ETHER_TYPE_IPv4; + pkt->l3_len = sizeof(struct ipv4_hdr); + } else { + pkt->vlan_tci = ETHER_TYPE_IPv6; + pkt->l3_len = sizeof(struct ipv6_hdr); + } + + pkts_burst[nb_pkt] = pkt; + } + + return nb_pkt; +} diff --git a/test/test/packet_burst_generator.h b/test/test/packet_burst_generator.h index edc1044..3315bfa 100644 --- a/test/test/packet_burst_generator.h +++ b/test/test/packet_burst_generator.h @@ -43,7 +43,8 @@ #include <rte_arp.h> #include <rte_ip.h> #include <rte_udp.h> - +#include <rte_tcp.h> +#include <rte_sctp.h> #define IPV4_ADDR(a, b, c, d)(((a & 0xff) << 24) | ((b & 0xff) << 16) | \ ((c & 0xff) << 8) | (d & 0xff)) @@ -65,6 +66,13 @@ initialize_udp_header(struct udp_hdr *udp_hdr, uint16_t src_port, uint16_t dst_port, uint16_t pkt_data_len); +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -74,15 +82,25 @@ initialize_ipv4_header(struct ipv4_hdr *ip_hdr, uint32_t src_addr, uint32_t dst_addr, uint16_t pkt_data_len); +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto); + int generate_packet_burst(struct rte_mempool *mp, struct rte_mbuf **pkts_burst, struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, uint8_t ipv4, struct udp_hdr *udp_hdr, int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); + 
#ifdef __cplusplus } #endif - #endif /* PACKET_BURST_GENERATOR_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v11 4/4] test: flow classify library unit tests 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger ` (4 preceding siblings ...) 2017-10-24 17:28 ` [dpdk-dev] [PATCH v11 3/4] test: add packet burst generator functions Bernard Iremonger @ 2017-10-24 17:28 ` Bernard Iremonger 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-24 17:28 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by librte_flow_classify APIs. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify APIs: test with invalid parameters test with invalid patterns test with invalid actions test with valid parameters Initialise ipv4 udp traffic for use by the udp test for rte_flow_classifier_run. Initialise ipv4 tcp traffic for use by the tcp test for rte_flow_classifier_run. Initialise ipv4 sctp traffic for use by the sctp test for rte_flow_classifier_run. 
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> Acked-by: Jasvinder Singh <jasvinder.singh@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 672 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 234 ++++++++++++++ 3 files changed, 907 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index dcbe363..c2dbe40 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -107,6 +107,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..9f331cd --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,672 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +struct flow_classifier *cls; + +struct flow_classifier { + struct rte_flow_classifier *cls; + uint32_t table_id[RTE_FLOW_CLASSIFY_TABLE_MAX]; + uint32_t n_tables; +}; + +struct flow_classifier_acl { + struct flow_classifier cls; +} __rte_cache_aligned; + +/* + * test functions by passing invalid or + * non-workable parameters. 
+ */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + + rule = rte_flow_classify_table_entry_add(NULL, 1, NULL, NULL, NULL, + NULL, NULL); + if (rule) { + printf("Line %i: flow_classifier_table_entry_add", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(NULL, 1, NULL); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(NULL, 1, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(NULL, 1, NULL, NULL, NULL, + NULL, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add ", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(NULL, 1, NULL); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(NULL, 1, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line 
%i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item_bad; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_bad; + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_bad; + pattern[3] = end_item_bad; + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * 
rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf("should have failed!\n"); + return -1; + } + return 0; +} + +static int +init_ipv4_udp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 UDP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = 
initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_tcp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct tcp_hdr pkt_tcp_hdr; + uint32_t src_addr = IPV4_ADDR(1, 2, 3, 4); + uint32_t dst_addr = IPV4_ADDR(5, 6, 7, 8); + uint16_t src_port = 16; + uint16_t dst_port = 17; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 TCP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_TCP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_tcp_header(&pkt_tcp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + TCP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_TCP, + &pkt_tcp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_sctp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct sctp_hdr pkt_sctp_hdr; + uint32_t src_addr = IPV4_ADDR(11, 12, 13, 14); + uint32_t dst_addr = IPV4_ADDR(15, 16, 17, 18); + uint16_t src_port = 10; + uint16_t dst_port = 11; + uint16_t pktlen; + + static 
uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 SCTP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_SCTP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_sctp_header(&pkt_sctp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + SCTP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_SCTP, + &pkt_sctp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_udp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_udp_ipv4_traffic has failed!\n", + __LINE__); + return -1; 
+ } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &udp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_tcp(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_tcp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_tcp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_tcp_item_1; + pattern[2] = tcp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + 
printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &tcp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_sctp(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_sctp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_sctp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_sctp_item_1; + pattern[2] = sctp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &sctp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + struct 
rte_flow_classify_table_params cls_table_params; + struct rte_flow_classifier_params cls_params; + int socket_id; + int ret; + uint32_t size; + + socket_id = rte_eth_dev_socket_id(0); + + /* Memory allocation */ + size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct flow_classifier_acl)); + cls = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); + + cls_params.name = "flow_classifier"; + cls_params.socket_id = socket_id; + cls_params.type = RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL; + cls->cls = rte_flow_classifier_create(&cls_params); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + /* initialise table create params */ + cls_table_params.ops = &rte_table_acl_ops; + cls_table_params.arg_create = &table_acl_params; + + ret = rte_flow_classify_table_create(cls->cls, &cls_table_params, + &cls->table_id[0]); + if (ret) { + printf("Line %i: f_create has failed!\n", __LINE__); + rte_flow_classifier_free(cls->cls); + rte_free(cls); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + if (test_query_tcp() < 0) + return -1; + if (test_query_sctp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..39535cf --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,234 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel 
Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP, TCP and SCTP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* test UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +/* test TCP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 
dst spec 5.6.7.8 + * dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0, IPv4(1, 2, 3, 4), IPv4(5, 6, 7, 8)} +}; + +static struct rte_flow_item_tcp tcp_spec_1 = { + { 16, 17, 0, 0, 0, 0, 0, 0, 0} +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +/* test SCTP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 dst spec 5.6.7.8 + * dst mask 255.255.255.00 / sctp src is 16 dst is 17/ end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, IPv4(11, 12, 13, 14), + IPv4(15, 16, 17, 18)} +}; + +static struct rte_flow_item_sctp sctp_spec_1 = { + { 10, 11, 0, 0} +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + + +/* test actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* test attributes */ +static struct rte_flow_attr attr; + +/* test error */ +static struct rte_flow_error error; + +/* test pattern */ +static struct rte_flow_item pattern[4]; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .stats = (void *)&udp_ntuple_stats +}; + +/* flow classify data for TCP burst */ +static struct 
rte_flow_classify_ipv4_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .stats = (void *)&tcp_ntuple_stats +}; + +/* flow classify data for SCTP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .stats = (void *)&sctp_ntuple_stats +}; +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow classify library 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 " Bernard Iremonger 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger @ 2017-10-23 15:16 ` Bernard Iremonger 2017-10-23 16:03 ` Singh, Jasvinder 2017-10-24 9:50 ` Thomas Monjalon 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger ` (2 subsequent siblings) 4 siblings, 2 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-23 15:16 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following APIs are implemented in the librte_flow_classify library: rte_flow_classifier_create rte_flow_classifier_free rte_flow_classifier_query rte_flow_classify_table_create rte_flow_classify_table_entry_add rte_flow_classify_table_entry_delete The following librte_table APIs are used: f_create to create a table. f_add to add a rule to the table. f_del to delete a rule from the table. f_free to free a table. f_lookup to match packets with the rules. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- MAINTAINERS | 7 + config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 2 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 676 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 281 +++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 12 + mk/rte.app.mk | 1 + 13 files changed, 1659 insertions(+) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/MAINTAINERS b/MAINTAINERS index 9c08e36..9103ce1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -739,6 +739,13 @@ F: doc/guides/prog_guide/pdump_lib.rst F: app/pdump/ F: doc/guides/tools/pdump.rst +Flow classify +M: Bernard Iremonger <bernard.iremonger@intel.com> +F: lib/librte_flow_classify/ +F: test/test/test_flow_classify* +F: examples/flow_classify/ +F: doc/guides/sample_app_ug/flow_classify.rst +F: doc/guides/prog_guide/flow_classify_lib.rst Packet Framework ---------------- diff --git a/config/common_base b/config/common_base index d9471e8..e1079aa 100644 --- a/config/common_base +++ b/config/common_base @@ -707,6 +707,12 @@ CONFIG_RTE_LIBRTE_GSO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md 
b/doc/api/doxy-api-index.md index 07d6f4a..8c89d77 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -112,6 +112,7 @@ The public API headers are grouped by topics: [ACL] (@ref rte_acl.h), [EFD] (@ref rte_efd.h), [member] (@ref rte_member.h) + [flow_classify] (@ref rte_flow_classify.h), - **QoS**: [metering] (@ref rte_meter.h), diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 9e9fa56..9edb6fd 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf @@ -48,6 +48,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_gso \ lib/librte_hash \ diff --git a/lib/Makefile b/lib/Makefile index 86d475f..9f378e6 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -83,6 +83,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_net librte_table librte_acl DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index 2fa1199..67209ae 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -88,6 +88,7 @@ struct rte_logs { #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */ +#define RTE_LOGTYPE_CLASSIFY 21 /**< Log related to flow classify. */ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. 
*/ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..094f46d --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,676 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; +static uint32_t unique_id = 1; + + +struct rte_flow_classify_table_entry { + /* meta-data for classify rule */ + uint32_t rule_id; +}; + +struct rte_table { + /* Input parameters */ + struct rte_table_ops ops; + uint32_t entry_size; + enum rte_flow_classify_table_type type; + + /* Handle to the low-level table object */ + void *h_table; +}; + +#define RTE_FLOW_CLASSIFIER_MAX_NAME_SZ 256 + +struct rte_flow_classifier { + /* Input parameters */ + char name[RTE_FLOW_CLASSIFIER_MAX_NAME_SZ]; + int socket_id; + enum rte_flow_classify_table_type type; + + /* Internal tables */ + struct rte_table tables[RTE_FLOW_CLASSIFY_TABLE_MAX]; + uint32_t num_tables; + uint16_t nb_pkts; + struct rte_flow_classify_table_entry + *entries[RTE_PORT_IN_BURST_SIZE_MAX]; +} __rte_cache_aligned; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct acl_keys { + struct rte_table_acl_rule_add_params key_add; /* add key */ + struct rte_table_acl_rule_delete_params key_del; /* delete key */ +}; + +struct classify_rules { + enum rte_flow_classify_rule_type type; + union { + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; + } u; +}; + +struct rte_flow_classify_rule { + 
uint32_t id; /* unique ID of classify rule */ + struct rte_flow_action action; /* action when match found */ + struct classify_rules rules; /* union of rules */ + union { + struct acl_keys key; + } u; + int key_found; /* rule key found in table */ + void *entry; /* pointer to buffer to hold rule meta data */ + void *entry_ptr; /* handle to the table entry for rule meta data */ +}; + +static int +flow_classify_parse_flow( + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + free(items); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_acl_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + printf("%s: 
0x%02hhx/0x%hhx ", __func__, + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_acl_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("%s: 0x%02hhx/0x%hhx ", __func__, + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static int +rte_flow_classifier_check_params(struct rte_flow_classifier_params *params) +{ + if (params == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for parameter params\n", __func__); + return -EINVAL; + } + + /* name */ + if (params->name == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value 
for parameter name\n", __func__); + return -EINVAL; + } + + /* socket */ + if ((params->socket_id < 0) || + (params->socket_id >= RTE_MAX_NUMA_NODES)) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for parameter socket_id\n", + __func__); + return -EINVAL; + } + + return 0; +} + +struct rte_flow_classifier * +rte_flow_classifier_create(struct rte_flow_classifier_params *params) +{ + struct rte_flow_classifier *cls; + int ret; + + /* Check input parameters */ + ret = rte_flow_classifier_check_params(params); + if (ret != 0) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier params check failed (%d)\n", + __func__, ret); + return NULL; + } + + /* Allocate memory for the flow classifier */ + cls = rte_zmalloc_socket("FLOW_CLASSIFIER", + sizeof(struct rte_flow_classifier), + RTE_CACHE_LINE_SIZE, params->socket_id); + + if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier memory allocation failed\n", + __func__); + return NULL; + } + + /* Save input parameters */ + snprintf(cls->name, RTE_FLOW_CLASSIFIER_MAX_NAME_SZ, "%s", + params->name); + cls->socket_id = params->socket_id; + cls->type = params->type; + + /* Initialize flow classifier internal data structure */ + cls->num_tables = 0; + + return cls; +} + +static void +rte_flow_classify_table_free(struct rte_table *table) +{ + if (table->ops.f_free != NULL) + table->ops.f_free(table->h_table); +} + +int +rte_flow_classifier_free(struct rte_flow_classifier *cls) +{ + uint32_t i; + + /* Check input parameters */ + if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: rte_flow_classifier parameter is NULL\n", + __func__); + return -EINVAL; + } + + /* Free tables */ + for (i = 0; i < cls->num_tables; i++) { + struct rte_table *table = &cls->tables[i]; + + rte_flow_classify_table_free(table); + } + + /* Free flow classifier memory */ + rte_free(cls); + + return 0; +} + +static int +rte_table_check_params(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id) +{ 
+ if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier parameter is NULL\n", + __func__); + return -EINVAL; + } + if (params == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: params parameter is NULL\n", + __func__); + return -EINVAL; + } + if (table_id == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: table_id parameter is NULL\n", + __func__); + return -EINVAL; + } + + /* ops */ + if (params->ops == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: params->ops is NULL\n", + __func__); + return -EINVAL; + } + + if (params->ops->f_create == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: f_create function pointer is NULL\n", __func__); + return -EINVAL; + } + + if (params->ops->f_lookup == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: f_lookup function pointer is NULL\n", __func__); + return -EINVAL; + } + + /* Do we have room for one more table? */ + if (cls->num_tables == RTE_FLOW_CLASSIFY_TABLE_MAX) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for num_tables parameter\n", + __func__); + return -EINVAL; + } + + return 0; +} + +int +rte_flow_classify_table_create(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id) +{ + struct rte_table *table; + void *h_table; + uint32_t entry_size, id; + int ret; + + /* Check input arguments */ + ret = rte_table_check_params(cls, params, table_id); + if (ret != 0) + return ret; + + id = cls->num_tables; + table = &cls->tables[id]; + + /* calculate table entry size */ + entry_size = sizeof(struct rte_flow_classify_table_entry); + + /* Create the table */ + h_table = params->ops->f_create(params->arg_create, cls->socket_id, + entry_size); + if (h_table == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: Table creation failed\n", __func__); + return -EINVAL; + } + + /* Commit current table to the classifier */ + cls->num_tables++; + *table_id = id; + + /* Save input parameters */ + memcpy(&table->ops, params->ops, sizeof(struct rte_table_ops)); + + /* Initialize table internal data structure */ + table->entry_size = 
entry_size; + table->h_table = h_table; + + return 0; +} + +static struct rte_flow_classify_rule * +allocate_acl_ipv4_5tuple_rule(void) +{ + struct rte_flow_classify_rule *rule; + + rule = malloc(sizeof(struct rte_flow_classify_rule)); + if (!rule) + return rule; + + memset(rule, 0, sizeof(struct rte_flow_classify_rule)); + rule->id = unique_id++; + rule->rules.type = RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE; + + memcpy(&rule->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + /* key add values */ + rule->u.key.key_add.priority = ntuple_filter.priority; + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + rule->rules.u.ipv4_5tuple.proto = ntuple_filter.proto; + rule->rules.u.ipv4_5tuple.proto_mask = ntuple_filter.proto_mask; + + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + rule->rules.u.ipv4_5tuple.src_ip_mask = ntuple_filter.src_ip_mask; + rule->rules.u.ipv4_5tuple.src_ip = ntuple_filter.src_ip; + + rule->u.key.key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + rule->u.key.key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + rule->rules.u.ipv4_5tuple.dst_ip_mask = ntuple_filter.dst_ip_mask; + rule->rules.u.ipv4_5tuple.dst_ip = ntuple_filter.dst_ip; + + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + rule->rules.u.ipv4_5tuple.src_port_mask = ntuple_filter.src_port_mask; + rule->rules.u.ipv4_5tuple.src_port = ntuple_filter.src_port; + + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + 
rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + rule->rules.u.ipv4_5tuple.dst_port_mask = ntuple_filter.dst_port_mask; + rule->rules.u.ipv4_5tuple.dst_port = ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_acl_ipv4_key_add(&rule->u.key.key_add); +#endif + + /* key delete values */ + memcpy(&rule->u.key.key_del.field_value[PROTO_FIELD_IPV4], + &rule->u.key.key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_acl_ipv4_key_delete(&rule->u.key.key_del); +#endif + return rule; +} + +struct rte_flow_classify_rule * +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, + uint32_t table_id, + int *key_found, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_classify_rule *rule; + struct rte_flow_classify_table_entry *table_entry; + int ret; + + if (!error) + return NULL; + + if (!cls) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL classifier."); + return NULL; + } + + if (table_id >= cls->num_tables) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid table_id."); + return NULL; + } + + if (key_found == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL key_found."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = flow_classify_parse_flow(attr, pattern, actions, error); + if 
(ret < 0) + return NULL; + + switch (cls->type) { + case RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL: + rule = allocate_acl_ipv4_5tuple_rule(); + if (!rule) + return NULL; + break; + default: + return NULL; + } + + rule->entry = malloc(sizeof(struct rte_flow_classify_table_entry)); + if (!rule->entry) { + free(rule); + return NULL; + } + + table_entry = rule->entry; + table_entry->rule_id = rule->id; + + if (cls->tables[table_id].ops.f_add != NULL) { + ret = cls->tables[table_id].ops.f_add( + cls->tables[table_id].h_table, + &rule->u.key.key_add, + rule->entry, + &rule->key_found, + &rule->entry_ptr); + if (ret) { + free(rule->entry); + free(rule); + return NULL; + } + *key_found = rule->key_found; + } + return rule; +} + +int +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_flow_classify_rule *rule) +{ + int ret = -EINVAL; + + if (!cls || !rule || table_id >= cls->num_tables) + return ret; + + if (cls->tables[table_id].ops.f_delete != NULL) + ret = cls->tables[table_id].ops.f_delete( + cls->tables[table_id].h_table, + &rule->u.key.key_del, + &rule->key_found, + &rule->entry); + + return ret; +} + +static int +flow_classifier_lookup(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts) +{ + int ret = -EINVAL; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = cls->tables[table_id].ops.f_lookup( + cls->tables[table_id].h_table, + pkts, pkts_mask, &lookup_hit_mask, + (void **)cls->entries); + + if (!ret && lookup_hit_mask) + cls->nb_pkts = nb_pkts; + else + cls->nb_pkts = 0; + + return ret; +} + +static int +action_apply(struct rte_flow_classifier *cls, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats) +{ + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; + uint64_t count = 0; + int i; + int ret = -EINVAL; + + switch (rule->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + for 
(i = 0; i < cls->nb_pkts; i++) { + if (rule->id == cls->entries[i]->rule_id) + count++; + } + if (count) { + ret = 0; + ntuple_stats = + (struct rte_flow_classify_ipv4_5tuple_stats *) + stats->stats; + ntuple_stats->counter1 = count; + ntuple_stats->ipv4_5tuple = rule->rules.u.ipv4_5tuple; + } + break; + default: + ret = -ENOTSUP; + break; + } + + return ret; +} + +int +rte_flow_classifier_query(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats) +{ + int ret = -EINVAL; + + if (!cls || !rule || !stats || !pkts || nb_pkts == 0 || + table_id >= cls->num_tables) + return ret; + + ret = flow_classifier_lookup(cls, table_id, pkts, nb_pkts); + if (!ret) + ret = action_apply(cls, rule, stats); + return ret; +} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..f8838af --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,281 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * Application should define the flow and measurement criteria (action) for it. + * + * The Library doesn't maintain any flow records itself, instead flow + * information is returned to upper layer only for given packets. + * + * It is application's responsibility to call rte_flow_classifier_query() + * for a burst of packets, just after receiving them or before transmitting + * them. + * Application should provide the flow type interested in, measurement to apply + * to that flow in rte_flow_classify_table_entry_add() API, and should provide + * the rte_flow_classifier object and storage to put results in for the + * rte_flow_classifier_query() API. + * + * Usage: + * - application calls rte_flow_classifier_create() to create an + * rte_flow_classifier object. + * - application calls rte_flow_classify_table_create() to create a table + * in the rte_flow_classifier object. 
+ * - application calls rte_flow_classify_table_entry_add() to add a rule to + * the table in the rte_flow_classifier object. + * - application calls rte_flow_classifier_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * match packet information to flow information with some measurements. + * - rte_flow_classifier object can be destroyed when it is no longer needed + * with rte_flow_classifier_free() + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> +#include <rte_table_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/** Opaque data type for flow classifier */ +struct rte_flow_classifier; + +/** Opaque data type for flow classify rule */ +struct rte_flow_classify_rule; + +/** Flow classify rule type */ +enum rte_flow_classify_rule_type { + /** no type */ + RTE_FLOW_CLASSIFY_RULE_TYPE_NONE, + /** IPv4 5tuple type */ + RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE, +}; + +/** Flow classify table type */ +enum rte_flow_classify_table_type { + /** no type */ + RTE_FLOW_CLASSIFY_TABLE_TYPE_NONE, + /** ACL type */ + RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL, +}; + +/** + * Maximum number of tables allowed for any Flow Classifier instance. + * The value of this parameter cannot be changed. 
+ */
+#define RTE_FLOW_CLASSIFY_TABLE_MAX 64
+
+/** Parameters for flow classifier creation */
+struct rte_flow_classifier_params {
+	/** flow classifier name */
+	const char *name;
+
+	/** CPU socket ID where memory for the flow classifier and its
+	 * elements (tables) should be allocated
+	 */
+	int socket_id;
+
+	/** Table type */
+	enum rte_flow_classify_table_type type;
+};
+
+/** Parameters for table creation */
+struct rte_flow_classify_table_params {
+	/** Table operations (specific to each table type) */
+	struct rte_table_ops *ops;
+
+	/** Opaque param to be passed to the table create operation */
+	void *arg_create;
+};
+
+/** IPv4 5-tuple data */
+struct rte_flow_classify_ipv4_5tuple {
+	uint32_t dst_ip;        /**< Destination IP address in big endian. */
+	uint32_t dst_ip_mask;   /**< Mask of destination IP address. */
+	uint32_t src_ip;        /**< Source IP address in big endian. */
+	uint32_t src_ip_mask;   /**< Mask of source IP address. */
+	uint16_t dst_port;      /**< Destination port in big endian. */
+	uint16_t dst_port_mask; /**< Mask of destination port. */
+	uint16_t src_port;      /**< Source port in big endian. */
+	uint16_t src_port_mask; /**< Mask of source port. */
+	uint8_t proto;          /**< L4 protocol. */
+	uint8_t proto_mask;     /**< Mask of L4 protocol. */
+};
+
+/**
+ * Flow stats
+ *
+ * For the count action, stats can be returned by the query API.
+ *
+ * Storage for stats is provided by the application.
+ */
+struct rte_flow_classify_stats {
+	void *stats;
+};
+
+struct rte_flow_classify_ipv4_5tuple_stats {
+	/** count of packets that match IPv4 5tuple pattern */
+	uint64_t counter1;
+	/** IPv4 5tuple data */
+	struct rte_flow_classify_ipv4_5tuple ipv4_5tuple;
+};
+
+/**
+ * Flow classifier create
+ *
+ * @param params
+ *   Parameters for flow classifier creation
+ * @return
+ *   Handle to flow classifier instance on success or NULL otherwise
+ */
+struct rte_flow_classifier *
+rte_flow_classifier_create(struct rte_flow_classifier_params *params);
+
+/**
+ * Flow classifier free
+ *
+ * @param cls
+ *   Handle to flow classifier instance
+ * @return
+ *   0 on success, error code otherwise
+ */
+int
+rte_flow_classifier_free(struct rte_flow_classifier *cls);
+
+/**
+ * Flow classify table create
+ *
+ * @param cls
+ *   Handle to flow classifier instance
+ * @param params
+ *   Parameters for flow_classify table creation
+ * @param table_id
+ *   Table ID. Valid only within the scope of table IDs of the current
+ *   classifier. Only returned after a successful invocation.
+ * @return
+ *   0 on success, error code otherwise
+ */
+int
+rte_flow_classify_table_create(struct rte_flow_classifier *cls,
+	struct rte_flow_classify_table_params *params,
+	uint32_t *table_id);
+
+/**
+ * Add a flow classify rule to the flow_classifier table.
+ *
+ * @param[in] cls
+ *   Flow classifier handle
+ * @param[in] table_id
+ *   id of table
+ * @param[out] key_found
+ *   returns 1 if the key is already present, 0 otherwise.
+ * @param[in] attr
+ *   Flow rule attributes
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Structure
+ *   initialised in case of error only.
+ * @return
+ *   A valid handle in case of success, NULL otherwise.
+ */
+struct rte_flow_classify_rule *
+rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls,
+	uint32_t table_id,
+	int *key_found,
+	const struct rte_flow_attr *attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_flow_error *error);
+
+/**
+ * Delete a flow classify rule from the flow_classifier table.
+ *
+ * @param[in] cls
+ *   Flow classifier handle
+ * @param[in] table_id
+ *   id of table
+ * @param[in] rule
+ *   Flow classify rule
+ * @return
+ *   0 on success, error code otherwise.
+ */
+int
+rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls,
+	uint32_t table_id,
+	struct rte_flow_classify_rule *rule);
+
+/**
+ * Query flow classifier for given rule.
+ *
+ * @param[in] cls
+ *   Flow classifier handle
+ * @param[in] table_id
+ *   id of table
+ * @param[in] pkts
+ *   Pointer to packets to process
+ * @param[in] nb_pkts
+ *   Number of packets to process
+ * @param[in] rule
+ *   Flow classify rule
+ * @param[in] stats
+ *   Flow classify stats
+ *
+ * @return
+ *   0 on success, error code otherwise.
+ */
+int
+rte_flow_classifier_query(struct rte_flow_classifier *cls,
+	uint32_t table_id,
+	struct rte_mbuf **pkts,
+	const uint16_t nb_pkts,
+	struct rte_flow_classify_rule *rule,
+	struct rte_flow_classify_stats *stats);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_FLOW_CLASSIFY_H_ */
diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c
new file mode 100644
index 0000000..dbfa111
--- /dev/null
+++ b/lib/librte_flow_classify/rte_flow_classify_parse.c
@@ -0,0 +1,546 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern for IPv4 5-tuple UDP filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern for IPv4 5-tuple TCP filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern for IPv4 5-tuple SCTP filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct 
rte_flow_item *pattern) +{ + uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do {\ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++;\ + item = pattern + index;\ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do {\ + act = actions + index;\ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++;\ + act = actions + index;\ + } \ + } while (0) + +/** + * Please 
be aware that there is an assumption for all the parsers:
+ * rte_flow_item uses big endian, while rte_flow_attr and
+ * rte_flow_action use CPU order.
+ * Because the pattern is used to describe the packets,
+ * normally the packets should use network order.
+ */
+
+/**
+ * Parse the rule to see if it is an n-tuple rule,
+ * and extract the n-tuple filter info as well.
+ * pattern:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP, TCP or SCTP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be COUNT.
+ * The next not void action should be END.
+ * pattern example:
+ * ITEM		Spec			Mask
+ * ETH		NULL			NULL
+ * IPV4		src_addr 192.168.1.20	0xFFFFFFFF
+ *		dst_addr 192.167.3.50	0xFFFFFFFF
+ *		next_proto_id	17	0xFF
+ * UDP/TCP/	src_port	80	0xFFFF
+ * SCTP		dst_port	80	0xFFFF
+ * END
+ * other members in mask and spec should be set to 0x00.
+ * item->last should be NULL.
+ */
+static int
+classify_parse_ntuple_filter(const struct rte_flow_attr *attr,
+	const struct rte_flow_item pattern[],
+	const struct rte_flow_action actions[],
+	struct rte_eth_ntuple_filter *filter,
+	struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -EINVAL;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+			NULL, "NULL action.");
+		return -EINVAL;
+	}
+	if (!attr) {
+		rte_flow_error_set(error,
EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -EINVAL; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -EINVAL; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -EINVAL; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -EINVAL; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -EINVAL; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -EINVAL; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -EINVAL; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */
+		if (sctp_mask->hdr.tag ||
+		    sctp_mask->hdr.cksum) {
+			memset(filter, 0,
+				sizeof(struct rte_eth_ntuple_filter));
+			rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM,
+				item, "Not supported by ntuple filter");
+			return -EINVAL;
+		}
+
+		filter->dst_port_mask = sctp_mask->hdr.dst_port;
+		filter->src_port_mask = sctp_mask->hdr.src_port;
+
+		sctp_spec = (const struct rte_flow_item_sctp *)item->spec;
+		filter->dst_port = sctp_spec->hdr.dst_port;
+		filter->src_port = sctp_spec->hdr.src_port;
+	}
+
+	/* check if the next not void item is END */
+	index++;
+	NEXT_ITEM_OF_PATTERN(item, pattern, index);
+	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item, "Not supported by ntuple filter");
+		return -EINVAL;
+	}
+
+	/* parse action */
+	index = 0;
+
+	/**
+	 * n-tuple only supports count,
+	 * check if the first not void action is COUNT.
+	 */
+	memset(&action, 0, sizeof(action));
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -EINVAL;
+	}
+	action.type = RTE_FLOW_ACTION_TYPE_COUNT;
+
+	/* check if the next not void action is END */
+	index++;
+	NEXT_ITEM_OF_ACTION(act, actions, index);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION,
+			act, "Not supported action.");
+		return -EINVAL;
+	}
+
+	/* parse attr */
+	/* must be input direction */
+	if (!attr->ingress) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+			attr, "Only support ingress.");
+		return -EINVAL;
+	}
+
+	/* not supported */
+	if (attr->egress) {
+		memset(filter, 0,
+			sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+			attr, "Egress is not supported.");
+		return -EINVAL;
+	}
+
+	if (attr->priority > 0xFFFF) {
+		memset(filter, 0, sizeof(struct rte_eth_ntuple_filter));
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+			attr, "Invalid priority.");
+		return -EINVAL;
+	}
+	filter->priority = (uint16_t)attr->priority;
+	if (attr->priority > FLOW_RULE_MIN_PRIORITY)
+		filter->priority = FLOW_RULE_MAX_PRIORITY;
+
+	return 0;
+}
diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h
new file mode 100644
index 0000000..1d4708a
--- /dev/null
+++ b/lib/librte_flow_classify/rte_flow_classify_parse.h
@@ -0,0 +1,74 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..f7695cb --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,12 @@ +EXPERIMENTAL { + global: + + rte_flow_classifier_create; + rte_flow_classifier_free; + rte_flow_classifier_query; + 
rte_flow_classify_table_create; + rte_flow_classify_table_entry_add; + rte_flow_classify_table_entry_delete; + + local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 8192b98..482656c 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow classify library 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow classify library Bernard Iremonger @ 2017-10-23 16:03 ` Singh, Jasvinder 2017-10-24 9:50 ` Thomas Monjalon 1 sibling, 0 replies; 145+ messages in thread From: Singh, Jasvinder @ 2017-10-23 16:03 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil > -----Original Message----- > From: Iremonger, Bernard > Sent: Monday, October 23, 2017 4:16 PM > To: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com> > Cc: Iremonger, Bernard <bernard.iremonger@intel.com> > Subject: [PATCH v10 1/4] librte_flow_classify: add flow classify library > > From: Ferruh Yigit <ferruh.yigit@intel.com> > > The following APIs are implemented in the > librte_flow_classify library: > > rte_flow_classifier_create > rte_flow_classifier_free > rte_flow_classifier_query > rte_flow_classify_table_create > rte_flow_classify_table_entry_add > rte_flow_classify_table_entry_delete > > The following librte_table APIs are used: > f_create to create a table. > f_add to add a rule to the table. > f_del to delete a rule from the table. > f_free to free a table. > f_lookup to match packets with the rules. > > The library supports counting of IPv4 five tuple packets only, > i.e. IPv4 UDP, TCP and SCTP packets. > > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- Acked-by: Jasvinder Singh <jasvinder.singh@intel.com> ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow classify library 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow classify library Bernard Iremonger 2017-10-23 16:03 ` Singh, Jasvinder @ 2017-10-24 9:50 ` Thomas Monjalon 2017-10-24 10:09 ` Iremonger, Bernard 1 sibling, 1 reply; 145+ messages in thread From: Thomas Monjalon @ 2017-10-24 9:50 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Hi, Few comments detailed below. The new compilation dependencies management needs changes in the Makefile. And the new log system should be used. > --- a/MAINTAINERS > +++ b/MAINTAINERS > @@ -739,6 +739,13 @@ F: doc/guides/prog_guide/pdump_lib.rst > F: app/pdump/ > F: doc/guides/tools/pdump.rst > > +Flow classify > +M: Bernard Iremonger <bernard.iremonger@intel.com> > +F: lib/librte_flow_classify/ > +F: test/test/test_flow_classify* > +F: examples/flow_classify/ > +F: doc/guides/sample_app_ug/flow_classify.rst > +F: doc/guides/prog_guide/flow_classify_lib.rst I don't know how to classify this classify library :) If it is using librte_table, should it be part of Packet Framework? If not part of Packet Framework, please move it before "Distributor". The library is missing in the release notes (.so section and new features). > --- a/lib/librte_eal/common/include/rte_log.h > +++ b/lib/librte_eal/common/include/rte_log.h > @@ -88,6 +88,7 @@ struct rte_logs { > #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ > #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ > #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */ > +#define RTE_LOGTYPE_CLASSIFY 21 /**< Log related to flow classify. */ We must stop adding the legacy log types. Please switch to dynamic logs and remove CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG.
> +CFLAGS += -O3 > +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) Now the dependencies to internal libraries must be explicitly declared in LDLIBS. > --- a/mk/rte.app.mk > +++ b/mk/rte.app.mk > @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib > # > # Order is important: from higher level to lower level > # > +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify > _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline > _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table > _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port Yes, rte_flow_classify is on top of packet framework. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow classify library 2017-10-24 9:50 ` Thomas Monjalon @ 2017-10-24 10:09 ` Iremonger, Bernard 0 siblings, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-24 10:09 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil, Singh, Jasvinder Hi Thomas, > -----Original Message----- > From: Thomas Monjalon [mailto:thomas@monjalon.net] > Sent: Tuesday, October 24, 2017 10:51 AM > To: Iremonger, Bernard <bernard.iremonger@intel.com> > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com> > Subject: Re: [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow > classify library > > Hi, > > Few comments detailed below. > The new compilation dependencies management needs changes in the > Makefile. > And the new log system should be used. I will send a v11 patch set. > > --- a/MAINTAINERS > > +++ b/MAINTAINERS > > @@ -739,6 +739,13 @@ F: doc/guides/prog_guide/pdump_lib.rst > > F: app/pdump/ > > F: doc/guides/tools/pdump.rst > > > > +Flow classify > > +M: Bernard Iremonger <bernard.iremonger@intel.com> > > +F: lib/librte_flow_classify/ > > +F: test/test/test_flow_classify* > > +F: examples/flow_classify/ > > +F: doc/guides/sample_app_ug/flow_classify.rst > > +F: doc/guides/prog_guide/flow_classify_lib.rst > > I don't how to classify this classify library :) If it is using librte_table, it should > be part of Packet Framework? No, it is not intended to be part of Packet Framework. > If not part of Packet Framework, please move it before "Distributor". Ok, I will move it to before "Distributor" > The library is missing in the release notes (.so section and new features). I will add it to the release notes. 
> > --- a/lib/librte_eal/common/include/rte_log.h > > +++ b/lib/librte_eal/common/include/rte_log.h > > @@ -88,6 +88,7 @@ struct rte_logs { > > #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ > > #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ > > #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */ > > +#define RTE_LOGTYPE_CLASSIFY 21 /**< Log related to flow classify. > > +*/ > > We must stop adding the legacy log types. > Please switch to dynamic logs and remove > CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG. Ok, will do. > > +CFLAGS += -O3 > > +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) > > Now the dependencies to internal libraries must be explicitly declared in > LDLIBS. Ok, will do. > > --- a/mk/rte.app.mk > > +++ b/mk/rte.app.mk > > @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is > > important: from higher level to lower level # > > +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify > > _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline > > _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table > > _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port > > Yes, rte_flow_classify is on top of packet framework. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v10 2/4] examples/flow_classify: flow classify sample application 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 " Bernard Iremonger 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 " Bernard Iremonger 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 1/4] librte_flow_classify: add flow classify library Bernard Iremonger @ 2017-10-23 15:16 ` Bernard Iremonger 2017-10-23 16:04 ` Singh, Jasvinder 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 3/4] test: add packet burst generator functions Bernard Iremonger 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-10-23 15:16 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify API's: rte_flow_classifier_create rte_flow_classifier_query rte_flow_classify_table_create rte_flow_classify_table_entry_add rte_flow_classify_table_entry_delete It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 849 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 920 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..55e3e82 --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,849 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 + +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define FLOW_CLASSIFIER_NAME_SIZE 64 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +struct flow_classifier { + struct rte_flow_classifier *cls; + uint32_t table_id[RTE_FLOW_CLASSIFY_TABLE_MAX]; +}; + +struct flow_classifier_acl { + struct flow_classifier cls; +} __rte_cache_aligned; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static int num_classify_rules; +static struct rte_flow_classify_rule *rules[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify_ipv4_5tuple_stats ntuple_stats; +static struct rte_flow_classify_stats classify_stats = { + .stats = (void **)&ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add functions + */ + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. 
*/ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. */ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port classifying the packets and writing to an output port. 
+ */ +static __attribute__((noreturn)) void +lcore_main(struct flow_classifier *cls_app) +{ + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i = 0; + + ret = rte_flow_classify_table_entry_delete(cls_app->cls, + cls_app->table_id[0], rules[7]); + if (ret) + printf("table_entry_delete failed [7] %d\n\n", ret); + else + printf("table_entry_delete succeeded [7]\n\n"); + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) + printf("\n\n"); + printf("WARNING: port %u is on remote NUMA node\n", + port); + printf("to polling thread.\n"); + printf("Performance will not be optimal.\n"); + + printf("\nCore %u forwarding packets. ", + rte_lcore_id()); + printf("[Ctrl+C to quit]\n"); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + /* Get burst of RX packets, from first port of pair. */ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (rules[i]) { + ret = rte_flow_classifier_query( + cls_app->cls, + cls_app->table_id[0], + bufs, nb_rx, rules[i], + &classify_stats); + if (ret) + printf( + "rule [%d] query failed ret [%d]\n\n", + i, ret); + else { + printf( + "rule [%d] counter1=%lu\n", + i, ntuple_stats.counter1); + + printf("proto = %d\n", + ntuple_stats.ipv4_5tuple.proto); + } + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. 
*/ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. + * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +get_cb_field(char **in, uint32_t *fd, int base, unsigned long lim, + char dlm) +{ + unsigned long val; + char *end; + + errno = 0; + val = strtoul(*in, &end, base); + if (errno != 0 || end[0] != dlm || val > lim) + return -EINVAL; + *fd = (uint32_t)val; + *in = end + 1; + return 0; +} + +static int +parse_ipv4_net(char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint32_t a, b, c, d, m; + + if (get_cb_field(&in, &a, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &b, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &c, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &d, 0, UINT8_MAX, '/')) + return -EINVAL; + if (get_cb_field(&in, &m, 0, sizeof(uint32_t) * CHAR_BIT, 0)) + return -EINVAL; + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + uint32_t temp; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + 
flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + if (get_cb_field(&in[CB_FLD_SRC_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_SRC_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_DST_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_DST_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, '/')) + return -EINVAL; + ntuple_filter->proto = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, 0)) + return -EINVAL; + ntuple_filter->proto_mask = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PRIORITY], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->priority = (uint16_t)temp; + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct rte_eth_ntuple_filter *ntuple_filter, + struct flow_classifier *cls_app) +{ + int ret = -1; + int key_found; + struct rte_flow_error 
error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify_rule *rule; + uint8_t ipv4_proto; + + if (num_classify_rules >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: classify rule capacity %d reached\n", + num_classify_rules); + return ret; + } + + /* set up parameters for validate and add */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case IPPROTO_UDP: + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + 
udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + break; + case IPPROTO_TCP: + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case IPPROTO_SCTP: + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + return ret; + } + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add( + cls_app->cls, 
cls_app->table_id[0], &key_found, + &attr, pattern_ipv4_5tuple, actions, &error); + if (rule == NULL) { + printf("table entry add failed ipv4_proto = %u\n", + ipv4_proto); + ret = -1; + return ret; + } + + rules[num_classify_rules] = rule; + num_classify_rules++; + return 0; +} + +static int +add_rules(const char *rule_path, struct flow_classifier *cls_app) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter, cls_app) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s usage:\n", prgname); + printf("[EAL options] -- --"OPTION_RULE_IPV4"=FILE: "); + printf("specify the ipv4 rules file.\n"); + printf("Each rule occupies one line in the file.\n"); +} + +/* Parse the argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + {NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + 
parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* + * The main function, which does initialization and calls the lcore_main + * function. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + struct rte_flow_classify_table_params cls_table_params; + struct flow_classifier *cls_app; + struct rte_flow_classifier_params cls_params; + uint32_t size; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. */ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. 
Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* Memory allocation */ + size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct flow_classifier_acl)); + cls_app = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); + if (cls_app == NULL) + rte_exit(EXIT_FAILURE, "Cannot allocate classifier memory\n"); + + cls_params.name = "flow_classifier"; + cls_params.socket_id = socket_id; + cls_params.type = RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL; + + cls_app->cls = rte_flow_classifier_create(&cls_params); + if (cls_app->cls == NULL) { + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Cannot create classifier\n"); + } + + /* initialise ACL table params */ + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + /* initialise table create params */ + cls_table_params.ops = &rte_table_acl_ops; + cls_table_params.arg_create = &table_acl_params; + + ret = rte_flow_classify_table_create(cls_app->cls, &cls_table_params, + &cls_app->table_id[0]); + if (ret) { + rte_flow_classifier_free(cls_app->cls); + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Failed to create classifier table\n"); + } + + /* read file of IPv4 5 tuple rules and initialize parameters + * for rte_flow_classify_validate and rte_flow_classify_table_entry_add + * APIs. + */ + if (add_rules(parm_config.rule_ipv4_name, cls_app)) { + rte_flow_classifier_free(cls_app->cls); + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + } + + /* Call lcore_main on the master core only. 
*/ + lcore_main(cls_app); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..dfa0631 --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0x0000 33 : 0x0000 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
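The rule file above maps one line to one ntuple filter: `src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority`. The sample app's real parser, `parse_ipv4_5tuple_rule()`, is not shown in this hunk; purely as an illustration of the layout, here is a hypothetical `sscanf()`-based sketch (struct and function names invented here, plain C, no DPDK headers):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical holder for one parsed rule line (illustration only). */
struct rule5 {
	uint32_t src_ip, dst_ip;
	unsigned src_len, dst_len;
	unsigned src_port, src_port_mask, dst_port, dst_port_mask;
	unsigned proto, proto_mask, priority;
};

/* Parse one line of the rule-file format shown above; returns 0 on
 * success, -1 if the line does not yield all 17 expected fields. */
static int parse_rule_line(const char *line, struct rule5 *r)
{
	unsigned a, b, c, d, e, f, g, h;
	int n = sscanf(line,
		"%u.%u.%u.%u/%u %u.%u.%u.%u/%u %u : %x %u : %x %u/%x %u",
		&a, &b, &c, &d, &r->src_len,
		&e, &f, &g, &h, &r->dst_len,
		&r->src_port, &r->src_port_mask,
		&r->dst_port, &r->dst_port_mask,
		&r->proto, &r->proto_mask, &r->priority);
	if (n != 17)
		return -1;
	/* Pack dotted-quad addresses into host-order 32-bit values,
	 * like the IPV4_ADDR() macro used elsewhere in the series. */
	r->src_ip = (uint32_t)((a << 24) | (b << 16) | (c << 8) | d);
	r->dst_ip = (uint32_t)((e << 24) | (f << 16) | (g << 8) | h);
	return 0;
}
```

Note that `%x` accepts the `0x` prefix used by the mask columns, and the `/` and `:` separators are matched literally.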
* Re: [dpdk-dev] [PATCH v10 2/4] examples/flow_classify: flow classify sample application 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-10-23 16:04 ` Singh, Jasvinder 0 siblings, 0 replies; 145+ messages in thread From: Singh, Jasvinder @ 2017-10-23 16:04 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil > -----Original Message----- > From: Iremonger, Bernard > Sent: Monday, October 23, 2017 4:16 PM > To: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com> > Cc: Iremonger, Bernard <bernard.iremonger@intel.com> > Subject: [PATCH v10 2/4] examples/flow_classify: flow classify sample > application > > The flow_classify sample application exercises the following > librte_flow_classify API's: > > rte_flow_classifier_create > rte_flow_classifier_query > rte_flow_classify_table_create > rte_flow_classify_table_entry_add > rte_flow_classify_table_entry_delete > > It sets up the IPv4 ACL field definitions. > It creates table_acl and adds and deletes rules using the librte_table API. > > It uses a file of IPv4 five tuple rules for input. > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- Acked-by: Jasvinder Singh <jasvinder.singh@intel.com> ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v10 3/4] test: add packet burst generator functions 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 " Bernard Iremonger ` (2 preceding siblings ...) 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-10-23 15:16 ` Bernard Iremonger 2017-10-23 16:05 ` Singh, Jasvinder 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-10-23 15:16 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger add initialize_tcp_header function add initialize_sctp_header function add initialize_ipv4_header_proto function add generate_packet_burst_proto function Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/packet_burst_generator.c | 191 +++++++++++++++++++++++++++++++++++++ test/test/packet_burst_generator.h | 22 ++++- 2 files changed, 211 insertions(+), 2 deletions(-) diff --git a/test/test/packet_burst_generator.c b/test/test/packet_burst_generator.c index a93c3b5..8f4ddcc 100644 --- a/test/test/packet_burst_generator.c +++ b/test/test/packet_burst_generator.c @@ -134,6 +134,36 @@ return pkt_len; } +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct tcp_hdr)); + + memset(tcp_hdr, 0, sizeof(struct tcp_hdr)); + tcp_hdr->src_port = rte_cpu_to_be_16(src_port); + tcp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + + return pkt_len; +} + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct sctp_hdr)); + + sctp_hdr->src_port = rte_cpu_to_be_16(src_port); + sctp_hdr->dst_port = 
rte_cpu_to_be_16(dst_port); + sctp_hdr->tag = 0; + sctp_hdr->cksum = 0; /* No SCTP checksum. */ + + return pkt_len; +} uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -198,7 +228,53 @@ return pkt_len; } +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto) +{ + uint16_t pkt_len; + unaligned_uint16_t *ptr16; + uint32_t ip_cksum; + + /* + * Initialize IP header. + */ + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct ipv4_hdr)); + + ip_hdr->version_ihl = IP_VHL_DEF; + ip_hdr->type_of_service = 0; + ip_hdr->fragment_offset = 0; + ip_hdr->time_to_live = IP_DEFTTL; + ip_hdr->next_proto_id = proto; + ip_hdr->packet_id = 0; + ip_hdr->total_length = rte_cpu_to_be_16(pkt_len); + ip_hdr->src_addr = rte_cpu_to_be_32(src_addr); + ip_hdr->dst_addr = rte_cpu_to_be_32(dst_addr); + + /* + * Compute IP header checksum. + */ + ptr16 = (unaligned_uint16_t *)ip_hdr; + ip_cksum = 0; + ip_cksum += ptr16[0]; ip_cksum += ptr16[1]; + ip_cksum += ptr16[2]; ip_cksum += ptr16[3]; + ip_cksum += ptr16[4]; + ip_cksum += ptr16[6]; ip_cksum += ptr16[7]; + ip_cksum += ptr16[8]; ip_cksum += ptr16[9]; + /* + * Reduce 32 bit checksum to 16 bits and complement it. 
+ */ + ip_cksum = ((ip_cksum & 0xFFFF0000) >> 16) + + (ip_cksum & 0x0000FFFF); + ip_cksum = (ip_cksum >> 16) + (ip_cksum & 0xFFFF); + ip_cksum = (~ip_cksum) & 0x0000FFFF; + if (ip_cksum == 0) + ip_cksum = 0xFFFF; + ip_hdr->hdr_checksum = (uint16_t) ip_cksum; + + return pkt_len; +} /* * The maximum number of segments per packet is used when creating @@ -283,3 +359,118 @@ return nb_pkt; } + +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs) +{ + int i, nb_pkt = 0; + size_t eth_hdr_size; + + struct rte_mbuf *pkt_seg; + struct rte_mbuf *pkt; + + for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) { + pkt = rte_pktmbuf_alloc(mp); + if (pkt == NULL) { +nomore_mbuf: + if (nb_pkt == 0) + return -1; + break; + } + + pkt->data_len = pkt_len; + pkt_seg = pkt; + for (i = 1; i < nb_pkt_segs; i++) { + pkt_seg->next = rte_pktmbuf_alloc(mp); + if (pkt_seg->next == NULL) { + pkt->nb_segs = i; + rte_pktmbuf_free(pkt); + goto nomore_mbuf; + } + pkt_seg = pkt_seg->next; + pkt_seg->data_len = pkt_len; + } + pkt_seg->next = NULL; /* Last segment of packet. */ + + /* + * Copy headers in first packet segment(s). 
+ */ + if (vlan_enabled) + eth_hdr_size = sizeof(struct ether_hdr) + + sizeof(struct vlan_hdr); + else + eth_hdr_size = sizeof(struct ether_hdr); + + copy_buf_to_pkt(eth_hdr, eth_hdr_size, pkt, 0); + + if (ipv4) { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv4_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + default: + break; + } + } else { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv6_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + default: + break; + } + } + + /* + * Complete first mbuf of packet and append it to the + * burst of packets to be transmitted. 
+ */ + pkt->nb_segs = nb_pkt_segs; + pkt->pkt_len = pkt_len; + pkt->l2_len = eth_hdr_size; + + if (ipv4) { + pkt->vlan_tci = ETHER_TYPE_IPv4; + pkt->l3_len = sizeof(struct ipv4_hdr); + } else { + pkt->vlan_tci = ETHER_TYPE_IPv6; + pkt->l3_len = sizeof(struct ipv6_hdr); + } + + pkts_burst[nb_pkt] = pkt; + } + + return nb_pkt; +} diff --git a/test/test/packet_burst_generator.h b/test/test/packet_burst_generator.h index edc1044..3315bfa 100644 --- a/test/test/packet_burst_generator.h +++ b/test/test/packet_burst_generator.h @@ -43,7 +43,8 @@ #include <rte_arp.h> #include <rte_ip.h> #include <rte_udp.h> - +#include <rte_tcp.h> +#include <rte_sctp.h> #define IPV4_ADDR(a, b, c, d)(((a & 0xff) << 24) | ((b & 0xff) << 16) | \ ((c & 0xff) << 8) | (d & 0xff)) @@ -65,6 +66,13 @@ initialize_udp_header(struct udp_hdr *udp_hdr, uint16_t src_port, uint16_t dst_port, uint16_t pkt_data_len); +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -74,15 +82,25 @@ initialize_ipv4_header(struct ipv4_hdr *ip_hdr, uint32_t src_addr, uint32_t dst_addr, uint16_t pkt_data_len); +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto); + int generate_packet_burst(struct rte_mempool *mp, struct rte_mbuf **pkts_burst, struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, uint8_t ipv4, struct udp_hdr *udp_hdr, int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); + 
#ifdef __cplusplus } #endif - #endif /* PACKET_BURST_GENERATOR_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
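The checksum loop in `initialize_ipv4_header_proto()` above is the classic 16-bit one's-complement sum: add the header words while skipping word 5 (the checksum field itself), fold the carries back into 16 bits, and complement. A minimal standalone sketch of the same arithmetic (plain C, no DPDK headers; it omits the patch's zero-to-0xFFFF substitution, which matters for UDP checksums but is optional for the IPv4 header field):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One's-complement IPv4 header checksum over n_words 16-bit words
 * (10 for an option-less 20-byte header). Word index 5 holds the
 * checksum field and is treated as zero, as in the patch above. */
static uint16_t ipv4_cksum(const uint16_t *hdr16, size_t n_words)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i < n_words; i++) {
		if (i == 5)	/* skip the hdr_checksum field */
			continue;
		sum += hdr16[i];
	}
	/* Fold 32-bit sum to 16 bits with end-around carry; the
	 * second fold absorbs any carry produced by the first. */
	sum = (sum >> 16) + (sum & 0xFFFF);
	sum = (sum >> 16) + (sum & 0xFFFF);
	return (uint16_t)(~sum & 0xFFFF);
}
```

Verifying the full header including the stored checksum should then fold to 0xFFFF before the final complement, which is the usual receive-side check.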
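`generate_packet_burst_proto()` above computes its copy offsets from header sizes: the L3 header lands right after the Ethernet header (plus an optional VLAN tag), and the L4 header right after the fixed-size IP header. A toy offset calculator with the usual wire sizes hard-coded (14/4/20/40 bytes, standing in for the `sizeof()` values of the DPDK `ether_hdr`, `vlan_hdr`, `ipv4_hdr`, and `ipv6_hdr` structs the patch uses):

```c
#include <assert.h>
#include <stddef.h>

/* Fixed wire sizes assumed here; the patch derives these via sizeof()
 * on the corresponding DPDK header structs. */
enum { ETH_LEN = 14, VLAN_LEN = 4, IPV4_LEN = 20, IPV6_LEN = 40 };

/* Offset of the L3 header within the first segment. */
static size_t l3_offset(int vlan_enabled)
{
	return vlan_enabled ? (size_t)(ETH_LEN + VLAN_LEN) : (size_t)ETH_LEN;
}

/* Offset of the L4 (UDP/TCP/SCTP) header within the first segment. */
static size_t l4_offset(int vlan_enabled, int ipv4)
{
	return l3_offset(vlan_enabled) + (ipv4 ? IPV4_LEN : IPV6_LEN);
}
```

This mirrors why the copy calls in the patch pass `eth_hdr_size + sizeof(struct ipv4_hdr)` (or the IPv6 equivalent) as the destination offset for the protocol header.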
* Re: [dpdk-dev] [PATCH v10 3/4] test: add packet burst generator functions 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 3/4] test: add packet burst generator functions Bernard Iremonger @ 2017-10-23 16:05 ` Singh, Jasvinder 0 siblings, 0 replies; 145+ messages in thread From: Singh, Jasvinder @ 2017-10-23 16:05 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil > -----Original Message----- > From: Iremonger, Bernard > Sent: Monday, October 23, 2017 4:16 PM > To: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com> > Cc: Iremonger, Bernard <bernard.iremonger@intel.com> > Subject: [PATCH v10 3/4] test: add packet burst generator functions > > add initialize_tcp_header function > add initialize_stcp_header function > add initialize_ipv4_header_proto function add > generate_packet_burst_proto function > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- Acked-by: Jasvinder Singh <jasvinder.singh@intel.com> ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v10 4/4] test: flow classify library unit tests 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 " Bernard Iremonger ` (3 preceding siblings ...) 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 3/4] test: add packet burst generator functions Bernard Iremonger @ 2017-10-23 15:16 ` Bernard Iremonger 2017-10-23 16:06 ` Singh, Jasvinder 4 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-10-23 15:16 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by librte_flow_classify API's. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify API's: test with invalid parameters test with invalid patterns test with invalid actions test with valid parameters Initialise ipv4 udp traffic for use by the udp test for rte_flow_classifier_run. Initialise ipv4 tcp traffic for use by the tcp test for rte_flow_classifier_run. Initialise ipv4 sctp traffic for use by the sctp test for rte_flow_classifier_run. 
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 672 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 234 ++++++++++++++ 3 files changed, 907 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index dcbe363..c2dbe40 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -107,6 +107,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..9f331cd --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,672 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +struct flow_classifier *cls; + +struct flow_classifier { + struct rte_flow_classifier *cls; + uint32_t table_id[RTE_FLOW_CLASSIFY_TABLE_MAX]; + uint32_t n_tables; +}; + +struct flow_classifier_acl { + struct flow_classifier cls; +} __rte_cache_aligned; + +/* + * test functions by passing invalid or + * non-workable parameters. 
+ */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + + rule = rte_flow_classify_table_entry_add(NULL, 1, NULL, NULL, NULL, + NULL, NULL); + if (rule) { + printf("Line %i: flow_classifier_table_entry_add", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(NULL, 1, NULL); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(NULL, 1, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(NULL, 1, NULL, NULL, NULL, + NULL, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add ", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(NULL, 1, NULL); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(NULL, 1, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line 
%i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item_bad; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_bad; + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_bad; + pattern[3] = end_item_bad; + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * 
rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf("should have failed!\n"); + return -1; + } + return 0; +} + +static int +init_ipv4_udp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 UDP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = 
initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_tcp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct tcp_hdr pkt_tcp_hdr; + uint32_t src_addr = IPV4_ADDR(1, 2, 3, 4); + uint32_t dst_addr = IPV4_ADDR(5, 6, 7, 8); + uint16_t src_port = 16; + uint16_t dst_port = 17; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 TCP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_TCP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_tcp_header(&pkt_tcp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + TCP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_TCP, + &pkt_tcp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_sctp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct sctp_hdr pkt_sctp_hdr; + uint32_t src_addr = IPV4_ADDR(11, 12, 13, 14); + uint32_t dst_addr = IPV4_ADDR(15, 16, 17, 18); + uint16_t src_port = 10; + uint16_t dst_port = 11; + uint16_t pktlen; + + static 
uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 SCTP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_SCTP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_sctp_header(&pkt_sctp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + SCTP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_SCTP, + &pkt_sctp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_udp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_udp_traffic has failed!\n", + __LINE__); + return -1; 
+ } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &udp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_tcp(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_tcp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_tcp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_tcp_item_1; + pattern[2] = tcp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + 
printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &tcp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_sctp(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_sctp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_sctp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_sctp_item_1; + pattern[2] = sctp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &sctp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + struct 
rte_flow_classify_table_params cls_table_params; + struct rte_flow_classifier_params cls_params; + int socket_id; + int ret; + uint32_t size; + + socket_id = rte_eth_dev_socket_id(0); + + /* Memory allocation */ + size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct flow_classifier_acl)); + cls = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); + + cls_params.name = "flow_classifier"; + cls_params.socket_id = socket_id; + cls_params.type = RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL; + cls->cls = rte_flow_classifier_create(&cls_params); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + /* initialise table create params */ + cls_table_params.ops = &rte_table_acl_ops; + cls_table_params.arg_create = &table_acl_params; + + ret = rte_flow_classify_table_create(cls->cls, &cls_table_params, + &cls->table_id[0]); + if (ret) { + printf("Line %i: f_create has failed!\n", __LINE__); + rte_flow_classifier_free(cls->cls); + rte_free(cls); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + if (test_query_tcp() < 0) + return -1; + if (test_query_sctp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..39535cf --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,234 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel 
Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP, TCP and SCTP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* test UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.0 dst spec 2.2.2.7 + * dst mask 255.255.255.0 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +/* test TCP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.0 
dst spec 5.6.7.8 + * dst mask 255.255.255.0 / tcp src is 16 dst is 17 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0, IPv4(1, 2, 3, 4), IPv4(5, 6, 7, 8)} +}; + +static struct rte_flow_item_tcp tcp_spec_1 = { + { 16, 17, 0, 0, 0, 0, 0, 0, 0} +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +/* test SCTP pattern: + * "eth / ipv4 src spec 11.12.13.14 src mask 255.255.255.0 dst spec 15.16.17.18 + * dst mask 255.255.255.0 / sctp src is 10 dst is 11 / end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, IPv4(11, 12, 13, 14), + IPv4(15, 16, 17, 18)} +}; + +static struct rte_flow_item_sctp sctp_spec_1 = { + { 10, 11, 0, 0} +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + + +/* test actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* test attributes */ +static struct rte_flow_attr attr; + +/* test error */ +static struct rte_flow_error error; + +/* test pattern */ +static struct rte_flow_item pattern[4]; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .stats = (void *)&udp_ntuple_stats +}; + +/* flow classify data for TCP burst */ +static struct 
rte_flow_classify_ipv4_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .stats = (void *)&tcp_ntuple_stats +}; + +/* flow classify data for SCTP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .stats = (void *)&sctp_ntuple_stats +}; +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v10 4/4] test: flow classify library unit tests 2017-10-23 15:16 ` [dpdk-dev] [PATCH v10 4/4] test: flow classify library unit tests Bernard Iremonger @ 2017-10-23 16:06 ` Singh, Jasvinder 0 siblings, 0 replies; 145+ messages in thread From: Singh, Jasvinder @ 2017-10-23 16:06 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil > -----Original Message----- > From: Iremonger, Bernard > Sent: Monday, October 23, 2017 4:16 PM > To: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com; Singh, > Jasvinder <jasvinder.singh@intel.com> > Cc: Iremonger, Bernard <bernard.iremonger@intel.com> > Subject: [PATCH v10 4/4] test: flow classify library unit tests > > Add flow_classify_autotest program. > > Set up IPv4 ACL field definitions. > Create table_acl for use by librte_flow_classify API's. > Create an mbuf pool for use by rte_flow_classify_query. > > For each of the librte_flow_classify API's: > test with invalid parameters > test with invalid patterns > test with invalid actions > test with valid parameters > > Initialise ipv4 udp traffic for use by the udp test for rte_flow_classifier_run. > > Initialise ipv4 tcp traffic for use by the tcp test for rte_flow_classifier_run. > > Initialise ipv4 sctp traffic for use by the sctp test for rte_flow_classifier_run. > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- Acked-by: Jasvinder Singh <jasvinder.singh@intel.com> ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v9 1/4] librte_flow_classify: add flow classify library 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 " Bernard Iremonger 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 " Bernard Iremonger @ 2017-10-22 13:32 ` Bernard Iremonger 2017-10-23 13:21 ` Singh, Jasvinder 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger ` (2 subsequent siblings) 4 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-10-22 13:32 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following APIs are implemented in the librte_flow_classify library: rte_flow_classifier_create rte_flow_classifier_free rte_flow_classifier_query rte_flow_classify_table_create rte_flow_classify_table_entry_add rte_flow_classify_table_entry_delete The following librte_table APIs are used: f_create to create a table. f_add to add a rule to the table. f_del to delete a rule from the table. f_free to free a table. f_lookup to match packets with the rules. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- MAINTAINERS | 7 + config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 685 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 285 +++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 ++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 12 + mk/rte.app.mk | 1 + 13 files changed, 1673 insertions(+) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/MAINTAINERS b/MAINTAINERS index 2a58378..0981793 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -735,6 +735,13 @@ F: doc/guides/prog_guide/pdump_lib.rst F: app/pdump/ F: doc/guides/tools/pdump.rst +Flow classify +M: Bernard Iremonger <bernard.iremonger@intel.com> +F: lib/librte_flow_classify/ +F: test/test/test_flow_classify* +F: examples/flow_classify/ +F: doc/guides/sample_app_ug/flow_classify.rst +F: doc/guides/prog_guide/flow_classify_lib.rst Packet Framework ---------------- diff --git a/config/common_base b/config/common_base index d9471e8..e1079aa 100644 --- a/config/common_base +++ b/config/common_base @@ -707,6 +707,12 @@ CONFIG_RTE_LIBRTE_GSO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md 
b/doc/api/doxy-api-index.md index 990815f..e4468d0 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -111,6 +111,7 @@ The public API headers are grouped by topics: [ACL] (@ref rte_acl.h), [EFD] (@ref rte_efd.h), [member] (@ref rte_member.h) + [flow_classify] (@ref rte_flow_classify.h), - **QoS**: [metering] (@ref rte_meter.h), diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 9e9fa56..9edb6fd 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf @@ -48,6 +48,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_gso \ lib/librte_hash \ diff --git a/lib/Makefile b/lib/Makefile index 86d475f..aba2593 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -83,6 +83,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index 2fa1199..67209ae 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -88,6 +88,7 @@ struct rte_logs { #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */ +#define RTE_LOGTYPE_CLASSIFY 21 /**< Log related to flow classify. */ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. 
*/ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..794b61e --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,685 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; +static uint32_t unique_id = 1; + + +struct rte_flow_classify_table_entry { + /* meta-data for classify rule */ + uint32_t rule_id; + + /* Start of table entry area for user defined meta data */ + __extension__ uint8_t meta_data[0]; +}; + +struct rte_table { + /* Input parameters */ + struct rte_table_ops ops; + uint32_t entry_size; + enum rte_flow_classify_table_type type; + + /* Handle to the low-level table object */ + void *h_table; +}; + +#define RTE_FLOW_CLASSIFIER_MAX_NAME_SZ 256 + +struct rte_flow_classifier { + /* Input parameters */ + char name[RTE_FLOW_CLASSIFIER_MAX_NAME_SZ]; + int socket_id; + enum rte_flow_classify_table_type type; + + /* Internal tables */ + struct rte_table tables[RTE_FLOW_CLASSIFY_TABLE_MAX]; + uint32_t num_tables; + uint16_t nb_pkts; + struct rte_flow_classify_table_entry + *entries[RTE_PORT_IN_BURST_SIZE_MAX]; +} __rte_cache_aligned; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct acl_keys { + struct rte_table_acl_rule_add_params key_add; /* add key */ + struct rte_table_acl_rule_delete_params key_del; /* delete key */ +}; + +struct classify_rules { + enum rte_flow_classify_rule_type type; + union { + 
struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; + } u; +}; + +struct rte_flow_classify_rule { + uint32_t id; /* unique ID of classify rule */ + struct rte_flow_action action; /* action when match found */ + struct classify_rules rules; /* union of rules */ + union { + struct acl_keys key; + } u; + int key_found; /* rule key found in table */ + void *entry; /* pointer to buffer to hold rule meta data */ + void *entry_ptr; /* handle to the table entry for rule meta data */ +}; + +static int +rte_flow_classify_parse_flow( + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + free(items); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void 
+print_acl_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + printf("%s: 0x%02hhx/0x%hhx ", __func__, + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_acl_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("%s: 0x%02hhx/0x%hhx ", __func__, + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static int +rte_flow_classifier_check_params(struct rte_flow_classifier_params *params) +{ + if (params == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for parameter params\n", __func__); + 
return -EINVAL; + } + + /* name */ + if (params->name == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for parameter name\n", __func__); + return -EINVAL; + } + + /* socket */ + if ((params->socket_id < 0) || + (params->socket_id >= RTE_MAX_NUMA_NODES)) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for parameter socket_id\n", + __func__); + return -EINVAL; + } + + return 0; +} + +struct rte_flow_classifier * +rte_flow_classifier_create(struct rte_flow_classifier_params *params) +{ + struct rte_flow_classifier *cls; + int ret; + + /* Check input parameters */ + ret = rte_flow_classifier_check_params(params); + if (ret != 0) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier params check failed (%d)\n", + __func__, ret); + return NULL; + } + + /* Allocate memory for the flow classifier */ + cls = rte_zmalloc_socket("FLOW_CLASSIFIER", + sizeof(struct rte_flow_classifier), + RTE_CACHE_LINE_SIZE, params->socket_id); + + if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier memory allocation failed\n", + __func__); + return NULL; + } + + /* Save input parameters */ + snprintf(cls->name, RTE_FLOW_CLASSIFIER_MAX_NAME_SZ, "%s", + params->name); + cls->socket_id = params->socket_id; + cls->type = params->type; + + /* Initialize flow classifier internal data structure */ + cls->num_tables = 0; + + return cls; +} + +static void +rte_flow_classify_table_free(struct rte_table *table) +{ + if (table->ops.f_free != NULL) + table->ops.f_free(table->h_table); +} + +int +rte_flow_classifier_free(struct rte_flow_classifier *cls) +{ + uint32_t i; + + /* Check input parameters */ + if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: rte_flow_classifier parameter is NULL\n", + __func__); + return -EINVAL; + } + + /* Free tables */ + for (i = 0; i < cls->num_tables; i++) { + struct rte_table *table = &cls->tables[i]; + + rte_flow_classify_table_free(table); + } + + /* Free flow classifier memory */ + rte_free(cls); + + return 0; +} + +static int 
+rte_table_check_params(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id) +{ + if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier parameter is NULL\n", + __func__); + return -EINVAL; + } + if (params == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: params parameter is NULL\n", + __func__); + return -EINVAL; + } + if (table_id == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: table_id parameter is NULL\n", + __func__); + return -EINVAL; + } + + /* ops */ + if (params->ops == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: params->ops is NULL\n", + __func__); + return -EINVAL; + } + + if (params->ops->f_create == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: f_create function pointer is NULL\n", __func__); + return -EINVAL; + } + + if (params->ops->f_lookup == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: f_lookup function pointer is NULL\n", __func__); + return -EINVAL; + } + + /* Do we have room for one more table? */ + if (cls->num_tables == RTE_FLOW_CLASSIFY_TABLE_MAX) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for num_tables parameter\n", + __func__); + return -EINVAL; + } + + return 0; +} + +int +rte_flow_classify_table_create(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id) +{ + struct rte_table *table; + void *h_table; + uint32_t entry_size, id; + int ret; + + /* Check input arguments */ + ret = rte_table_check_params(cls, params, table_id); + if (ret != 0) + return ret; + + id = cls->num_tables; + table = &cls->tables[id]; + + /* calculate table entry size */ + entry_size = sizeof(struct rte_flow_classify_table_entry) + + params->table_metadata_size; + + /* Create the table */ + h_table = params->ops->f_create(params->arg_create, cls->socket_id, + entry_size); + if (h_table == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: Table creation failed\n", __func__); + return -EINVAL; + } + + /* Commit current table to the classifier */ + cls->num_tables++; + *table_id = id; + + /* 
Save input parameters */ + memcpy(&table->ops, params->ops, sizeof(struct rte_table_ops)); + + /* Initialize table internal data structure */ + table->entry_size = entry_size; + table->h_table = h_table; + + return 0; +} + +static struct rte_flow_classify_rule * +allocate_acl_ipv4_5tuple_rule(void) +{ + struct rte_flow_classify_rule *rule; + + rule = malloc(sizeof(struct rte_flow_classify_rule)); + if (!rule) + return rule; + + memset(rule, 0, sizeof(struct rte_flow_classify_rule)); + rule->id = unique_id++; + rule->rules.type = RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE; + + memcpy(&rule->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + /* key add values */ + rule->u.key.key_add.priority = ntuple_filter.priority; + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + rule->rules.u.ipv4_5tuple.proto = ntuple_filter.proto; + rule->rules.u.ipv4_5tuple.proto_mask = ntuple_filter.proto_mask; + + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + rule->rules.u.ipv4_5tuple.src_ip_mask = ntuple_filter.src_ip_mask; + rule->rules.u.ipv4_5tuple.src_ip = ntuple_filter.src_ip; + + rule->u.key.key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + rule->u.key.key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + rule->rules.u.ipv4_5tuple.dst_ip_mask = ntuple_filter.dst_ip_mask; + rule->rules.u.ipv4_5tuple.dst_ip = ntuple_filter.dst_ip; + + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + rule->rules.u.ipv4_5tuple.src_port_mask = ntuple_filter.src_port_mask; + rule->rules.u.ipv4_5tuple.src_port = 
ntuple_filter.src_port; + + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + rule->rules.u.ipv4_5tuple.dst_port_mask = ntuple_filter.dst_port_mask; + rule->rules.u.ipv4_5tuple.dst_port = ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_acl_ipv4_key_add(&rule->u.key.key_add); +#endif + + /* key delete values */ + memcpy(&rule->u.key.key_del.field_value[PROTO_FIELD_IPV4], + &rule->u.key.key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_acl_ipv4_key_delete(&rule->u.key.key_del); +#endif + return rule; +} + +struct rte_flow_classify_rule * +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, + uint32_t table_id, + int *key_found, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_classify_rule *rule; + struct rte_flow_classify_table_entry *table_entry; + int ret; + + if (!error) + return NULL; + + if (!cls) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL classifier."); + return NULL; + } + + if (table_id >= cls->num_tables) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid table_id."); + return NULL; + } + + if (key_found == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL key_found."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + 
return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_parse_flow(attr, pattern, actions, error); + if (ret < 0) + return NULL; + + switch (cls->type) { + case RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL: + rule = allocate_acl_ipv4_5tuple_rule(); + if (!rule) + return NULL; + break; + default: + return NULL; + } + + rule->entry = malloc(sizeof(struct rte_flow_classify_table_entry)); + if (!rule->entry) { + free(rule); + return NULL; + } + + table_entry = rule->entry; + table_entry->rule_id = rule->id; + + if (cls->tables[table_id].ops.f_add != NULL) { + ret = cls->tables[table_id].ops.f_add( + cls->tables[table_id].h_table, + &rule->u.key.key_add, + rule->entry, + &rule->key_found, + &rule->entry_ptr); + if (ret) { + free(rule->entry); + free(rule); + return NULL; + } + *key_found = rule->key_found; + } + return rule; +} + +int +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_flow_classify_rule *rule) +{ + int ret = -EINVAL; + + if (!cls || !rule || table_id >= cls->num_tables) + return ret; + + if (cls->tables[table_id].ops.f_delete != NULL) + ret = cls->tables[table_id].ops.f_delete( + cls->tables[table_id].h_table, + &rule->u.key.key_del, + &rule->key_found, + &rule->entry); + + return ret; +} + +static int +flow_classifier_run(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts) +{ + int ret = -EINVAL; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + + if (!cls || !pkts || nb_pkts == 0 || table_id >= cls->num_tables) + return ret; + + if (cls->tables[table_id].ops.f_lookup != NULL) { + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = cls->tables[table_id].ops.f_lookup( + cls->tables[table_id].h_table, + pkts, pkts_mask, &lookup_hit_mask, + (void **)cls->entries); + + if (!ret && lookup_hit_mask) + cls->nb_pkts = nb_pkts; + else + cls->nb_pkts = 0; + } + + return ret; +} + +static int +action_apply(struct rte_flow_classifier 
*cls, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats) +{ + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; + uint64_t count = 0; + int i; + int ret = -ENODATA; + + switch (rule->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + for (i = 0; i < cls->nb_pkts; i++) { + if (rule->id == cls->entries[i]->rule_id) + count++; + } + if (count) { + ret = 0; + ntuple_stats = + (struct rte_flow_classify_ipv4_5tuple_stats *) + stats->stats; + ntuple_stats->counter1 = count; + ntuple_stats->ipv4_5tuple = rule->rules.u.ipv4_5tuple; + } + break; + default: + ret = -ENOTSUP; + break; + } + + return ret; +} + +int +rte_flow_classifier_query(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats) +{ + int ret = -EINVAL; + + if (!cls || !rule || !stats || !pkts || nb_pkts == 0 || + table_id >= cls->num_tables) + return ret; + + ret = flow_classifier_run(cls, table_id, pkts, nb_pkts); + if (!ret) + ret = action_apply(cls, rule, stats); + return ret; +} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..3c97986 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,285 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * Application should define the flow and measurement criteria (action) for it. + * + * The Library doesn't maintain any flow records itself, instead flow + * information is returned to upper layer only for given packets. + * + * It is application's responsibility to call rte_flow_classifier_query() + * for a burst of packets, just after receiving them or before transmitting + * them. + * Application should provide the flow type interested in, measurement to apply + * to that flow in rte_flow_classify_table_entry_add() API, and should provide + * the rte_flow_classifier object and storage to put results in for the + * rte_flow_classifier_query() API. + * + * Usage: + * - application calls rte_flow_classifier_create() to create an + * rte_flow_classifier object. 
+ * - application calls rte_flow_classify_table_create() to create a table + * in the rte_flow_classifier object. + * - application calls rte_flow_classify_table_entry_add() to add a rule to + * the table in the rte_flow_classifier object. + * - application calls rte_flow_classifier_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * match packet information to flow information with some measurements. + * - rte_flow_classifier object can be destroyed when it is no longer needed + * with rte_flow_classifier_free() + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> +#include <rte_table_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +/** Opaque data type for flow classifier */ +struct rte_flow_classifier; + +/** Opaque data type for flow classify rule */ +struct rte_flow_classify_rule; + +/** Flow classify rule type */ +enum rte_flow_classify_rule_type { + /** no type */ + RTE_FLOW_CLASSIFY_RULE_TYPE_NONE, + /** IPv4 5tuple type */ + RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE, +}; + +/** Flow classify table type */ +enum rte_flow_classify_table_type { + /** no type */ + RTE_FLOW_CLASSIFY_TABLE_TYPE_NONE, + /** ACL type */ + RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL, +}; + +/** + * Maximum number of tables allowed for any Flow Classifier instance. + * The value of this parameter cannot be changed. 
+ */
+#define RTE_FLOW_CLASSIFY_TABLE_MAX	64
+
+/** Parameters for flow classifier creation */
+struct rte_flow_classifier_params {
+	/** flow classifier name */
+	const char *name;
+
+	/** CPU socket ID where memory for the flow classifier and its */
+	/** elements (tables) should be allocated */
+	int socket_id;
+
+	/** Table type */
+	enum rte_flow_classify_table_type type;
+};
+
+/** Parameters for table creation */
+struct rte_flow_classify_table_params {
+	/** Table operations (specific to each table type) */
+	struct rte_table_ops *ops;
+
+	/** Opaque param to be passed to the table create operation */
+	void *arg_create;
+
+	/** Memory size to be reserved per classifier object entry for */
+	/** storing meta data */
+	uint32_t table_metadata_size;
+};
+
+/** IPv4 5-tuple data */
+struct rte_flow_classify_ipv4_5tuple {
+	uint32_t dst_ip;	/**< Destination IP address in big endian. */
+	uint32_t dst_ip_mask;	/**< Mask of destination IP address. */
+	uint32_t src_ip;	/**< Source IP address in big endian. */
+	uint32_t src_ip_mask;	/**< Mask of source IP address. */
+	uint16_t dst_port;	/**< Destination port in big endian. */
+	uint16_t dst_port_mask;	/**< Mask of destination port. */
+	uint16_t src_port;	/**< Source Port in big endian. */
+	uint16_t src_port_mask;	/**< Mask of source port. */
+	uint8_t proto;		/**< L4 protocol. */
+	uint8_t proto_mask;	/**< Mask of L4 protocol. */
+};
+
+/**
+ * Flow stats
+ *
+ * For the count action, stats can be returned by the query API.
+ *
+ * Storage for stats is provided by application.
+ */ +struct rte_flow_classify_stats { + void *stats; +}; + +struct rte_flow_classify_ipv4_5tuple_stats { + /** count of packets that match IPv4 5tuple pattern */ + uint64_t counter1; + /** IPv4 5tuple data */ + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; +}; + +/** + * Flow classifier create + * + * @param params + * Parameters for flow classifier creation + * @return + * Handle to flow classifier instance on success or NULL otherwise + */ +struct rte_flow_classifier * +rte_flow_classifier_create(struct rte_flow_classifier_params *params); + +/** + * Flow classifier free + * + * @param cls + * Handle to flow classifier instance + * @return + * 0 on success, error code otherwise + */ +int +rte_flow_classifier_free(struct rte_flow_classifier *cls); + +/** + * Flow classify table create + * + * @param cls + * Handle to flow classifier instance + * @param params + * Parameters for flow_classify table creation + * @param table_id + * Table ID. Valid only within the scope of table IDs of the current + * classifier. Only returned after a successful invocation. + * @return + * 0 on success, error code otherwise + */ +int +rte_flow_classify_table_create(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id); + +/** + * Add a flow classify rule to the flow_classifer table. + * + * @param[in] cls + * Flow classifier handle + * @param[in] table_id + * id of table + * @param[out] key_found + * returns 1 if key present already, 0 otherwise. + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END pattern item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. 
+ */ +struct rte_flow_classify_rule * +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, + uint32_t table_id, + int *key_found, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Delete a flow classify rule from the flow_classifer table. + * + * @param[in] cls + * Flow classifier handle + * @param[in] table_id + * id of table + * @param[in] rule + * Flow classify rule + * @return + * 0 on success, error code otherwise. + */ +int +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_flow_classify_rule *rule); + +/** + * Query flow classifier for given rule. + * + * @param[in] cls + * Flow classifier handle + * @param[in] table_id + * id of table + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] rule + * Flow classify rule + * @param[in] stats + * Flow classify stats + * + * @return + * 0 on success, error code otherwise. + */ +int +rte_flow_classifier_query(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..921a852 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + 
uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do {\ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++;\ + item = pattern + index;\ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do {\ + act = actions + index;\ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++;\ + act = actions + index;\ + } \ + } while (0) + +/** + * Please aware there's an assumption 
for all the parsers. + * rte_flow_item is using big endian, rte_flow_attr and + * rte_flow_action are using CPU order. + * Because the pattern is used to describe the packets, + * normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is a n-tuple rule. + * And get the n-tuple filter info BTW. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP or TCP. + * The next not void item must be END. + * action: + * The first not void action should be QUEUE. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * other members in mask and spec should set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + if (!attr) { + rte_flow_error_set(error, EINVAL, + 
RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -EINVAL; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -EINVAL; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -EINVAL; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -EINVAL; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -EINVAL; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -EINVAL; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -EINVAL; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + item, "Not supported action."); + return -EINVAL; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -EINVAL; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only support ingress."); + return -EINVAL; + } + + /* not supported */ + if (attr->egress) { + memset(filter, 0, 
sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Not support egress."); + return -EINVAL; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Error priority."); + return -EINVAL; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..f7695cb --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,12 @@ +EXPERIMENTAL { + global: + + rte_flow_classifier_create; + rte_flow_classifier_free; + rte_flow_classifier_query; + 
rte_flow_classify_table_create; + rte_flow_classify_table_entry_add; + rte_flow_classify_table_entry_delete; + + local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 8192b98..482656c 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
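The attribute-parsing hunk earlier in this patch validates the rte_flow priority before storing it in the 16-bit ntuple filter field: values that do not fit in 16 bits are rejected, and anything beyond the supported minimum is clamped. A minimal standalone sketch of that logic follows; the FLOW_RULE_MIN_PRIORITY and FLOW_RULE_MAX_PRIORITY values are assumptions for illustration, since their definitions are not part of this excerpt.

```c
#include <errno.h>
#include <stdint.h>

/* Assumed values; the real macros live in the library's parser sources,
 * which are not shown in this patch excerpt. */
#define FLOW_RULE_MIN_PRIORITY 8
#define FLOW_RULE_MAX_PRIORITY 7

/* Mirrors the check in the hunk above: reject priorities that do not
 * fit in 16 bits, then clamp anything beyond the supported minimum. */
static int
parse_priority(uint32_t attr_priority, uint16_t *prio)
{
	if (attr_priority > 0xFFFF)
		return -EINVAL;
	*prio = (uint16_t)attr_priority;
	if (attr_priority > FLOW_RULE_MIN_PRIORITY)
		*prio = FLOW_RULE_MAX_PRIORITY;
	return 0;
}
```

With these assumed values, a priority of 3 is kept as-is, 100 is clamped to 7, and 0x10000 is rejected with -EINVAL.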
* Re: [dpdk-dev] [PATCH v9 1/4] librte_flow_classify: add flow classify library 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 1/4] librte_flow_classify: add flow classify library Bernard Iremonger @ 2017-10-23 13:21 ` Singh, Jasvinder 2017-10-23 13:37 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Singh, Jasvinder @ 2017-10-23 13:21 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil <snip> > --- a/lib/Makefile > +++ b/lib/Makefile > @@ -83,6 +83,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power > DEPDIRS-librte_power := librte_eal > DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter > DEPDIRS-librte_meter := librte_eal > +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify > +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net > +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port Please check dependency, I think you don't need librte_port, librte_eal, librte_ether > DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched > DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net > DEPDIRS-librte_sched += librte_timer > diff --git a/lib/librte_eal/common/include/rte_log.h > b/lib/librte_eal/common/include/rte_log.h > index 2fa1199..67209ae 100644 > --- a/lib/librte_eal/common/include/rte_log.h > +++ b/lib/librte_eal/common/include/rte_log.h > @@ -88,6 +88,7 @@ struct rte_logs { > #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ > #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ > #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */ > +#define RTE_LOGTYPE_CLASSIFY 21 /**< Log related to flow classify. 
*/ > <snip> > +static int > +flow_classifier_run(struct rte_flow_classifier *cls, > + uint32_t table_id, > + struct rte_mbuf **pkts, > + const uint16_t nb_pkts) > +{ > + int ret = -EINVAL; > + uint64_t pkts_mask; > + uint64_t lookup_hit_mask; > + > + if (!cls || !pkts || nb_pkts == 0 || table_id >= cls->num_tables) > + return ret; > + > + if (cls->tables[table_id].ops.f_lookup != NULL) { > + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); > + ret = cls->tables[table_id].ops.f_lookup( > + cls->tables[table_id].h_table, > + pkts, pkts_mask, &lookup_hit_mask, > + (void **)cls->entries); > + > + if (!ret && lookup_hit_mask) > + cls->nb_pkts = nb_pkts; > + else > + cls->nb_pkts = 0; > + } > + > + return ret; > +} Remove checks in the above function as these are already checked in query function below. > +static int > +action_apply(struct rte_flow_classifier *cls, > + struct rte_flow_classify_rule *rule, > + struct rte_flow_classify_stats *stats) > +{ > + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; > + uint64_t count = 0; > + int i; > + int ret = -ENODATA; > + > + switch (rule->action.type) { > + case RTE_FLOW_ACTION_TYPE_COUNT: > + for (i = 0; i < cls->nb_pkts; i++) { > + if (rule->id == cls->entries[i]->rule_id) > + count++; > + } > + if (count) { > + ret = 0; > + ntuple_stats = > + (struct rte_flow_classify_ipv4_5tuple_stats > *) > + stats->stats; > + ntuple_stats->counter1 = count; > + ntuple_stats->ipv4_5tuple = rule- > >rules.u.ipv4_5tuple; > + } > + break; > + default: > + ret = -ENOTSUP; > + break; > + } > + > + return ret; > +} > + > +int > +rte_flow_classifier_query(struct rte_flow_classifier *cls, > + uint32_t table_id, > + struct rte_mbuf **pkts, > + const uint16_t nb_pkts, > + struct rte_flow_classify_rule *rule, > + struct rte_flow_classify_stats *stats) > +{ > + int ret = -EINVAL; > + > + if (!cls || !rule || !stats || !pkts || nb_pkts == 0 || > + table_id >= cls->num_tables) > + return ret; > + > + ret = flow_classifier_run(cls, table_id, pkts, 
nb_pkts); > + if (!ret) > + ret = action_apply(cls, rule, stats); > + return ret; > +} Also, there are some compilation warnings as below; Failed Build #1: OS: FreeBSD10.3_64 Target: x86_64-native-bsdapp-clang, x86_64-native-bsdapp-gcc SYMLINK-FILE include/rte_flow_classify.h/home/patchWorkOrg/compilation/lib/librte_flow_classify/rte_flow_classify.c:642:13: error: use of undeclared identifier 'ENODATA' int ret = -ENODATA; Thanks, Jasvinder ^ permalink raw reply [flat|nested] 145+ messages in thread
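The FreeBSD build failure above stems from ENODATA not being defined in FreeBSD's <errno.h>. The series ultimately dropped ENODATA in favor of another errno value; an alternative portability workaround, sketched here, is a guarded fallback definition (the ENOENT stand-in is an arbitrary choice for illustration, not what the patch does).

```c
#include <errno.h>

/* FreeBSD's <errno.h> has no ENODATA; a guarded fallback lets the same
 * code compile on both Linux and FreeBSD. ENOENT is an arbitrary
 * stand-in chosen only for this sketch. */
#ifndef ENODATA
#define ENODATA ENOENT
#endif

/* Example use: signal "no matching data" from a lookup-style helper. */
static int
lookup_status(int hit)
{
	return hit ? 0 : -ENODATA;
}
```

The guard keeps the usual semantics on platforms that define ENODATA and only substitutes the fallback where it is missing.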
* Re: [dpdk-dev] [PATCH v9 1/4] librte_flow_classify: add flow classify library 2017-10-23 13:21 ` Singh, Jasvinder @ 2017-10-23 13:37 ` Iremonger, Bernard 0 siblings, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-23 13:37 UTC (permalink / raw) To: Singh, Jasvinder, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil Cc: Iremonger, Bernard Hi Jasvinder, Thanks for reviewing. > -----Original Message----- > From: Singh, Jasvinder > Sent: Monday, October 23, 2017 2:22 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com>; dev@dpdk.org; > Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > Subject: RE: [PATCH v9 1/4] librte_flow_classify: add flow classify library > > <snip> > > --- a/lib/Makefile > > +++ b/lib/Makefile > > @@ -83,6 +83,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += > librte_power > > DEPDIRS-librte_power := librte_eal > > DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS- > librte_meter > > := librte_eal > > +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify > > +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net > > +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port > > Please check dependency, I think you don't need librte_port, librte_eal, > librte_ether I will check dependency > > DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS- > librte_sched > > := librte_eal librte_mempool librte_mbuf librte_net > > DEPDIRS-librte_sched += librte_timer diff --git > > a/lib/librte_eal/common/include/rte_log.h > > b/lib/librte_eal/common/include/rte_log.h > > index 2fa1199..67209ae 100644 > > --- a/lib/librte_eal/common/include/rte_log.h > > +++ b/lib/librte_eal/common/include/rte_log.h > > @@ -88,6 +88,7 @@ struct rte_logs { > > #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. 
*/ > > #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ > > #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */ > > +#define RTE_LOGTYPE_CLASSIFY 21 /**< Log related to flow classify. > > +*/ > > > > <snip> > > > +static int > > +flow_classifier_run(struct rte_flow_classifier *cls, > > + uint32_t table_id, > > + struct rte_mbuf **pkts, > > + const uint16_t nb_pkts) > > +{ > > + int ret = -EINVAL; > > + uint64_t pkts_mask; > > + uint64_t lookup_hit_mask; > > + > > + if (!cls || !pkts || nb_pkts == 0 || table_id >= cls->num_tables) > > + return ret; > > + > > + if (cls->tables[table_id].ops.f_lookup != NULL) { > > + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); > > + ret = cls->tables[table_id].ops.f_lookup( > > + cls->tables[table_id].h_table, > > + pkts, pkts_mask, &lookup_hit_mask, > > + (void **)cls->entries); > > + > > + if (!ret && lookup_hit_mask) > > + cls->nb_pkts = nb_pkts; > > + else > > + cls->nb_pkts = 0; > > + } > > + > > + return ret; > > +} > > Remove checks in the above function as these are already checked in query > function below. Ok, will do. 
> > +static int > > +action_apply(struct rte_flow_classifier *cls, > > + struct rte_flow_classify_rule *rule, > > + struct rte_flow_classify_stats *stats) { > > + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; > > + uint64_t count = 0; > > + int i; > > + int ret = -ENODATA; > > + > > + switch (rule->action.type) { > > + case RTE_FLOW_ACTION_TYPE_COUNT: > > + for (i = 0; i < cls->nb_pkts; i++) { > > + if (rule->id == cls->entries[i]->rule_id) > > + count++; > > + } > > + if (count) { > > + ret = 0; > > + ntuple_stats = > > + (struct rte_flow_classify_ipv4_5tuple_stats > > *) > > + stats->stats; > > + ntuple_stats->counter1 = count; > > + ntuple_stats->ipv4_5tuple = rule- > > >rules.u.ipv4_5tuple; > > + } > > + break; > > + default: > > + ret = -ENOTSUP; > > + break; > > + } > > + > > + return ret; > > +} > > + > > +int > > +rte_flow_classifier_query(struct rte_flow_classifier *cls, > > + uint32_t table_id, > > + struct rte_mbuf **pkts, > > + const uint16_t nb_pkts, > > + struct rte_flow_classify_rule *rule, > > + struct rte_flow_classify_stats *stats) { > > + int ret = -EINVAL; > > + > > + if (!cls || !rule || !stats || !pkts || nb_pkts == 0 || > > + table_id >= cls->num_tables) > > + return ret; > > + > > + ret = flow_classifier_run(cls, table_id, pkts, nb_pkts); > > + if (!ret) > > + ret = action_apply(cls, rule, stats); > > + return ret; > > +} > > > Also, there are some compilation warnings as below; > > Failed Build #1: > OS: FreeBSD10.3_64 > Target: x86_64-native-bsdapp-clang, x86_64-native-bsdapp-gcc > SYMLINK-FILE > include/rte_flow_classify.h/home/patchWorkOrg/compilation/lib/librte_flow_classify/rte_flow_classify.c:642:13: error: use of undeclared identifier > 'ENODATA' > int ret = -ENODATA; Ok, I will replace -ENODATA with -EINVAL > Thanks, > Jasvinder I will send a v10 patch set. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v9 2/4] examples/flow_classify: flow classify sample application 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 " Bernard Iremonger 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 " Bernard Iremonger 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 1/4] librte_flow_classify: add flow classify library Bernard Iremonger @ 2017-10-22 13:32 ` Bernard Iremonger 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 3/4] test: add packet burst generator functions Bernard Iremonger 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-22 13:32 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify API's: rte_flow_classifier_create rte_flow_classifier_query rte_flow_classify_table_create rte_flow_classify_table_entry_add rte_flow_classify_table_entry_delete It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 850 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 921 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..a7bcbae --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,850 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 + +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define FLOW_CLASSIFIER_NAME_SIZE 64 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +struct flow_classifier { + struct rte_flow_classifier *cls; + uint32_t table_id[RTE_FLOW_CLASSIFY_TABLE_MAX]; +}; + +struct flow_classifier_acl { + struct flow_classifier cls; +} __rte_cache_aligned; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static int num_classify_rules; +static struct rte_flow_classify_rule *rules[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify_ipv4_5tuple_stats ntuple_stats; +static struct rte_flow_classify_stats classify_stats = { + .stats = (void **)&ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add functions + */ + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. 
*/ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. */ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port classifying the packets and writing to an output port. 
+ */ +static __attribute__((noreturn)) void +lcore_main(struct flow_classifier *cls_app) +{ + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i = 0; + + ret = rte_flow_classify_table_entry_delete(cls_app->cls, + cls_app->table_id[0], rules[7]); + if (ret) + printf("table_entry_delete failed [7] %d\n\n", ret); + else + printf("table_entry_delete succeeded [7]\n\n"); + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) + printf("\n\n"); + printf("WARNING: port %u is on remote NUMA node\n", + port); + printf("to polling thread.\n"); + printf("Performance will not be optimal.\n"); + + printf("\nCore %u forwarding packets. ", + rte_lcore_id()); + printf("[Ctrl+C to quit]\n"); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + /* Get burst of RX packets, from first port of pair. */ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (rules[i]) { + ret = rte_flow_classifier_query( + cls_app->cls, + cls_app->table_id[0], + bufs, nb_rx, rules[i], + &classify_stats); + if (ret) + printf( + "rule [%d] query failed ret [%d]\n\n", + i, ret); + else { + printf( + "rule [%d] counter1=%lu\n", + i, ntuple_stats.counter1); + + printf("proto = %d\n", + ntuple_stats.ipv4_5tuple.proto); + } + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. 
*/ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. + * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +get_cb_field(char **in, uint32_t *fd, int base, unsigned long lim, + char dlm) +{ + unsigned long val; + char *end; + + errno = 0; + val = strtoul(*in, &end, base); + if (errno != 0 || end[0] != dlm || val > lim) + return -EINVAL; + *fd = (uint32_t)val; + *in = end + 1; + return 0; +} + +static int +parse_ipv4_net(char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint32_t a, b, c, d, m; + + if (get_cb_field(&in, &a, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &b, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &c, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &d, 0, UINT8_MAX, '/')) + return -EINVAL; + if (get_cb_field(&in, &m, 0, sizeof(uint32_t) * CHAR_BIT, 0)) + return -EINVAL; + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + uint32_t temp; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + 
flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + if (get_cb_field(&in[CB_FLD_SRC_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_SRC_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_DST_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_DST_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, '/')) + return -EINVAL; + ntuple_filter->proto = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, 0)) + return -EINVAL; + ntuple_filter->proto_mask = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PRIORITY], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->priority = (uint16_t)temp; + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct rte_eth_ntuple_filter *ntuple_filter, + struct flow_classifier *cls_app) +{ + int ret = -1; + int key_found; + struct rte_flow_error 
error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify_rule *rule; + uint8_t ipv4_proto; + + if (num_classify_rules >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: classify rule capacity %d reached\n", + num_classify_rules); + return ret; + } + + /* set up parameters for validate and add */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case IPPROTO_UDP: + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + 
udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + break; + case IPPROTO_TCP: + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case IPPROTO_SCTP: + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + return ret; + } + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add( + cls_app->cls, 
cls_app->table_id[0], &key_found, + &attr, pattern_ipv4_5tuple, actions, &error); + if (rule == NULL) { + printf("table entry add failed ipv4_proto = %u\n", + ipv4_proto); + ret = -1; + return ret; + } + + rules[num_classify_rules] = rule; + num_classify_rules++; + return 0; +} + +static int +add_rules(const char *rule_path, struct flow_classifier *cls_app) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter, cls_app) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s usage:\n", prgname); + printf("[EAL options] -- --"OPTION_RULE_IPV4"=FILE: "); + printf("specify the ipv4 rules file.\n"); + printf("Each rule occupies one line in the file.\n"); +} + +/* Parse the argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + {NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + 
parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* + * The main function, which does initialization and calls the lcore_main + * function. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + struct rte_flow_classify_table_params cls_table_params; + struct flow_classifier *cls_app; + struct rte_flow_classifier_params cls_params; + uint32_t size; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. */ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. 
Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* Memory allocation */ + size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct flow_classifier_acl)); + cls_app = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); + if (cls_app == NULL) + rte_exit(EXIT_FAILURE, "Cannot allocate classifier memory\n"); + + cls_params.name = "flow_classifier"; + cls_params.socket_id = socket_id; + cls_params.type = RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL; + + cls_app->cls = rte_flow_classifier_create(&cls_params); + if (cls_app->cls == NULL) { + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Cannot create classifier\n"); + } + + /* initialise ACL table params */ + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + /* initialise table create params */ + cls_table_params.ops = &rte_table_acl_ops; + cls_table_params.arg_create = &table_acl_params; + cls_table_params.table_metadata_size = 0; + + ret = rte_flow_classify_table_create(cls_app->cls, &cls_table_params, + &cls_app->table_id[0]); + if (ret) { + rte_flow_classifier_free(cls_app->cls); + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Failed to create classifier table\n"); + } + + /* read file of IPv4 5 tuple rules and initialize parameters + * for rte_flow_classify_validate and rte_flow_classify_table_entry_add + * APIs. + */ + if (add_rules(parm_config.rule_ipv4_name, cls_app)) { + rte_flow_classifier_free(cls_app->cls); + rte_free(cls_app); + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + } + + /* Call lcore_main on the master core only. 
*/ + lcore_main(cls_app); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..dfa0631 --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0x0000 33 : 0x0000 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
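The sample app above reads rules of the form `src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority` via its `parse_ipv4_5tuple_rule()` helper (body not shown in this excerpt). As an illustration only, a minimal, hypothetical stand-in for that parser can be sketched with `sscanf`; the struct and function names here are invented for the sketch and are not the sample's actual definitions:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified rule record; the real sample fills a
 * struct rte_eth_ntuple_filter instead. */
struct rule_5tuple {
	uint32_t src_ip, dst_ip;
	unsigned int src_len, dst_len;
	unsigned int src_port, src_port_mask;
	unsigned int dst_port, dst_port_mask;
	unsigned int proto, proto_mask;
	unsigned int priority;
};

/* Parse one line of the documented format:
 *   src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority
 * Returns 0 on success, -1 on a malformed line. */
static int
parse_rule_line(const char *line, struct rule_5tuple *r)
{
	unsigned int s1, s2, s3, s4, d1, d2, d3, d4;
	int n;

	n = sscanf(line,
	    "%u.%u.%u.%u/%u %u.%u.%u.%u/%u %u : %x %u : %x %u/%x %u",
	    &s1, &s2, &s3, &s4, &r->src_len,
	    &d1, &d2, &d3, &d4, &r->dst_len,
	    &r->src_port, &r->src_port_mask,
	    &r->dst_port, &r->dst_port_mask,
	    &r->proto, &r->proto_mask, &r->priority);
	if (n != 17)
		return -1;
	/* Pack dotted-quad addresses into host-order 32-bit values,
	 * as the sample's IPV4_ADDR() macro does. */
	r->src_ip = (s1 << 24) | (s2 << 16) | (s3 << 8) | s4;
	r->dst_ip = (d1 << 24) | (d2 << 16) | (d3 << 8) | d4;
	return 0;
}
```

The real parser also validates mask lengths and rejects comment lines (`is_bypass_line()`); this sketch only shows the field layout the rules file encodes.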
* [dpdk-dev] [PATCH v9 3/4] test: add packet burst generator functions 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 " Bernard Iremonger ` (2 preceding siblings ...) 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-10-22 13:32 ` Bernard Iremonger 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-22 13:32 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger add initialize_tcp_header function add initialize_sctp_header function add initialize_ipv4_header_proto function add generate_packet_burst_proto function Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/packet_burst_generator.c | 191 +++++++++++++++++++++++++++++++++++++ test/test/packet_burst_generator.h | 22 ++++- 2 files changed, 211 insertions(+), 2 deletions(-) diff --git a/test/test/packet_burst_generator.c b/test/test/packet_burst_generator.c index a93c3b5..8f4ddcc 100644 --- a/test/test/packet_burst_generator.c +++ b/test/test/packet_burst_generator.c @@ -134,6 +134,36 @@ return pkt_len; } +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct tcp_hdr)); + + memset(tcp_hdr, 0, sizeof(struct tcp_hdr)); + tcp_hdr->src_port = rte_cpu_to_be_16(src_port); + tcp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + + return pkt_len; +} + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct sctp_hdr)); + + sctp_hdr->src_port = rte_cpu_to_be_16(src_port); + sctp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + 
sctp_hdr->tag = 0; + sctp_hdr->cksum = 0; /* No SCTP checksum. */ + + return pkt_len; +} uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -198,7 +228,53 @@ return pkt_len; } +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto) +{ + uint16_t pkt_len; + unaligned_uint16_t *ptr16; + uint32_t ip_cksum; + + /* + * Initialize IP header. + */ + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct ipv4_hdr)); + + ip_hdr->version_ihl = IP_VHL_DEF; + ip_hdr->type_of_service = 0; + ip_hdr->fragment_offset = 0; + ip_hdr->time_to_live = IP_DEFTTL; + ip_hdr->next_proto_id = proto; + ip_hdr->packet_id = 0; + ip_hdr->total_length = rte_cpu_to_be_16(pkt_len); + ip_hdr->src_addr = rte_cpu_to_be_32(src_addr); + ip_hdr->dst_addr = rte_cpu_to_be_32(dst_addr); + + /* + * Compute IP header checksum. + */ + ptr16 = (unaligned_uint16_t *)ip_hdr; + ip_cksum = 0; + ip_cksum += ptr16[0]; ip_cksum += ptr16[1]; + ip_cksum += ptr16[2]; ip_cksum += ptr16[3]; + ip_cksum += ptr16[4]; + ip_cksum += ptr16[6]; ip_cksum += ptr16[7]; + ip_cksum += ptr16[8]; ip_cksum += ptr16[9]; + /* + * Reduce 32 bit checksum to 16 bits and complement it. 
+ */ + ip_cksum = ((ip_cksum & 0xFFFF0000) >> 16) + + (ip_cksum & 0x0000FFFF); + ip_cksum %= 65536; + ip_cksum = (~ip_cksum) & 0x0000FFFF; + if (ip_cksum == 0) + ip_cksum = 0xFFFF; + ip_hdr->hdr_checksum = (uint16_t) ip_cksum; + + return pkt_len; +} /* * The maximum number of segments per packet is used when creating @@ -283,3 +359,118 @@ return nb_pkt; } + +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs) +{ + int i, nb_pkt = 0; + size_t eth_hdr_size; + + struct rte_mbuf *pkt_seg; + struct rte_mbuf *pkt; + + for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) { + pkt = rte_pktmbuf_alloc(mp); + if (pkt == NULL) { +nomore_mbuf: + if (nb_pkt == 0) + return -1; + break; + } + + pkt->data_len = pkt_len; + pkt_seg = pkt; + for (i = 1; i < nb_pkt_segs; i++) { + pkt_seg->next = rte_pktmbuf_alloc(mp); + if (pkt_seg->next == NULL) { + pkt->nb_segs = i; + rte_pktmbuf_free(pkt); + goto nomore_mbuf; + } + pkt_seg = pkt_seg->next; + pkt_seg->data_len = pkt_len; + } + pkt_seg->next = NULL; /* Last segment of packet. */ + + /* + * Copy headers in first packet segment(s). 
+ */ + if (vlan_enabled) + eth_hdr_size = sizeof(struct ether_hdr) + + sizeof(struct vlan_hdr); + else + eth_hdr_size = sizeof(struct ether_hdr); + + copy_buf_to_pkt(eth_hdr, eth_hdr_size, pkt, 0); + + if (ipv4) { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv4_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + default: + break; + } + } else { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv6_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + default: + break; + } + } + + /* + * Complete first mbuf of packet and append it to the + * burst of packets to be transmitted. 
+ */ + pkt->nb_segs = nb_pkt_segs; + pkt->pkt_len = pkt_len; + pkt->l2_len = eth_hdr_size; + + if (ipv4) { + pkt->vlan_tci = ETHER_TYPE_IPv4; + pkt->l3_len = sizeof(struct ipv4_hdr); + } else { + pkt->vlan_tci = ETHER_TYPE_IPv6; + pkt->l3_len = sizeof(struct ipv6_hdr); + } + + pkts_burst[nb_pkt] = pkt; + } + + return nb_pkt; +} diff --git a/test/test/packet_burst_generator.h b/test/test/packet_burst_generator.h index edc1044..3315bfa 100644 --- a/test/test/packet_burst_generator.h +++ b/test/test/packet_burst_generator.h @@ -43,7 +43,8 @@ #include <rte_arp.h> #include <rte_ip.h> #include <rte_udp.h> - +#include <rte_tcp.h> +#include <rte_sctp.h> #define IPV4_ADDR(a, b, c, d)(((a & 0xff) << 24) | ((b & 0xff) << 16) | \ ((c & 0xff) << 8) | (d & 0xff)) @@ -65,6 +66,13 @@ initialize_udp_header(struct udp_hdr *udp_hdr, uint16_t src_port, uint16_t dst_port, uint16_t pkt_data_len); +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -74,15 +82,25 @@ initialize_ipv4_header(struct ipv4_hdr *ip_hdr, uint32_t src_addr, uint32_t dst_addr, uint16_t pkt_data_len); +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto); + int generate_packet_burst(struct rte_mempool *mp, struct rte_mbuf **pkts_burst, struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, uint8_t ipv4, struct udp_hdr *udp_hdr, int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); + 
#ifdef __cplusplus } #endif - #endif /* PACKET_BURST_GENERATOR_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
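The checksum arithmetic in `initialize_ipv4_header_proto()` above sums the header's 16-bit words (skipping word 5, the checksum field itself), folds the 32-bit accumulator back into 16 bits, and complements the result, mapping a final value of 0 to 0xFFFF. The folding step, isolated into an illustrative helper (not a DPDK API; the name `fold_ip_cksum` is invented here), mirrors the quoted code line by line:

```c
#include <stdint.h>

/* Illustrative helper: fold a 32-bit sum of 16-bit header words into
 * the final IPv4 header checksum, reproducing the arithmetic used by
 * initialize_ipv4_header_proto() in the patch above. */
static uint16_t
fold_ip_cksum(uint32_t sum)
{
	/* Add the high-half carries back into the low 16 bits. */
	sum = ((sum & 0xFFFF0000u) >> 16) + (sum & 0x0000FFFFu);
	/* The patch discards (rather than re-adds) a carry out of this
	 * second fold; kept here to match the quoted code exactly. */
	sum %= 65536;
	/* One's complement, truncated to 16 bits. */
	sum = (~sum) & 0x0000FFFFu;
	/* An all-zero checksum is transmitted as 0xFFFF. */
	return (sum == 0) ? 0xFFFF : (uint16_t)sum;
}
```

For example, an accumulator of 0x00010000 folds to 1 and complements to 0xFFFE, while an accumulator of 0x0000FFFF complements to 0 and is therefore emitted as 0xFFFF.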
* [dpdk-dev] [PATCH v9 4/4] test: flow classify library unit tests 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 " Bernard Iremonger ` (3 preceding siblings ...) 2017-10-22 13:32 ` [dpdk-dev] [PATCH v9 3/4] test: add packet burst generator functions Bernard Iremonger @ 2017-10-22 13:32 ` Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-22 13:32 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by librte_flow_classify APIs. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify APIs: test with invalid parameters test with invalid patterns test with invalid actions test with valid parameters Initialise ipv4 udp traffic for use by the udp test for rte_flow_classifier_run. Initialise ipv4 tcp traffic for use by the tcp test for rte_flow_classifier_run. Initialise ipv4 sctp traffic for use by the sctp test for rte_flow_classifier_run. 
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 673 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 234 ++++++++++++++ 3 files changed, 908 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index dcbe363..c2dbe40 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -107,6 +107,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..c01072c --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,673 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +struct flow_classifier *cls; + +struct flow_classifier { + struct rte_flow_classifier *cls; + uint32_t table_id[RTE_FLOW_CLASSIFY_TABLE_MAX]; + uint32_t n_tables; +}; + +struct flow_classifier_acl { + struct flow_classifier cls; +} __rte_cache_aligned; + +/* + * test functions by passing invalid or + * non-workable parameters. 
+ */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + + rule = rte_flow_classify_table_entry_add(NULL, 1, NULL, NULL, NULL, + NULL, NULL); + if (rule) { + printf("Line %i: flow_classifier_table_entry_add", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(NULL, 1, NULL); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(NULL, 1, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(NULL, 1, NULL, NULL, NULL, + NULL, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add ", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(NULL, 1, NULL); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(NULL, 1, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line 
%i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item_bad; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_bad; + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_bad; + pattern[3] = end_item_bad; + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int key_found; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * 
rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf("should have failed!\n"); + return -1; + } + return 0; +} + +static int +init_ipv4_udp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 UDP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = 
initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_tcp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct tcp_hdr pkt_tcp_hdr; + uint32_t src_addr = IPV4_ADDR(1, 2, 3, 4); + uint32_t dst_addr = IPV4_ADDR(5, 6, 7, 8); + uint16_t src_port = 16; + uint16_t dst_port = 17; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 TCP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_TCP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_tcp_header(&pkt_tcp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + TCP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_TCP, + &pkt_tcp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_sctp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct sctp_hdr pkt_sctp_hdr; + uint32_t src_addr = IPV4_ADDR(11, 12, 13, 14); + uint32_t dst_addr = IPV4_ADDR(15, 16, 17, 18); + uint16_t src_port = 10; + uint16_t dst_port = 11; + uint16_t pktlen; + + static 
uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 SCTP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_SCTP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_sctp_header(&pkt_sctp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + SCTP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_SCTP, + &pkt_sctp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_udp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_udp_traffic has failed!\n", + __LINE__); + return -1; 
+ } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &udp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_tcp(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_tcp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_tcp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters for rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_tcp_item_1; + pattern[2] = tcp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + 
printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &tcp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_sctp(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int i; + int key_found; + + ret = init_ipv4_sctp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_sctp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* + * set up parameters rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_sctp_item_1; + pattern[2] = sctp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &key_found, + &attr, pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, 0, bufs, MAX_PKT_BURST, + rule, &sctp_classify_stats); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + struct 
rte_flow_classify_table_params cls_table_params; + struct rte_flow_classifier_params cls_params; + int socket_id; + int ret; + uint32_t size; + + socket_id = rte_eth_dev_socket_id(0); + + /* Memory allocation */ + size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct flow_classifier_acl)); + cls = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); + + cls_params.name = "flow_classifier"; + cls_params.socket_id = socket_id; + cls_params.type = RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL; + cls->cls = rte_flow_classifier_create(&cls_params); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + /* initialise table create params */ + cls_table_params.ops = &rte_table_acl_ops; + cls_table_params.arg_create = &table_acl_params; + cls_table_params.table_metadata_size = 0; + + ret = rte_flow_classify_table_create(cls->cls, &cls_table_params, + &cls->table_id[0]); + if (ret) { + printf("Line %i: f_create has failed!\n", __LINE__); + rte_flow_classifier_free(cls->cls); + rte_free(cls); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + if (test_query_tcp() < 0) + return -1; + if (test_query_sctp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..39535cf --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,234 @@ +/*- + * BSD 
LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP, TCP and SCTP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* test UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +/* test TCP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 
dst spec 5.6.7.8 + * dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0, IPv4(1, 2, 3, 4), IPv4(5, 6, 7, 8)} +}; + +static struct rte_flow_item_tcp tcp_spec_1 = { + { 16, 17, 0, 0, 0, 0, 0, 0, 0} +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +/* test SCTP pattern: + * "eth / ipv4 src spec 11.12.13.14 src mask 255.255.255.00 dst spec + * 15.16.17.18 dst mask 255.255.255.00 / sctp src is 10 dst is 11 / end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, IPv4(11, 12, 13, 14), + IPv4(15, 16, 17, 18)} +}; + +static struct rte_flow_item_sctp sctp_spec_1 = { + { 10, 11, 0, 0} +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + + +/* test actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* test attributes */ +static struct rte_flow_attr attr; + +/* test error */ +static struct rte_flow_error error; + +/* test pattern */ +static struct rte_flow_item pattern[4]; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .stats = (void *)&udp_ntuple_stats +}; + +/* flow classify data for TCP burst */ +static struct 
rte_flow_classify_ipv4_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .stats = (void *)&tcp_ntuple_stats +}; + +/* flow classify data for SCTP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .stats = (void *)&sctp_ntuple_stats +}; +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v8 1/4] librte_flow_classify: add flow classify library 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 " Bernard Iremonger 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 " Bernard Iremonger @ 2017-10-17 20:26 ` Bernard Iremonger 2017-10-19 14:22 ` Singh, Jasvinder 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger ` (2 subsequent siblings) 4 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-10-17 20:26 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following APIs are implemented in the librte_flow_classify library: rte_flow_classifier_create rte_flow_classifier_free rte_flow_classifier_query rte_flow_classifier_run rte_flow_classify_table_create rte_flow_classify_table_entry_add rte_flow_classify_table_entry_delete rte_flow_classify_validate The following librte_table APIs are used: f_create to create a table. f_add to add a rule to the table. f_del to delete a rule from the table. f_lookup to match packets with the rules. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- MAINTAINERS | 7 + config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 735 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 321 +++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 14 + mk/rte.app.mk | 1 + 13 files changed, 1761 insertions(+) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/MAINTAINERS b/MAINTAINERS index 2a58378..0981793 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -735,6 +735,13 @@ F: doc/guides/prog_guide/pdump_lib.rst F: app/pdump/ F: doc/guides/tools/pdump.rst +Flow classify +M: Bernard Iremonger <bernard.iremonger@intel.com> +F: lib/librte_flow_classify/ +F: test/test/test_flow_classify* +F: examples/flow_classify/ +F: doc/guides/sample_app_ug/flow_classify.rst +F: doc/guides/prog_guide/flow_classify_lib.rst Packet Framework ---------------- diff --git a/config/common_base b/config/common_base index d9471e8..e1079aa 100644 --- a/config/common_base +++ b/config/common_base @@ -707,6 +707,12 @@ CONFIG_RTE_LIBRTE_GSO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md 
b/doc/api/doxy-api-index.md index 990815f..e4468d0 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -111,6 +111,7 @@ The public API headers are grouped by topics: [ACL] (@ref rte_acl.h), [EFD] (@ref rte_efd.h), [member] (@ref rte_member.h) + [flow_classify] (@ref rte_flow_classify.h), - **QoS**: [metering] (@ref rte_meter.h), diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 9e9fa56..9edb6fd 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf @@ -48,6 +48,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_gso \ lib/librte_hash \ diff --git a/lib/Makefile b/lib/Makefile index 86d475f..aba2593 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -83,6 +83,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index 2fa1199..67209ae 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -88,6 +88,7 @@ struct rte_logs { #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ #define RTE_LOGTYPE_GSO 20 /**< Log related to GSO. */ +#define RTE_LOGTYPE_CLASSIFY 21 /**< Log related to flow classify. */ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. 
*/ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..22082c4 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,735 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; +static uint32_t unique_id = 1; + +struct rte_table { + /* Input parameters */ + struct rte_table_ops ops; + struct rte_flow_classify_table_entry *default_entry; + uint32_t entry_size; + enum rte_flow_classify_table_type type; + + /* Handle to the low-level table object */ + void *h_table; +}; + +#define RTE_FLOW_CLASSIFIER_MAX_NAME_SZ 256 + +struct rte_flow_classifier { + /* Input parameters */ + char name[RTE_FLOW_CLASSIFIER_MAX_NAME_SZ]; + int socket_id; + enum rte_flow_classify_table_type type; + + /* Internal tables */ + struct rte_table tables[RTE_FLOW_CLASSIFY_TABLE_MAX]; + uint32_t num_tables; + uint16_t nb_pkts; + struct rte_flow_classify_table_entry + *entries[RTE_PORT_IN_BURST_SIZE_MAX]; +} __rte_cache_aligned; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct acl_keys { + struct rte_table_acl_rule_add_params key_add; /**< add key */ + struct rte_table_acl_rule_delete_params key_del; /**< delete key */ +}; + +struct rte_flow_classify_rule { + uint32_t id; /**< unique ID of classify rule */ + enum rte_flow_classify_rule_type rule_type; /**< classify rule type */ + struct rte_flow_action action; /**< action when match found */ + 
struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; /**< ipv4 5tuple */ + union { + struct acl_keys key; + } u; + int key_found; /**< rule key found in table */ + void *entry; /**< pointer to buffer to hold rule meta data*/ + void *entry_ptr; /**< handle to the table entry for rule meta data*/ +}; + +int +rte_flow_classify_validate( + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + if (!error) + return -EINVAL; + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + free(items); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, 
a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_add: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} 
+#endif + +static int +rte_flow_classifier_check_params(struct rte_flow_classifier_params *params) +{ + if (params == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for parameter params\n", __func__); + return -EINVAL; + } + + /* name */ + if (params->name == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for parameter name\n", __func__); + return -EINVAL; + } + + /* socket */ + if ((params->socket_id < 0) || + (params->socket_id >= RTE_MAX_NUMA_NODES)) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for parameter socket_id\n", + __func__); + return -EINVAL; + } + + return 0; +} + +struct rte_flow_classifier * +rte_flow_classifier_create(struct rte_flow_classifier_params *params) +{ + struct rte_flow_classifier *cls; + int ret; + + /* Check input parameters */ + ret = rte_flow_classifier_check_params(params); + if (ret != 0) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier params check failed (%d)\n", + __func__, ret); + return NULL; + } + + /* Allocate memory for the flow classifier */ + cls = rte_zmalloc_socket("FLOW_CLASSIFIER", + sizeof(struct rte_flow_classifier), + RTE_CACHE_LINE_SIZE, params->socket_id); + + if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier memory allocation failed\n", + __func__); + return NULL; + } + + /* Save input parameters */ + snprintf(cls->name, RTE_FLOW_CLASSIFIER_MAX_NAME_SZ, "%s", + params->name); + cls->socket_id = params->socket_id; + cls->type = params->type; + + /* Initialize flow classifier internal data structure */ + cls->num_tables = 0; + + return cls; +} + +static void +rte_flow_classify_table_free(struct rte_table *table) +{ + if (table->ops.f_free != NULL) + table->ops.f_free(table->h_table); + + rte_free(table->default_entry); +} + +int +rte_flow_classifier_free(struct rte_flow_classifier *cls) +{ + uint32_t i; + + /* Check input parameters */ + if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: rte_flow_classifier parameter is NULL\n", + __func__); + return -EINVAL; + } + + /* 
Free tables */ + for (i = 0; i < cls->num_tables; i++) { + struct rte_table *table = &cls->tables[i]; + + rte_flow_classify_table_free(table); + } + + /* Free flow classifier memory */ + rte_free(cls); + + return 0; +} + +static int +rte_table_check_params(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id) +{ + if (cls == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: flow classifier parameter is NULL\n", + __func__); + return -EINVAL; + } + if (params == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: params parameter is NULL\n", + __func__); + return -EINVAL; + } + if (table_id == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: table_id parameter is NULL\n", + __func__); + return -EINVAL; + } + + /* ops */ + if (params->ops == NULL) { + RTE_LOG(ERR, CLASSIFY, "%s: params->ops is NULL\n", + __func__); + return -EINVAL; + } + + if (params->ops->f_create == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: f_create function pointer is NULL\n", __func__); + return -EINVAL; + } + + if (params->ops->f_lookup == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: f_lookup function pointer is NULL\n", __func__); + return -EINVAL; + } + + /* Do we have room for one more table? 
*/ + if (cls->num_tables == RTE_FLOW_CLASSIFY_TABLE_MAX) { + RTE_LOG(ERR, CLASSIFY, + "%s: Incorrect value for num_tables parameter\n", + __func__); + return -EINVAL; + } + + return 0; +} + +int +rte_flow_classify_table_create(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id) +{ + struct rte_table *table; + struct rte_flow_classify_table_entry *default_entry; + void *h_table; + uint32_t entry_size, id; + int ret; + + /* Check input arguments */ + ret = rte_table_check_params(cls, params, table_id); + if (ret != 0) + return ret; + + id = cls->num_tables; + table = &cls->tables[id]; + + /* Allocate space for the default table entry */ + entry_size = sizeof(struct rte_flow_classify_table_entry) + + params->table_metadata_size; + default_entry = + (struct rte_flow_classify_table_entry *) rte_zmalloc_socket( + "Flow Classify default entry", entry_size, + RTE_CACHE_LINE_SIZE, cls->socket_id); + if (default_entry == NULL) { + RTE_LOG(ERR, CLASSIFY, + "%s: Failed to allocate default entry\n", __func__); + return -EINVAL; + } + + /* Create the table */ + h_table = params->ops->f_create(params->arg_create, cls->socket_id, + entry_size); + if (h_table == NULL) { + rte_free(default_entry); + RTE_LOG(ERR, CLASSIFY, "%s: Table creation failed\n", __func__); + return -EINVAL; + } + + /* Commit current table to the classifier */ + cls->num_tables++; + *table_id = id; + + /* Save input parameters */ + memcpy(&table->ops, params->ops, sizeof(struct rte_table_ops)); + + table->entry_size = entry_size; + table->default_entry = default_entry; + + /* Initialize table internal data structure */ + table->h_table = h_table; + + return 0; +} + +static struct rte_flow_classify_rule * +allocate_ipv4_5tuple_rule(void) +{ + struct rte_flow_classify_rule *rule; + + rule = malloc(sizeof(struct rte_flow_classify_rule)); + if (!rule) + return rule; + + memset(rule, 0, sizeof(struct rte_flow_classify_rule)); + rule->id = unique_id++; + 
rule->rule_type = RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE; + + memcpy(&rule->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + /* key add values */ + rule->u.key.key_add.priority = ntuple_filter.priority; + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + rule->ipv4_5tuple.proto = ntuple_filter.proto; + rule->ipv4_5tuple.proto_mask = ntuple_filter.proto_mask; + + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + rule->ipv4_5tuple.src_ip_mask = ntuple_filter.src_ip_mask; + rule->ipv4_5tuple.src_ip = ntuple_filter.src_ip; + + rule->u.key.key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + rule->u.key.key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + rule->ipv4_5tuple.dst_ip_mask = ntuple_filter.dst_ip_mask; + rule->ipv4_5tuple.dst_ip = ntuple_filter.dst_ip; + + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + rule->ipv4_5tuple.src_port_mask = ntuple_filter.src_port_mask; + rule->ipv4_5tuple.src_port = ntuple_filter.src_port; + + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + rule->ipv4_5tuple.dst_port_mask = ntuple_filter.dst_port_mask; + rule->ipv4_5tuple.dst_port = ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_add(&rule->u.key.key_add); +#endif + + /* key delete values */ + memcpy(&rule->u.key.key_del.field_value[PROTO_FIELD_IPV4], + &rule->u.key.key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct 
rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_delete(&rule->u.key.key_del); +#endif + return rule; +} + +struct rte_flow_classify_rule * +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, + uint32_t table_id, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_classify_rule *rule; + struct rte_flow_classify_table_entry *table_entry; + int ret; + + if (!error) + return NULL; + + if (!cls) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL classifier."); + return NULL; + } + + if (table_id >= cls->num_tables) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid table_id."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_validate(attr, pattern, actions, error); + if (ret < 0) + return NULL; + + switch (cls->type) { + case RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL: + rule = allocate_ipv4_5tuple_rule(); + if (!rule) + return NULL; + break; + default: + return NULL; + } + + rule->entry = malloc(sizeof(struct rte_flow_classify_table_entry)); + if (!rule->entry) { + free(rule); + rule = NULL; + return NULL; + } + + table_entry = rule->entry; + table_entry->rule_id = rule->id; + + ret = cls->tables[table_id].ops.f_add( + cls->tables[table_id].h_table, + &rule->u.key.key_add, + rule->entry, + &rule->key_found, + &rule->entry_ptr); + if (ret) { + free(rule->entry); + free(rule); + rule = NULL; + return NULL; + } + return rule; +} 
+ +int +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_flow_classify_rule *rule, + struct rte_flow_error *error) +{ + int ret = -EINVAL; + + if (!error) + return ret; + + if (!cls) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL classifier."); + return ret; + } + + if (table_id >= cls->num_tables) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid table_id."); + return ret; + } + + if (!rule) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "NULL rule."); + return ret; + } + + ret = cls->tables[table_id].ops.f_delete( + cls->tables[table_id].h_table, + &rule->u.key.key_del, + &rule->key_found, + &rule->entry); + + return ret; +} + +int +rte_flow_classifier_run(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_error *error) +{ + int ret = -EINVAL; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + + if (!error) + return ret; + + if (!cls || !pkts || nb_pkts == 0) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + if (table_id >= cls->num_tables) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid table_id."); + return ret; + } + + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = cls->tables[table_id].ops.f_lookup( + cls->tables[table_id].h_table, + pkts, pkts_mask, &lookup_hit_mask, + (void **)cls->entries); + + if (!ret && lookup_hit_mask) + cls->nb_pkts = nb_pkts; + else + cls->nb_pkts = 0; + + return ret; +} + +static int +action_apply(struct rte_flow_classifier *cls, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats) +{ + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; + uint64_t count = 0; + int i; + int ret = -ENODATA; + + switch (rule->action.type) { + case 
RTE_FLOW_ACTION_TYPE_COUNT: + for (i = 0; i < cls->nb_pkts; i++) { + if (rule->id == cls->entries[i]->rule_id) + count++; + } + if (count) { + ret = 0; + ntuple_stats = + (struct rte_flow_classify_ipv4_5tuple_stats *) + stats->stats; + ntuple_stats->counter1 = count; + ntuple_stats->ipv4_5tuple = rule->ipv4_5tuple; + } + break; + default: + ret = -ENOTSUP; + break; + } + + return ret; +} + +int +rte_flow_classifier_query(struct rte_flow_classifier *cls, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error) +{ + int ret = -EINVAL; + + if (!error) + return ret; + + if (!cls || !rule || !stats) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + ret = action_apply(cls, rule, stats); + return ret; +} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..9bd6cf4 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,321 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_FLOW_CLASSIFY_H_
+#define _RTE_FLOW_CLASSIFY_H_
+
+/**
+ * @file
+ *
+ * RTE Flow Classify Library
+ *
+ * This library provides flow record information with some measured properties.
+ *
+ * The application should define the flow and the measurement criteria
+ * (action) for it.
+ *
+ * The library does not maintain any flow records itself; instead, flow
+ * information is returned to the upper layer only for the given packets.
+ *
+ * It is the application's responsibility to call rte_flow_classify_query()
+ * for a group of packets, just after receiving them or before transmitting
+ * them. The application should provide the flow type it is interested in and
+ * the measurement to apply to that flow in the rte_flow_classify_create()
+ * API, and should provide the rte_flow_classify object and storage for the
+ * results to the rte_flow_classify_query() API.
+ *
+ * Usage:
+ * - the application calls rte_flow_classify_create() to create a
+ *   rte_flow_classify object.
+ * - the application calls rte_flow_classify_query() in a polling manner,
+ *   preferably after rte_eth_rx_burst(). This will cause the library to
+ *   convert packet information to flow information with some measurements.
+ * - rte_flow_classify objects can be destroyed when they are no longer
+ *   needed, via rte_flow_classify_destroy()
+ */
+
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_flow.h>
+#include <rte_acl.h>
+#include <rte_table_acl.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+
+#define RTE_FLOW_CLASSIFY_TABLE_MAX 1
+
+/** Opaque data type for flow classifier */
+struct rte_flow_classifier;
+
+/** Opaque data type for flow classify rule */
+struct rte_flow_classify_rule;
+
+enum rte_flow_classify_rule_type {
+	RTE_FLOW_CLASSIFY_RULE_TYPE_NONE, /**< no type */
+	RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE, /**< IPv4 5tuple type */
+};
+
+enum rte_flow_classify_table_type {
+	RTE_FLOW_CLASSIFY_TABLE_TYPE_NONE, /**< no type */
+	RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL, /**< ACL type */
+};
+
+/** Parameters for flow classifier creation */
+struct rte_flow_classifier_params {
+	/** flow classifier name */
+	const char *name;
+
+	/** CPU socket ID where memory for the flow classifier and its
+	 * elements (tables) should be allocated
+	 */
+	int socket_id;
+
+	/** Table type */
+	enum rte_flow_classify_table_type type;
+
+	/** Table id */
+	uint32_t table_id;
+};
+
+struct rte_flow_classify_table_params {
+	/** Table operations (specific to each table type) */
+	struct rte_table_ops *ops;
+
+	/** Opaque param to be passed to the table create operation */
+	void *arg_create;
+
+	/** Memory size to be reserved per classifier object entry for
+	 * storing meta data
+	 */
+	uint32_t table_metadata_size;
+};
+
+struct rte_flow_classify_ipv4_5tuple {
+	uint32_t dst_ip;	/**< Destination IP address in big endian. */
+	uint32_t dst_ip_mask;	/**< Mask of destination IP address. */
+	uint32_t src_ip;	/**< Source IP address in big endian. */
+	uint32_t src_ip_mask;	/**< Mask of source IP address. */
+	uint16_t dst_port;	/**< Destination port in big endian. */
+	uint16_t dst_port_mask;	/**< Mask of destination port.
*/ + uint16_t src_port; /**< Source Port in big endian. */ + uint16_t src_port_mask; /**< Mask of source port. */ + uint8_t proto; /**< L4 protocol. */ + uint8_t proto_mask; /**< Mask of L4 protocol. */ +}; + +struct rte_flow_classify_table_entry { + /**< meta-data for classify rule */ + uint32_t rule_id; + + /**< Start of table entry area for user defined meta data */ + __extension__ uint8_t meta_data[0]; +}; + +/** + * Flow stats + * + * For the count action, stats can be returned by the query API. + * + * Storage for stats is provided by application. + */ +struct rte_flow_classify_stats { + void *stats; +}; + +struct rte_flow_classify_ipv4_5tuple_stats { + /**< count of packets that match IPv4 5tuple pattern */ + uint64_t counter1; + /**< IPv4 5tuple data */ + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; +}; + +/** + * Flow classifier create + * + * @param params + * Parameters for flow classifier creation + * @return + * Handle to flow classifier instance on success or NULL otherwise + */ +struct rte_flow_classifier *rte_flow_classifier_create( + struct rte_flow_classifier_params *params); + +/** + * Flow classifier free + * + * @param cls + * Handle to flow classifier instance + * @return + * 0 on success, error code otherwise + */ +int rte_flow_classifier_free(struct rte_flow_classifier *cls); + +/** + * Flow classify table create + * + * @param cls + * Handle to flow classifier instance + * @param params + * Parameters for flow_classify table creation + * @param table_id + * Table ID. Valid only within the scope of table IDs of the current + * classifier. Only returned after a successful invocation. + * @return + * 0 on success, error code otherwise + */ +int rte_flow_classify_table_create(struct rte_flow_classifier *cls, + struct rte_flow_classify_table_params *params, + uint32_t *table_id); + +/** + * Validate a flow classify rule. 
+ *
+ * @param[in] attr
+ *   Flow rule attributes
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Structure
+ *   initialised in case of error only.
+ *
+ * @return
+ *   0 on success, error code otherwise.
+ */
+int
+rte_flow_classify_validate(
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
+
+/**
+ * Add a flow classify rule to the flow_classifier table.
+ *
+ * @param[in] cls
+ *   Flow classifier handle
+ * @param[in] table_id
+ *   ID of the table
+ * @param[in] attr
+ *   Flow rule attributes
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Structure
+ *   initialised in case of error only.
+ * @return
+ *   A valid handle in case of success, NULL otherwise.
+ */
+struct rte_flow_classify_rule *
+rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls,
+		uint32_t table_id,
+		const struct rte_flow_attr *attr,
+		const struct rte_flow_item pattern[],
+		const struct rte_flow_action actions[],
+		struct rte_flow_error *error);
+
+/**
+ * Delete a flow classify rule from the flow_classifier table.
+ *
+ * @param[in] cls
+ *   Flow classifier handle
+ * @param[in] table_id
+ *   ID of the table
+ * @param[in] rule
+ *   Flow classify rule
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Structure
+ *   initialised in case of error only.
+ * @return
+ *   0 on success, error code otherwise.
+ */ +int +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_flow_classify_rule *rule, + struct rte_flow_error *error); + +/** + * Run flow classifier for given packets. + * + * @param[in] cls + * Flow classifier handle + * @param[in] table_id + * id of table + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, error code otherwise. + */ + +int rte_flow_classifier_run(struct rte_flow_classifier *cls, + uint32_t table_id, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_error *error); + +/** + * Query flow classifier for given rule. + * + * @param[in] cls + * Flow classifier handle + * @param[in] rule + * Flow classify rule + * @param[in] stats + * Flow classify stats + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, error code otherwise. + */ +int rte_flow_classifier_query(struct rte_flow_classifier *cls, + struct rte_flow_classify_rule *rule, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..921a852 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + 
uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do {\ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++;\ + item = pattern + index;\ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do {\ + act = actions + index;\ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++;\ + act = actions + index;\ + } \ + } while (0) + +/** + * Please aware there's an assumption 
for all the parsers.
+ * rte_flow_item uses big endian, while rte_flow_attr and
+ * rte_flow_action use CPU order.
+ * Because the pattern is used to describe the packets,
+ * normally the packets should use network order.
+ */
+
+/**
+ * Parse the rule to see if it is an n-tuple rule.
+ * It also extracts the n-tuple filter info.
+ * pattern:
+ * The first not void item can be ETH or IPV4.
+ * The second not void item must be IPV4 if the first one is ETH.
+ * The third not void item must be UDP, TCP or SCTP.
+ * The next not void item must be END.
+ * action:
+ * The first not void action should be COUNT.
+ * The next not void action should be END.
+ * pattern example:
+ * ITEM		Spec			Mask
+ * ETH		NULL			NULL
+ * IPV4		src_addr 192.168.1.20	0xFFFFFFFF
+ *		dst_addr 192.167.3.50	0xFFFFFFFF
+ *		next_proto_id	17	0xFF
+ * UDP/TCP/	src_port	80	0xFFFF
+ * SCTP		dst_port	80	0xFFFF
+ * END
+ * other members in mask and spec should be set to 0x00.
+ * item->last should be NULL.
+ */
+static int
+classify_parse_ntuple_filter(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_eth_ntuple_filter *filter,
+			struct rte_flow_error *error)
+{
+	const struct rte_flow_item *item;
+	const struct rte_flow_action *act;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv4 *ipv4_mask;
+	const struct rte_flow_item_tcp *tcp_spec;
+	const struct rte_flow_item_tcp *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec;
+	const struct rte_flow_item_udp *udp_mask;
+	const struct rte_flow_item_sctp *sctp_spec;
+	const struct rte_flow_item_sctp *sctp_mask;
+	uint32_t index;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+			EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+			NULL, "NULL pattern.");
+		return -EINVAL;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+			NULL, "NULL action.");
+		return -EINVAL;
+	}
+	if (!attr) {
+		rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -EINVAL; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -EINVAL; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -EINVAL; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -EINVAL; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -EINVAL; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -EINVAL; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -EINVAL; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -EINVAL; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + item, "Not supported action."); + return -EINVAL; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -EINVAL; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only support ingress."); + return -EINVAL; + } + + /* not supported */ + if (attr->egress) { + memset(filter, 0, 
sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Not support egress."); + return -EINVAL; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Error priority."); + return -EINVAL; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..b51cb1a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,14 @@ +EXPERIMENTAL { + global: + + rte_flow_classifier_create; + rte_flow_classifier_free; + rte_flow_classifier_query; + rte_flow_classifier_run; + 
rte_flow_classify_table_create; + rte_flow_classify_table_entry_add; + rte_flow_classify_table_entry_delete; + rte_flow_classify_validate; + + local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 8192b98..482656c 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/4] librte_flow_classify: add flow classify library 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 1/4] librte_flow_classify: add flow classify library Bernard Iremonger @ 2017-10-19 14:22 ` Singh, Jasvinder 2017-10-20 16:59 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Singh, Jasvinder @ 2017-10-19 14:22 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil <snip> > + > +struct acl_keys { > + struct rte_table_acl_rule_add_params key_add; /**< add key */ > + struct rte_table_acl_rule_delete_params key_del; /**< delete > key */ > +}; > + > +struct rte_flow_classify_rule { > + uint32_t id; /**< unique ID of classify rule */ > + enum rte_flow_classify_rule_type rule_type; /**< classify rule type > */ > + struct rte_flow_action action; /**< action when match found */ > + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; /**< ipv4 5tuple */ > + union { > + struct acl_keys key; > + } u; > + int key_found; /**< rule key found in table */ > + void *entry; /**< pointer to buffer to hold rule meta data*/ > + void *entry_ptr; /**< handle to the table entry for rule meta data*/ > +}; In my opinion, the above struct should have the provision to accommodate other types of rules, not only ipv4_5tuples. Making this structure modular will help in extending it for other rule types in the future.
> +int > +rte_flow_classify_validate( > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error) > +{ > + struct rte_flow_item *items; > + parse_filter_t parse_filter; > + uint32_t item_num = 0; > + uint32_t i = 0; > + int ret; > + > + if (!error) > + return -EINVAL; > + > + if (!pattern) { > + rte_flow_error_set(error, EINVAL, > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > + NULL, "NULL pattern."); > + return -EINVAL; > + } > + > + if (!actions) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > + NULL, "NULL action."); > + return -EINVAL; > + } > + > + if (!attr) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR, > + NULL, "NULL attribute."); > + return -EINVAL; > + } > + > + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); > + > + /* Get the non-void item number of pattern */ > + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { > + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) > + item_num++; > + i++; > + } > + item_num++; > + > + items = malloc(item_num * sizeof(struct rte_flow_item)); > + if (!items) { > + rte_flow_error_set(error, ENOMEM, > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > + NULL, "No memory for pattern items."); > + return -ENOMEM; > + } > + > + memset(items, 0, item_num * sizeof(struct rte_flow_item)); > + classify_pattern_skip_void_item(items, pattern); > + > + parse_filter = classify_find_parse_filter_func(items); > + if (!parse_filter) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + pattern, "Unsupported pattern"); > + free(items); > + return -EINVAL; > + } > + > + ret = parse_filter(attr, items, actions, &ntuple_filter, error); > + free(items); > + return ret; > +} This function mainly parses the flow pattern, actions, etc. and fills the entries in the internal ntuple_filter. It is invoked only in flow_entry_add(). Is there any reason to make this function available as a public API? 
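The pattern-walk at the start of the quoted function can be modelled in isolation: count the non-VOID items up to the END terminator, reserving one extra slot for END itself. The enum and struct below are simplified stand-ins for rte_flow_item_type and rte_flow_item, used only to make the sketch self-contained:

```c
#include <stdint.h>

enum model_item_type {
	MODEL_ITEM_END,
	MODEL_ITEM_VOID,
	MODEL_ITEM_IPV4,
	MODEL_ITEM_UDP,
};

struct model_item {
	enum model_item_type type;
};

/* Returns the number of slots needed for a VOID-stripped copy of the
 * pattern: one per non-VOID item, plus one for the END item itself. */
uint32_t model_item_count(const struct model_item pattern[])
{
	uint32_t item_num = 0, i = 0;

	while (pattern[i].type != MODEL_ITEM_END) {
		if (pattern[i].type != MODEL_ITEM_VOID)
			item_num++;
		i++;
	}
	return item_num + 1;
}
```

The returned count is what would size the malloc() for the VOID-stripped item array.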
> +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > +#define uint32_t_to_char(ip, a, b, c, d) do {\ > + *a = (unsigned char)(ip >> 24 & 0xff);\ > + *b = (unsigned char)(ip >> 16 & 0xff);\ > + *c = (unsigned char)(ip >> 8 & 0xff);\ > + *d = (unsigned char)(ip & 0xff);\ > + } while (0) > + > +static inline void > +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) > +{ > + unsigned char a, b, c, d; > + > + printf("ipv4_key_add: 0x%02hhx/0x%hhx ", > + key->field_value[PROTO_FIELD_IPV4].value.u8, > + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); > + > + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, > + &a, &b, &c, &d); > + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > + key- > >field_value[SRC_FIELD_IPV4].mask_range.u32); > + > + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, > + &a, &b, &c, &d); > + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > + key- > >field_value[DST_FIELD_IPV4].mask_range.u32); > + > + printf("%hu : 0x%x %hu : 0x%x", > + key->field_value[SRCP_FIELD_IPV4].value.u16, > + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, > + key->field_value[DSTP_FIELD_IPV4].value.u16, > + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); > + > + printf(" priority: 0x%x\n", key->priority); > +} The above function is specific to printing acl table keys. How about making this function a little more generic by passing the parameters to distinguish the rule, table type, etc., and doing the printing there? The same comment applies to print_ipv4_key_delete(). 
> +static inline void > +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) > +{ > + unsigned char a, b, c, d; > + > + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", > + key->field_value[PROTO_FIELD_IPV4].value.u8, > + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); > + > + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, > + &a, &b, &c, &d); > + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > + key- > >field_value[SRC_FIELD_IPV4].mask_range.u32); > + > + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, > + &a, &b, &c, &d); > + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > + key- > >field_value[DST_FIELD_IPV4].mask_range.u32); > + > + printf("%hu : 0x%x %hu : 0x%x\n", > + key->field_value[SRCP_FIELD_IPV4].value.u16, > + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, > + key->field_value[DSTP_FIELD_IPV4].value.u16, > + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); > +} > +#endif > + > +static int > +rte_flow_classifier_check_params(struct rte_flow_classifier_params > *params) > +{ > + if (params == NULL) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: Incorrect value for parameter params\n", > __func__); > + return -EINVAL; > + } > + > + /* name */ > + if (params->name == NULL) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: Incorrect value for parameter name\n", > __func__); > + return -EINVAL; > + } > + > + /* socket */ > + if ((params->socket_id < 0) || > + (params->socket_id >= RTE_MAX_NUMA_NODES)) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: Incorrect value for parameter socket_id\n", > + __func__); > + return -EINVAL; > + } > + > + return 0; > +} > + > +struct rte_flow_classifier * > +rte_flow_classifier_create(struct rte_flow_classifier_params *params) > +{ > + struct rte_flow_classifier *cls; > + int ret; > + > + /* Check input parameters */ > + ret = rte_flow_classifier_check_params(params); > + if (ret != 0) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: flow classifier params check failed (%d)\n", > + __func__, ret); > + return NULL; > + 
} > + > + /* Allocate memory for the flow classifier */ > + cls = rte_zmalloc_socket("FLOW_CLASSIFIER", > + sizeof(struct rte_flow_classifier), > + RTE_CACHE_LINE_SIZE, params->socket_id); > + > + if (cls == NULL) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: flow classifier memory allocation failed\n", > + __func__); > + return NULL; > + } > + > + /* Save input parameters */ > + snprintf(cls->name, RTE_FLOW_CLASSIFIER_MAX_NAME_SZ, "%s", > + params->name); > + cls->socket_id = params->socket_id; > + cls->type = params->type; > + > + /* Initialize flow classifier internal data structure */ > + cls->num_tables = 0; > + > + return cls; > +} > + > +static void > +rte_flow_classify_table_free(struct rte_table *table) > +{ > + if (table->ops.f_free != NULL) > + table->ops.f_free(table->h_table); > + > + rte_free(table->default_entry); > +} This is an internal function. There is an API for creating a table for a classifier instance, but none for destroying a table. What if the application needs to destroy a specific table of the classifier but wants to keep the classifier instance? 
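The per-table destroy operation asked for above could look roughly like the following. Everything here is a simplified stand-in for the library's internals (no DPDK types, and model_table_destroy is a hypothetical name, not a proposed API): free one table slot via its free hook while the classifier instance stays alive.

```c
#include <stdlib.h>
#include <stdint.h>

#define MODEL_TABLE_MAX 4

struct model_table {
	void *h_table;	/* handle returned by the table's create op */
	int in_use;
};

struct model_classifier {
	struct model_table tables[MODEL_TABLE_MAX];
	uint32_t num_tables;
};

/* Hypothetical API: destroy one table, keep the classifier instance. */
int model_table_destroy(struct model_classifier *cls, uint32_t table_id)
{
	if (cls == NULL || table_id >= MODEL_TABLE_MAX ||
	    !cls->tables[table_id].in_use)
		return -1;

	free(cls->tables[table_id].h_table);	/* stands in for ops.f_free() */
	cls->tables[table_id].h_table = NULL;
	cls->tables[table_id].in_use = 0;
	cls->num_tables--;
	return 0;
}
```

A second destroy on the same slot fails cleanly, which is the property an application would rely on.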
> +int > +rte_flow_classifier_free(struct rte_flow_classifier *cls) > +{ > + uint32_t i; > + > + /* Check input parameters */ > + if (cls == NULL) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: rte_flow_classifier parameter is NULL\n", > + __func__); > + return -EINVAL; > + } > + > + /* Free tables */ > + for (i = 0; i < cls->num_tables; i++) { > + struct rte_table *table = &cls->tables[i]; > + > + rte_flow_classify_table_free(table); > + } > + > + /* Free flow classifier memory */ > + rte_free(cls); > + > + return 0; > +} > + > +static int > +rte_table_check_params(struct rte_flow_classifier *cls, > + struct rte_flow_classify_table_params *params, > + uint32_t *table_id) > +{ > + if (cls == NULL) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: flow classifier parameter is NULL\n", > + __func__); > + return -EINVAL; > + } > + if (params == NULL) { > + RTE_LOG(ERR, CLASSIFY, "%s: params parameter is NULL\n", > + __func__); > + return -EINVAL; > + } > + if (table_id == NULL) { > + RTE_LOG(ERR, CLASSIFY, "%s: table_id parameter is NULL\n", > + __func__); > + return -EINVAL; > + } > + > + /* ops */ > + if (params->ops == NULL) { > + RTE_LOG(ERR, CLASSIFY, "%s: params->ops is NULL\n", > + __func__); > + return -EINVAL; > + } > + > + if (params->ops->f_create == NULL) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: f_create function pointer is NULL\n", __func__); > + return -EINVAL; > + } > + > + if (params->ops->f_lookup == NULL) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: f_lookup function pointer is NULL\n", > __func__); > + return -EINVAL; > + } > + > + /* De we have room for one more table? 
*/ > + if (cls->num_tables == RTE_FLOW_CLASSIFY_TABLE_MAX) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: Incorrect value for num_tables parameter\n", > + __func__); > + return -EINVAL; > + } > + > + return 0; > +} > + > +int > +rte_flow_classify_table_create(struct rte_flow_classifier *cls, > + struct rte_flow_classify_table_params *params, > + uint32_t *table_id) > +{ > + struct rte_table *table; > + struct rte_flow_classify_table_entry *default_entry; > + void *h_table; > + uint32_t entry_size, id; > + int ret; > + > + /* Check input arguments */ > + ret = rte_table_check_params(cls, params, table_id); > + if (ret != 0) > + return ret; > + > + id = cls->num_tables; > + table = &cls->tables[id]; > + > + /* Allocate space for the default table entry */ > + entry_size = sizeof(struct rte_flow_classify_table_entry) + > + params->table_metadata_size; > + default_entry = > + (struct rte_flow_classify_table_entry *) rte_zmalloc_socket( > + "Flow Classify default entry", entry_size, > + RTE_CACHE_LINE_SIZE, cls->socket_id); > + if (default_entry == NULL) { > + RTE_LOG(ERR, CLASSIFY, > + "%s: Failed to allocate default entry\n", __func__); > + return -EINVAL; > + } What is the purpose of default_entry? I don't see it used anywhere in the library. 
> + /* Create the table */ > + h_table = params->ops->f_create(params->arg_create, cls- > >socket_id, > + entry_size); > + if (h_table == NULL) { > + rte_free(default_entry); > + RTE_LOG(ERR, CLASSIFY, "%s: Table creation failed\n", > __func__); > + return -EINVAL; > + } > + > + /* Commit current table to the classifier */ > + cls->num_tables++; > + *table_id = id; > + > + /* Save input parameters */ > + memcpy(&table->ops, params->ops, sizeof(struct rte_table_ops)); > + > + table->entry_size = entry_size; > + table->default_entry = default_entry; > + > + /* Initialize table internal data structure */ > + table->h_table = h_table; > + > + return 0; > +} > + > +static struct rte_flow_classify_rule * > +allocate_ipv4_5tuple_rule(void) > +{ > + struct rte_flow_classify_rule *rule; > + > + rule = malloc(sizeof(struct rte_flow_classify_rule)); > + if (!rule) > + return rule; > + > + memset(rule, 0, sizeof(struct rte_flow_classify_rule)); > + rule->id = unique_id++; > + rule->rule_type = RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE; > + > + memcpy(&rule->action, classify_get_flow_action(), > + sizeof(struct rte_flow_action)); > + > + /* key add values */ > + rule->u.key.key_add.priority = ntuple_filter.priority; > + rule- > >u.key.key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = > + ntuple_filter.proto_mask; > + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].value.u8 = > + ntuple_filter.proto; > + rule->ipv4_5tuple.proto = ntuple_filter.proto; > + rule->ipv4_5tuple.proto_mask = ntuple_filter.proto_mask; > + > + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 > = > + ntuple_filter.src_ip_mask; > + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].value.u32 = > + ntuple_filter.src_ip; > + rule->ipv4_5tuple.src_ip_mask = ntuple_filter.src_ip_mask; > + rule->ipv4_5tuple.src_ip = ntuple_filter.src_ip; > + > + rule->u.key.key_add.field_value[DST_FIELD_IPV4].mask_range.u32 > = > + ntuple_filter.dst_ip_mask; > + 
rule->u.key.key_add.field_value[DST_FIELD_IPV4].value.u32 = > + ntuple_filter.dst_ip; > + rule->ipv4_5tuple.dst_ip_mask = ntuple_filter.dst_ip_mask; > + rule->ipv4_5tuple.dst_ip = ntuple_filter.dst_ip; > + > + rule- > >u.key.key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = > + ntuple_filter.src_port_mask; > + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].value.u16 = > + ntuple_filter.src_port; > + rule->ipv4_5tuple.src_port_mask = ntuple_filter.src_port_mask; > + rule->ipv4_5tuple.src_port = ntuple_filter.src_port; > + > + rule- > >u.key.key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = > + ntuple_filter.dst_port_mask; > + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].value.u16 = > + ntuple_filter.dst_port; > + rule->ipv4_5tuple.dst_port_mask = ntuple_filter.dst_port_mask; > + rule->ipv4_5tuple.dst_port = ntuple_filter.dst_port; > + > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > + print_ipv4_key_add(&rule->u.key.key_add); > +#endif > + > + /* key delete values */ > + memcpy(&rule->u.key.key_del.field_value[PROTO_FIELD_IPV4], > + &rule->u.key.key_add.field_value[PROTO_FIELD_IPV4], > + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); > + > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > + print_ipv4_key_delete(&rule->u.key.key_del); > +#endif > + return rule; > +} > + > +struct rte_flow_classify_rule * > +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, > + uint32_t table_id, > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error) > +{ > + struct rte_flow_classify_rule *rule; > + struct rte_flow_classify_table_entry *table_entry; > + int ret; > + > + if (!error) > + return NULL; > + > + if (!cls) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "NULL classifier."); > + return NULL; > + } > + > + if (table_id >= cls->num_tables) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, 
"invalid table_id."); > + return NULL; > + } > + > + if (!pattern) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM_NUM, > + NULL, "NULL pattern."); > + return NULL; > + } > + > + if (!actions) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > + NULL, "NULL action."); > + return NULL; > + } > + > + if (!attr) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR, > + NULL, "NULL attribute."); > + return NULL; > + } > + > + /* parse attr, pattern and actions */ > + ret = rte_flow_classify_validate(attr, pattern, actions, error); > + if (ret < 0) > + return NULL; > + > + switch (cls->type) { > + case RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL: > + rule = allocate_ipv4_5tuple_rule(); > + if (!rule) > + return NULL; > + break; > + default: > + return NULL; > + } > + > + rule->entry = malloc(sizeof(struct rte_flow_classify_table_entry)); > + if (!rule->entry) { > + free(rule); > + rule = NULL; > + return NULL; > + } > + > + table_entry = rule->entry; > + table_entry->rule_id = rule->id; > + > + ret = cls->tables[table_id].ops.f_add( > + cls->tables[table_id].h_table, > + &rule->u.key.key_add, > + rule->entry, > + &rule->key_found, > + &rule->entry_ptr); > + if (ret) { > + free(rule->entry); > + free(rule); > + rule = NULL; > + return NULL; > + } > + return rule; > +} It is not clear what happens if the pattern to be added already exists in the table. How will this information be propagated to the application? 
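The duplicate-entry question above can be illustrated with a small model: in the quoted code, the table's f_add op reports via key_found whether the key was already present, but rte_flow_classify_table_entry_add() does not surface that to the caller. One option is to propagate it through an out parameter, sketched here with stand-in types only (nothing below is the library's actual API):

```c
#include <stdint.h>

#define MODEL_MAX_RULES 8

struct model_table {
	uint32_t keys[MODEL_MAX_RULES];
	int num;
};

/* Stand-in for ops.f_add(): *key_found tells the caller whether the
 * key was already present instead of the add silently succeeding. */
int model_table_add(struct model_table *t, uint32_t key, int *key_found)
{
	for (int i = 0; i < t->num; i++) {
		if (t->keys[i] == key) {
			*key_found = 1;
			return 0;
		}
	}
	if (t->num == MODEL_MAX_RULES)
		return -1;
	t->keys[t->num++] = key;
	*key_found = 0;
	return 0;
}
```

An application can then distinguish "new rule inserted" from "rule already existed" without a second lookup.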
> +int > +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, > + uint32_t table_id, > + struct rte_flow_classify_rule *rule, > + struct rte_flow_error *error) > +{ > + int ret = -EINVAL; > + > + if (!error) > + return ret; > + > + if (!cls) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "NULL classifier."); > + return ret; > + } > + > + if (table_id >= cls->num_tables) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "invalid table_id."); > + return ret; > + } > + > + if (!rule) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "NULL rule."); > + return ret; > + } > + > + ret = cls->tables[table_id].ops.f_delete( > + cls->tables[table_id].h_table, > + &rule->u.key.key_del, > + &rule->key_found, > + &rule->entry); Please introduce a check for f_delete; it shouldn't be NULL. > + > +int > +rte_flow_classifier_run(struct rte_flow_classifier *cls, > + uint32_t table_id, > + struct rte_mbuf **pkts, > + const uint16_t nb_pkts, > + struct rte_flow_error *error) > +{ > + int ret = -EINVAL; > + uint64_t pkts_mask; > + uint64_t lookup_hit_mask; > + > + if (!error) > + return ret; > + > + if (!cls || !pkts || nb_pkts == 0) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "invalid input"); > + return ret; > + } > + > + if (table_id >= cls->num_tables) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "invalid table_id."); > + return ret; > + } > + > + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); > + ret = cls->tables[table_id].ops.f_lookup( > + cls->tables[table_id].h_table, > + pkts, pkts_mask, &lookup_hit_mask, > + (void **)cls->entries); > + > + if (!ret && lookup_hit_mask) > + cls->nb_pkts = nb_pkts; > + else > + cls->nb_pkts = 0; > + > + return ret; > +} > + > +static int > +action_apply(struct rte_flow_classifier *cls, > + struct rte_flow_classify_rule *rule, > + struct 
rte_flow_classify_stats *stats) > +{ > + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; > + uint64_t count = 0; > + int i; > + int ret = -ENODATA; > + > + switch (rule->action.type) { > + case RTE_FLOW_ACTION_TYPE_COUNT: > + for (i = 0; i < cls->nb_pkts; i++) { > + if (rule->id == cls->entries[i]->rule_id) > + count++; > + } > + if (count) { > + ret = 0; > + ntuple_stats = > + (struct rte_flow_classify_ipv4_5tuple_stats > *) > + stats->stats; > + ntuple_stats->counter1 = count; > + ntuple_stats->ipv4_5tuple = rule->ipv4_5tuple; > + } > + break; > + default: > + ret = -ENOTSUP; > + break; > + } > + > + return ret; > +} > + > +int > +rte_flow_classifier_query(struct rte_flow_classifier *cls, > + struct rte_flow_classify_rule *rule, > + struct rte_flow_classify_stats *stats, > + struct rte_flow_error *error) > +{ > + int ret = -EINVAL; > + > + if (!error) > + return ret; > + > + if (!cls || !rule || !stats) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "invalid input"); > + return ret; > + } > + > + ret = action_apply(cls, rule, stats); > + return ret; > +} The rte_flow_classify_run and rte_flow_classify_query API should be invoked consecutively in the application, true? > diff --git a/lib/librte_flow_classify/rte_flow_classify.h > b/lib/librte_flow_classify/rte_flow_classify.h > new file mode 100644 > index 0000000..9bd6cf4 > --- /dev/null > +++ b/lib/librte_flow_classify/rte_flow_classify.h > @@ -0,0 +1,321 @@ > +/*- > + * BSD LICENSE > + * > + * Copyright(c) 2017 Intel Corporation. All rights reserved. > + * All rights reserved. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * > + * * Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. 
> + * * Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in > + * the documentation and/or other materials provided with the > + * distribution. > + * * Neither the name of Intel Corporation nor the names of its > + * contributors may be used to endorse or promote products derived > + * from this software without specific prior written permission. > + * > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND > CONTRIBUTORS > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT > NOT > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND > FITNESS FOR > + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE > COPYRIGHT > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, > INCIDENTAL, > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT > NOT > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS > OF USE, > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED > AND ON ANY > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR > TORT > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF > THE USE > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH > DAMAGE. > + */ > + > +#ifndef _RTE_FLOW_CLASSIFY_H_ > +#define _RTE_FLOW_CLASSIFY_H_ > + > +/** > + * @file > + * > + * RTE Flow Classify Library > + * > + * This library provides flow record information with some measured > properties. > + * > + * Application should define the flow and measurement criteria (action) for > it. > + * > + * Library doesn't maintain any flow records itself, instead flow information > is > + * returned to upper layer only for given packets. > + * > + * It is application's responsibility to call rte_flow_classify_query() > + * for group of packets, just after receiving them or before transmitting > them. 
> + * Application should provide the flow type interested in, measurement to > apply > + * to that flow in rte_flow_classify_create() API, and should provide > + * rte_flow_classify object and storage to put results in > + * rte_flow_classify_query() API. > + * > + * Usage: > + * - application calls rte_flow_classify_create() to create a rte_flow_classify > + * object. > + * - application calls rte_flow_classify_query() in a polling manner, > + * preferably after rte_eth_rx_burst(). This will cause the library to > + * convert packet information to flow information with some > measurements. > + * - rte_flow_classify object can be destroyed when they are no more > needed > + * via rte_flow_classify_destroy() > + */ > + > +#include <rte_ethdev.h> > +#include <rte_ether.h> > +#include <rte_flow.h> > +#include <rte_acl.h> > +#include <rte_table_acl.h> > + > +#ifdef __cplusplus > +extern "C" { > +#endif > + > + > +#define RTE_FLOW_CLASSIFY_TABLE_MAX 1 > + > +/** Opaque data type for flow classifier */ > +struct rte_flow_classifier; > + > +/** Opaque data type for flow classify rule */ > +struct rte_flow_classify_rule; > + > +enum rte_flow_classify_rule_type { > + RTE_FLOW_CLASSIFY_RULE_TYPE_NONE, /**< no type */ > + RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE, /**< IPv4 5tuple > type */ > +}; > + > +enum rte_flow_classify_table_type { > + RTE_FLOW_CLASSIFY_TABLE_TYPE_NONE, /**< no type */ > + RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL, /**< ACL type */ > +}; > + > +/** Parameters for flow classifier creation */ > +struct rte_flow_classifier_params { > + /**< flow classifier name */ > + const char *name; > + > + /**< CPU socket ID where memory for the flow classifier and its */ > + /**< elements (tables) should be allocated */ > + int socket_id; > + > + /**< Table type */ > + enum rte_flow_classify_table_type type; > + > + /**< Table id */ > + uint32_t table_id; > +}; > + > +struct rte_flow_classify_table_params { > + /**<Table operations (specific to each table type) */ > + struct 
rte_table_ops *ops; > + > + /**< Opaque param to be passed to the table create operation */ > + void *arg_create; > + > + /**< Memory size to be reserved per classifier object entry for */ > + /**< storing meta data */ > + uint32_t table_metadata_size; > +}; > + > +struct rte_flow_classify_ipv4_5tuple { > + uint32_t dst_ip; /**< Destination IP address in big endian. */ > + uint32_t dst_ip_mask; /**< Mask of destination IP address. */ > + uint32_t src_ip; /**< Source IP address in big endian. */ > + uint32_t src_ip_mask; /**< Mask of destination IP address. */ > + uint16_t dst_port; /**< Destination port in big endian. */ > + uint16_t dst_port_mask; /**< Mask of destination port. */ > + uint16_t src_port; /**< Source Port in big endian. */ > + uint16_t src_port_mask; /**< Mask of source port. */ > + uint8_t proto; /**< L4 protocol. */ > + uint8_t proto_mask; /**< Mask of L4 protocol. */ > +}; > + > +struct rte_flow_classify_table_entry { > + /**< meta-data for classify rule */ > + uint32_t rule_id; > + > + /**< Start of table entry area for user defined meta data */ > + __extension__ uint8_t meta_data[0]; > +}; The above structure is not used by any of the public APIs? > + * Flow stats > + * > + * For the count action, stats can be returned by the query API. > + * > + * Storage for stats is provided by application. 
> + */ > +struct rte_flow_classify_stats { > + void *stats; > +}; > + > +struct rte_flow_classify_ipv4_5tuple_stats { > + /**< count of packets that match IPv4 5tuple pattern */ > + uint64_t counter1; > + /**< IPv4 5tuple data */ > + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; > +}; > + > +/** > + * Flow classifier create > + * > + * @param params > + * Parameters for flow classifier creation > + * @return > + * Handle to flow classifier instance on success or NULL otherwise > + */ > +struct rte_flow_classifier *rte_flow_classifier_create( > + struct rte_flow_classifier_params *params); > + > +/** > + * Flow classifier free > + * > + * @param cls > + * Handle to flow classifier instance > + * @return > + * 0 on success, error code otherwise > + */ > +int rte_flow_classifier_free(struct rte_flow_classifier *cls); > + > +/** > + * Flow classify table create > + * > + * @param cls > + * Handle to flow classifier instance > + * @param params > + * Parameters for flow_classify table creation > + * @param table_id > + * Table ID. Valid only within the scope of table IDs of the current > + * classifier. Only returned after a successful invocation. > + * @return > + * 0 on success, error code otherwise > + */ > +int rte_flow_classify_table_create(struct rte_flow_classifier *cls, > + struct rte_flow_classify_table_params *params, > + uint32_t *table_id); > + > +/** > + * Validate a flow classify rule. > + * > + * @param[in] attr > + * Flow rule attributes > + * @param[in] pattern > + * Pattern specification (list terminated by the END pattern item). > + * @param[in] actions > + * Associated actions (list terminated by the END pattern item). > + * @param[out] error > + * Perform verbose error reporting if not NULL. Structure > + * initialised in case of error only. > + * > + * @return > + * 0 on success, error code otherwise. 
> + */ > +int > +rte_flow_classify_validate( > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error); > + > +/** > + * Add a flow classify rule to the flow_classifer table. > + * > + * @param[in] cls > + * Flow classifier handle > + * @param[in] table_id > + * id of table > + * @param[in] attr > + * Flow rule attributes > + * @param[in] pattern > + * Pattern specification (list terminated by the END pattern item). > + * @param[in] actions > + * Associated actions (list terminated by the END pattern item). > + * @param[out] error > + * Perform verbose error reporting if not NULL. Structure > + * initialised in case of error only. > + * @return > + * A valid handle in case of success, NULL otherwise. > + */ > +struct rte_flow_classify_rule * > +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, > + uint32_t table_id, > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error); > + > +/** > + * Delete a flow classify rule from the flow_classifer table. > + * > + * @param[in] cls > + * Flow classifier handle > + * @param[in] table_id > + * id of table > + * @param[in] rule > + * Flow classify rule > + * @param[out] error > + * Perform verbose error reporting if not NULL. Structure > + * initialised in case of error only. > + * @return > + * 0 on success, error code otherwise. > + */ > +int > +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, > + uint32_t table_id, > + struct rte_flow_classify_rule *rule, > + struct rte_flow_error *error); > + > +/** > + * Run flow classifier for given packets. 
> + * > + * @param[in] cls > + * Flow classifier handle > + * @param[in] table_id > + * id of table > + * @param[in] pkts > + * Pointer to packets to process > + * @param[in] nb_pkts > + * Number of packets to process > + * @param[out] error > + * Perform verbose error reporting if not NULL. Structure > + * initialised in case of error only. > + * > + * @return > + * 0 on success, error code otherwise. > + */ > + > +int rte_flow_classifier_run(struct rte_flow_classifier *cls, > + uint32_t table_id, > + struct rte_mbuf **pkts, > + const uint16_t nb_pkts, > + struct rte_flow_error *error); > + > +/** > + * Query flow classifier for given rule. > + * > + * @param[in] cls > + * Flow classifier handle > + * @param[in] rule > + * Flow classify rule > + * @param[in] stats > + * Flow classify stats > + * @param[out] error > + * Perform verbose error reporting if not NULL. Structure > + * initialised in case of error only. > + * > + * @return > + * 0 on success, error code otherwise. > + */ > +int rte_flow_classifier_query(struct rte_flow_classifier *cls, > + struct rte_flow_classify_rule *rule, > + struct rte_flow_classify_stats *stats, > + struct rte_flow_error *error); > + > +#ifdef __cplusplus > +} > +#endif > + > +#endif /* _RTE_FLOW_CLASSIFY_H_ */ There are doxygen rendering issues in this document. Please cross check the header file with "make doc-api-html" output. ^ permalink raw reply [flat|nested] 145+ messages in thread
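The run-then-query sequencing asked about above can be modelled in miniature: "run" performs the lookup for a burst and caches which rule each packet matched, and "query" then counts the hits for one rule over that cached burst. The types and functions below are simplified stand-ins for rte_flow_classifier_run()/rte_flow_classifier_query(), not DPDK code:

```c
#include <stdint.h>

#define MODEL_BURST 4

struct model_classifier {
	uint32_t matched_rule_ids[MODEL_BURST];	/* filled by "run" */
	int nb_pkts;
};

/* "run": record, per packet in the burst, which rule it matched. */
void model_run(struct model_classifier *cls, const uint32_t rule_ids[], int nb)
{
	for (int i = 0; i < nb; i++)
		cls->matched_rule_ids[i] = rule_ids[i];
	cls->nb_pkts = nb;
}

/* "query": count packets of the last burst that hit the given rule.
 * Only meaningful when called after model_run() on the same burst. */
uint64_t model_query(const struct model_classifier *cls, uint32_t rule_id)
{
	uint64_t count = 0;

	for (int i = 0; i < cls->nb_pkts; i++)
		if (cls->matched_rule_ids[i] == rule_id)
			count++;
	return count;
}
```

Because "query" reads only the state cached by the last "run", the two calls are tied to the same burst, which is why the reviewer asks whether they must be invoked consecutively.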
* Re: [dpdk-dev] [PATCH v8 1/4] librte_flow_classify: add flow classify library 2017-10-19 14:22 ` Singh, Jasvinder @ 2017-10-20 16:59 ` Iremonger, Bernard 2017-10-21 12:07 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-20 16:59 UTC (permalink / raw) To: Singh, Jasvinder, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil Cc: Iremonger, Bernard Hi Jasvinder, Thanks for the review. > -----Original Message----- > From: Singh, Jasvinder > Sent: Thursday, October 19, 2017 3:22 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com>; dev@dpdk.org; > Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > Subject: RE: [PATCH v8 1/4] librte_flow_classify: add flow classify library > > > <snip> > > > + > > +struct acl_keys { > > + struct rte_table_acl_rule_add_params key_add; /**< add key */ > > + struct rte_table_acl_rule_delete_params key_del; /**< delete > > key */ > > +}; > > + > > +struct rte_flow_classify_rule { > > + uint32_t id; /**< unique ID of classify rule */ > > + enum rte_flow_classify_rule_type rule_type; /**< classify rule type > > */ > > + struct rte_flow_action action; /**< action when match found */ > > + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; /**< ipv4 5tuple */ > > + union { > > + struct acl_keys key; > > + } u; > > + int key_found; /**< rule key found in table */ > > + void *entry; /**< pointer to buffer to hold rule meta data*/ > > + void *entry_ptr; /**< handle to the table entry for rule meta data*/ > > +}; > > In my opnion, the above struct should have the provision to accommodate > other types of rules, not only ipv4_5tuples. > Making this structure modular will help in extending it for other rule types in > the future. 
I will refactor by adding struct classify_rules{} to struct rte_flow_classify_rule{}:

struct classify_rules {
	enum rte_flow_classify_rule_type type;
	union {
		struct rte_flow_classify_ipv4_5tuple ipv4_5tuple;
	} u;
};

struct rte_flow_classify_rule {
	uint32_t id; /* unique ID of classify rule */
	struct rte_flow_action action; /* action when match found */
	struct classify_rules rules; /* union of rules */
	union {
		struct acl_keys key;
	} u;
	int key_found; /* rule key found in table */
	void *entry; /* pointer to buffer to hold rule meta data */
	void *entry_ptr; /* handle to the table entry for rule meta data */
};

> > +int > > +rte_flow_classify_validate( > > + const struct rte_flow_attr *attr, > > + const struct rte_flow_item pattern[], > > + const struct rte_flow_action actions[], > > + struct rte_flow_error *error) > > +{ > > + struct rte_flow_item *items; > > + parse_filter_t parse_filter; > > + uint32_t item_num = 0; > > + uint32_t i = 0; > > + int ret; > > + > > + if (!error) > > + return -EINVAL; > > + > > + if (!pattern) { > > + rte_flow_error_set(error, EINVAL, > > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > > + NULL, "NULL pattern."); > > + return -EINVAL; > > + } > > + > > + if (!actions) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > > + NULL, "NULL action."); > > + return -EINVAL; > > + } > > + > > + if (!attr) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ATTR, > > + NULL, "NULL attribute."); > > + return -EINVAL; > > + } > > + > > + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); > > + > > + /* Get the non-void item number of pattern */ > > + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { > > + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) > > + item_num++; > > + i++; > > + } > > + item_num++; > > + > > + items = malloc(item_num * sizeof(struct rte_flow_item)); > > + if (!items) { > > + rte_flow_error_set(error, ENOMEM, > > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > > + NULL, "No memory for pattern items."); 
> > + return -ENOMEM; > > + } > > + > > + memset(items, 0, item_num * sizeof(struct rte_flow_item)); > > + classify_pattern_skip_void_item(items, pattern); > > + > > + parse_filter = classify_find_parse_filter_func(items); > > + if (!parse_filter) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ITEM, > > + pattern, "Unsupported pattern"); > > + free(items); > > + return -EINVAL; > > + } > > + > > + ret = parse_filter(attr, items, actions, &ntuple_filter, error); > > + free(items); > > + return ret; > > +} > > This function mainly parses the flow pattern, actions etc and fill the entries in > internal ntuple_filter.It is invoked only in flow_entry_add(). Is there any > reason to make this function available as public API? This function does not need to be a public API. The flow_classify API's started out mirroring the flow API's but this is no longer the case. Probably better to make it internal and it could be made public later if needed. > > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > > +#define uint32_t_to_char(ip, a, b, c, d) do {\ > > + *a = (unsigned char)(ip >> 24 & 0xff);\ > > + *b = (unsigned char)(ip >> 16 & 0xff);\ > > + *c = (unsigned char)(ip >> 8 & 0xff);\ > > + *d = (unsigned char)(ip & 0xff);\ > > + } while (0) > > + > > +static inline void > > +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) { > > + unsigned char a, b, c, d; > > + > > + printf("ipv4_key_add: 0x%02hhx/0x%hhx ", > > + key->field_value[PROTO_FIELD_IPV4].value.u8, > > + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); > > + > > + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, > > + &a, &b, &c, &d); > > + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > > + key- > > >field_value[SRC_FIELD_IPV4].mask_range.u32); > > + > > + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, > > + &a, &b, &c, &d); > > + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > > + key- > > >field_value[DST_FIELD_IPV4].mask_range.u32); > > + > > + printf("%hu : 
0x%x %hu : 0x%x", > > + key->field_value[SRCP_FIELD_IPV4].value.u16, > > + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, > > + key->field_value[DSTP_FIELD_IPV4].value.u16, > > + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); > > + > > + printf(" priority: 0x%x\n", key->priority); } > > The above function is specific to printing acl table keys. How about making > this function little generic by passing the parameters to distinguish the rule, > table type, etc. and do the printing? > > Same comments for the print_ipv4_key_delete(). > This is debug code, could it be left as is until another table type is added? I will rename to include acl in the function names. > > > +static inline void > > +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) { > > + unsigned char a, b, c, d; > > + > > + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", > > + key->field_value[PROTO_FIELD_IPV4].value.u8, > > + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); > > + > > + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, > > + &a, &b, &c, &d); > > + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > > + key- > > >field_value[SRC_FIELD_IPV4].mask_range.u32); > > + > > + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, > > + &a, &b, &c, &d); > > + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > > + key- > > >field_value[DST_FIELD_IPV4].mask_range.u32); > > + > > + printf("%hu : 0x%x %hu : 0x%x\n", > > + key->field_value[SRCP_FIELD_IPV4].value.u16, > > + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, > > + key->field_value[DSTP_FIELD_IPV4].value.u16, > > + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); > > +} > > +#endif > > + > > +static int > > +rte_flow_classifier_check_params(struct rte_flow_classifier_params > > *params) > > +{ > > + if (params == NULL) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: Incorrect value for parameter params\n", > > __func__); > > + return -EINVAL; > > + } > > + > > + /* name */ > > + if (params->name == NULL) { > 
> + RTE_LOG(ERR, CLASSIFY, > > + "%s: Incorrect value for parameter name\n", > > __func__); > > + return -EINVAL; > > + } > > + > > + /* socket */ > > + if ((params->socket_id < 0) || > > + (params->socket_id >= RTE_MAX_NUMA_NODES)) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: Incorrect value for parameter socket_id\n", > > + __func__); > > + return -EINVAL; > > + } > > + > > + return 0; > > +} > > + > > +struct rte_flow_classifier * > > +rte_flow_classifier_create(struct rte_flow_classifier_params *params) > > +{ > > + struct rte_flow_classifier *cls; > > + int ret; > > + > > + /* Check input parameters */ > > + ret = rte_flow_classifier_check_params(params); > > + if (ret != 0) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: flow classifier params check failed (%d)\n", > > + __func__, ret); > > + return NULL; > > + } > > + > > + /* Allocate memory for the flow classifier */ > > + cls = rte_zmalloc_socket("FLOW_CLASSIFIER", > > + sizeof(struct rte_flow_classifier), > > + RTE_CACHE_LINE_SIZE, params->socket_id); > > + > > + if (cls == NULL) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: flow classifier memory allocation failed\n", > > + __func__); > > + return NULL; > > + } > > + > > + /* Save input parameters */ > > + snprintf(cls->name, RTE_FLOW_CLASSIFIER_MAX_NAME_SZ, "%s", > > + params->name); > > + cls->socket_id = params->socket_id; > > + cls->type = params->type; > > + > > + /* Initialize flow classifier internal data structure */ > > + cls->num_tables = 0; > > + > > + return cls; > > +} > > + > > +static void > > +rte_flow_classify_table_free(struct rte_table *table) { > > + if (table->ops.f_free != NULL) > > + table->ops.f_free(table->h_table); > > + > > + rte_free(table->default_entry); > > +} > > This is internal function. There is an API for creating a table for classifier > instance but not for destroying the table. What if application requires > destroying the specific table of the classifier but want to keep the classifier > instance? 
Yes, there should probably be an API to delete a table. I will add an rte_flow_classify_table_delete() API. > > +int > > +rte_flow_classifier_free(struct rte_flow_classifier *cls) { > > + uint32_t i; > > + > > + /* Check input parameters */ > > + if (cls == NULL) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: rte_flow_classifier parameter is NULL\n", > > + __func__); > > + return -EINVAL; > > + } > > + > > + /* Free tables */ > > + for (i = 0; i < cls->num_tables; i++) { > > + struct rte_table *table = &cls->tables[i]; > > + > > + rte_flow_classify_table_free(table); > > + } > > + > > + /* Free flow classifier memory */ > > + rte_free(cls); > > + > > + return 0; > > +} > > + > > +static int > > +rte_table_check_params(struct rte_flow_classifier *cls, > > + struct rte_flow_classify_table_params *params, > > + uint32_t *table_id) > > +{ > > + if (cls == NULL) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: flow classifier parameter is NULL\n", > > + __func__); > > + return -EINVAL; > > + } > > + if (params == NULL) { > > + RTE_LOG(ERR, CLASSIFY, "%s: params parameter is NULL\n", > > + __func__); > > + return -EINVAL; > > + } > > + if (table_id == NULL) { > > + RTE_LOG(ERR, CLASSIFY, "%s: table_id parameter is NULL\n", > > + __func__); > > + return -EINVAL; > > + } > > + > > + /* ops */ > > + if (params->ops == NULL) { > > + RTE_LOG(ERR, CLASSIFY, "%s: params->ops is NULL\n", > > + __func__); > > + return -EINVAL; > > + } > > + > > + if (params->ops->f_create == NULL) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: f_create function pointer is NULL\n", __func__); > > + return -EINVAL; > > + } > > + > > + if (params->ops->f_lookup == NULL) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: f_lookup function pointer is NULL\n", > > __func__); > > + return -EINVAL; > > + } > > + > > + /* De we have room for one more table? 
*/ > > + if (cls->num_tables == RTE_FLOW_CLASSIFY_TABLE_MAX) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: Incorrect value for num_tables parameter\n", > > + __func__); > > + return -EINVAL; > > + } > > + > > + return 0; > > +} > > + > > +int > > +rte_flow_classify_table_create(struct rte_flow_classifier *cls, > > + struct rte_flow_classify_table_params *params, > > + uint32_t *table_id) > > +{ > > + struct rte_table *table; > > + struct rte_flow_classify_table_entry *default_entry; > > + void *h_table; > > + uint32_t entry_size, id; > > + int ret; > > + > > + /* Check input arguments */ > > + ret = rte_table_check_params(cls, params, table_id); > > + if (ret != 0) > > + return ret; > > + > > + id = cls->num_tables; > > + table = &cls->tables[id]; > > + > > + /* Allocate space for the default table entry */ > > + entry_size = sizeof(struct rte_flow_classify_table_entry) + > > + params->table_metadata_size; > > + default_entry = > > + (struct rte_flow_classify_table_entry *) rte_zmalloc_socket( > > + "Flow Classify default entry", entry_size, > > + RTE_CACHE_LINE_SIZE, cls->socket_id); > > + if (default_entry == NULL) { > > + RTE_LOG(ERR, CLASSIFY, > > + "%s: Failed to allocate default entry\n", __func__); > > + return -EINVAL; > > + } > > what is the purpose of default_entry as I don't see its usage anywhere in the > library? This came from the ip_pipeline code in earlier discussions, it is not used at present. I will remove it. 
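The entry_size computation discussed above reserves room for user metadata behind each fixed table-entry header (entry_size = sizeof(struct rte_flow_classify_table_entry) + table_metadata_size). A standalone sketch of that trailing-metadata layout follows; the struct and helper names are illustrative stand-ins, not the library's code, and where the patch uses a zero-length `meta_data[0]` array this sketch uses the equivalent C99 flexible array member:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative mirror of a table entry: a fixed header followed by a
 * caller-sized metadata area (C99 flexible array member). */
struct entry {
    uint32_t rule_id;
    uint8_t meta_data[];
};

/* Allocate one entry with room for metadata_size trailing bytes,
 * mirroring entry_size = sizeof(struct ...) + table_metadata_size. */
static struct entry *entry_alloc(uint32_t rule_id, size_t metadata_size)
{
    struct entry *e = calloc(1, sizeof(*e) + metadata_size);
    if (e != NULL)
        e->rule_id = rule_id;
    return e;
}
```

One allocation holds both header and metadata, so the table can copy or free entries as a single block.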
> > + /* Create the table */ > > + h_table = params->ops->f_create(params->arg_create, cls- > > >socket_id, > > + entry_size); > > + if (h_table == NULL) { > > + rte_free(default_entry); > > + RTE_LOG(ERR, CLASSIFY, "%s: Table creation failed\n", > > __func__); > > + return -EINVAL; > > + } > > + > > + /* Commit current table to the classifier */ > > + cls->num_tables++; > > + *table_id = id; > > + > > + /* Save input parameters */ > > + memcpy(&table->ops, params->ops, sizeof(struct rte_table_ops)); > > + > > + table->entry_size = entry_size; > > + table->default_entry = default_entry; > > + > > + /* Initialize table internal data structure */ > > + table->h_table = h_table; > > + > > + return 0; > > +} > > + > > +static struct rte_flow_classify_rule * > > +allocate_ipv4_5tuple_rule(void) > > +{ > > + struct rte_flow_classify_rule *rule; > > + > > + rule = malloc(sizeof(struct rte_flow_classify_rule)); > > + if (!rule) > > + return rule; > > + > > + memset(rule, 0, sizeof(struct rte_flow_classify_rule)); > > + rule->id = unique_id++; > > + rule->rule_type = RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE; > > + > > + memcpy(&rule->action, classify_get_flow_action(), > > + sizeof(struct rte_flow_action)); > > + > > + /* key add values */ > > + rule->u.key.key_add.priority = ntuple_filter.priority; > > + rule- > > >u.key.key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = > > + ntuple_filter.proto_mask; > > + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].value.u8 = > > + ntuple_filter.proto; > > + rule->ipv4_5tuple.proto = ntuple_filter.proto; > > + rule->ipv4_5tuple.proto_mask = ntuple_filter.proto_mask; > > + > > + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 > > = > > + ntuple_filter.src_ip_mask; > > + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].value.u32 = > > + ntuple_filter.src_ip; > > + rule->ipv4_5tuple.src_ip_mask = ntuple_filter.src_ip_mask; > > + rule->ipv4_5tuple.src_ip = ntuple_filter.src_ip; > > + > > + 
rule->u.key.key_add.field_value[DST_FIELD_IPV4].mask_range.u32 > > = > > + ntuple_filter.dst_ip_mask; > > + rule->u.key.key_add.field_value[DST_FIELD_IPV4].value.u32 = > > + ntuple_filter.dst_ip; > > + rule->ipv4_5tuple.dst_ip_mask = ntuple_filter.dst_ip_mask; > > + rule->ipv4_5tuple.dst_ip = ntuple_filter.dst_ip; > > + > > + rule- > > >u.key.key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = > > + ntuple_filter.src_port_mask; > > + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].value.u16 = > > + ntuple_filter.src_port; > > + rule->ipv4_5tuple.src_port_mask = ntuple_filter.src_port_mask; > > + rule->ipv4_5tuple.src_port = ntuple_filter.src_port; > > + > > + rule- > > >u.key.key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = > > + ntuple_filter.dst_port_mask; > > + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].value.u16 = > > + ntuple_filter.dst_port; > > + rule->ipv4_5tuple.dst_port_mask = ntuple_filter.dst_port_mask; > > + rule->ipv4_5tuple.dst_port = ntuple_filter.dst_port; > > + > > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > > + print_ipv4_key_add(&rule->u.key.key_add); > > +#endif > > + > > + /* key delete values */ > > + memcpy(&rule->u.key.key_del.field_value[PROTO_FIELD_IPV4], > > + &rule->u.key.key_add.field_value[PROTO_FIELD_IPV4], > > + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); > > + > > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > > + print_ipv4_key_delete(&rule->u.key.key_del); > > +#endif > > + return rule; > > +} > > + > > +struct rte_flow_classify_rule * > > +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, > > + uint32_t table_id, > > + const struct rte_flow_attr *attr, > > + const struct rte_flow_item pattern[], > > + const struct rte_flow_action actions[], > > + struct rte_flow_error *error) > > +{ > > + struct rte_flow_classify_rule *rule; > > + struct rte_flow_classify_table_entry *table_entry; > > + int ret; > > + > > + if (!error) > > + return NULL; > > + > > + if (!cls) { > > + rte_flow_error_set(error, EINVAL, > > + 
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > + NULL, "NULL classifier."); > > + return NULL; > > + } > > + > > + if (table_id >= cls->num_tables) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > + NULL, "invalid table_id."); > > + return NULL; > > + } > > + > > + if (!pattern) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ITEM_NUM, > > + NULL, "NULL pattern."); > > + return NULL; > > + } > > + > > + if (!actions) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > > + NULL, "NULL action."); > > + return NULL; > > + } > > + > > + if (!attr) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ATTR, > > + NULL, "NULL attribute."); > > + return NULL; > > + } > > + > > + /* parse attr, pattern and actions */ > > + ret = rte_flow_classify_validate(attr, pattern, actions, error); > > + if (ret < 0) > > + return NULL; > > + > > + switch (cls->type) { > > + case RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL: > > + rule = allocate_ipv4_5tuple_rule(); > > + if (!rule) > > + return NULL; > > + break; > > + default: > > + return NULL; > > + } > > + > > + rule->entry = malloc(sizeof(struct rte_flow_classify_table_entry)); > > + if (!rule->entry) { > > + free(rule); > > + rule = NULL; > > + return NULL; > > + } > > + > > + table_entry = rule->entry; > > + table_entry->rule_id = rule->id; > > + > > + ret = cls->tables[table_id].ops.f_add( > > + cls->tables[table_id].h_table, > > + &rule->u.key.key_add, > > + rule->entry, > > + &rule->key_found, > > + &rule->entry_ptr); > > + if (ret) { > > + free(rule->entry); > > + free(rule); > > + rule = NULL; > > + return NULL; > > + } > > + return rule; > > +} > > It is not clear if the pattern to be added already exists in the table? how this > information will be propagated to the application? The key found flag will be set if the key is already present. I will add a key_found parameter to the API to return the key found data. 
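The key_found flag mentioned in the reply above reports whether an add operation hit an already-present key. A minimal standalone sketch of that out-parameter convention follows; the toy table, types, and names are hypothetical illustrations, not the DPDK API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define TBL_MAX 8

/* Toy key/value table used only to illustrate the convention. */
struct toy_table {
    int keys[TBL_MAX];
    int vals[TBL_MAX];
    size_t n;
};

/* Add (or update) a key; *key_found reports whether the key was
 * already present, mirroring the proposed key_found out-parameter. */
static int toy_table_add(struct toy_table *t, int key, int val,
                         int *key_found)
{
    for (size_t i = 0; i < t->n; i++) {
        if (t->keys[i] == key) {
            t->vals[i] = val;   /* key exists: update in place */
            *key_found = 1;
            return 0;
        }
    }
    if (t->n == TBL_MAX)
        return -1;              /* table full */
    t->keys[t->n] = key;
    t->vals[t->n] = val;
    t->n++;
    *key_found = 0;
    return 0;
}
```

The caller distinguishes "new rule inserted" from "existing rule updated" without a second lookup, which is the information the reviewer asked to propagate to the application.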
> > +int > > +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, > > + uint32_t table_id, > > + struct rte_flow_classify_rule *rule, > > + struct rte_flow_error *error) > > +{ > > + int ret = -EINVAL; > > + > > + if (!error) > > + return ret; > > + > > + if (!cls) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > + NULL, "NULL classifier."); > > + return ret; > > + } > > + > > + if (table_id >= cls->num_tables) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > + NULL, "invalid table_id."); > > + return ret; > > + } > > + > > + if (!rule) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > + NULL, "NULL rule."); > > + return ret; > > + } > > + > > + ret = cls->tables[table_id].ops.f_delete( > > + cls->tables[table_id].h_table, > > + &rule->u.key.key_del, > > + &rule->key_found, > > + &rule->entry); > > Please introduce check for f_delete, shouldn't be NULL. I will add a check that f_delete is not NULL. 
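The f_delete check requested above follows the usual guard for optional ops callbacks: verify the function pointer before dispatching through it. A standalone sketch, with hypothetical types and names rather than the rte_table_ops definitions:

```c
#include <assert.h>
#include <stddef.h>

struct toy_ops {
    int (*f_delete)(void *h, int key);  /* may legitimately be NULL */
};

/* Fail with an EINVAL-style code when the callback is missing,
 * instead of dereferencing a NULL function pointer. */
static int toy_entry_delete(const struct toy_ops *ops, void *h, int key)
{
    if (ops == NULL || ops->f_delete == NULL)
        return -22;             /* -EINVAL */
    return ops->f_delete(h, key);
}

/* A trivial callback standing in for a real table's delete op. */
static int ok_delete(void *h, int key)
{
    (void)h;
    (void)key;
    return 0;
}
```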
> > + > > +int > > +rte_flow_classifier_run(struct rte_flow_classifier *cls, > > + uint32_t table_id, > > + struct rte_mbuf **pkts, > > + const uint16_t nb_pkts, > > + struct rte_flow_error *error) > > +{ > > + int ret = -EINVAL; > > + uint64_t pkts_mask; > > + uint64_t lookup_hit_mask; > > + > > + if (!error) > > + return ret; > > + > > + if (!cls || !pkts || nb_pkts == 0) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > + NULL, "invalid input"); > > + return ret; > > + } > > + > > + if (table_id >= cls->num_tables) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > + NULL, "invalid table_id."); > > + return ret; > > + } > > + > > + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); > > + ret = cls->tables[table_id].ops.f_lookup( > > + cls->tables[table_id].h_table, > > + pkts, pkts_mask, &lookup_hit_mask, > > + (void **)cls->entries); > > + > > + if (!ret && lookup_hit_mask) > > + cls->nb_pkts = nb_pkts; > > + else > > + cls->nb_pkts = 0; > > + > > + return ret; > > +} > > + > > +static int > > +action_apply(struct rte_flow_classifier *cls, > > + struct rte_flow_classify_rule *rule, > > + struct rte_flow_classify_stats *stats) { > > + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; > > + uint64_t count = 0; > > + int i; > > + int ret = -ENODATA; > > + > > + switch (rule->action.type) { > > + case RTE_FLOW_ACTION_TYPE_COUNT: > > + for (i = 0; i < cls->nb_pkts; i++) { > > + if (rule->id == cls->entries[i]->rule_id) > > + count++; > > + } > > + if (count) { > > + ret = 0; > > + ntuple_stats = > > + (struct rte_flow_classify_ipv4_5tuple_stats > > *) > > + stats->stats; > > + ntuple_stats->counter1 = count; > > + ntuple_stats->ipv4_5tuple = rule->ipv4_5tuple; > > + } > > + break; > > + default: > > + ret = -ENOTSUP; > > + break; > > + } > > + > > + return ret; > > +} > > + > > +int > > +rte_flow_classifier_query(struct rte_flow_classifier *cls, > > + struct rte_flow_classify_rule *rule, > > + 
struct rte_flow_classify_stats *stats, > > + struct rte_flow_error *error) > > +{ > > + int ret = -EINVAL; > > + > > + if (!error) > > + return ret; > > + > > + if (!cls || !rule || !stats) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > + NULL, "invalid input"); > > + return ret; > > + } > > + > > + ret = action_apply(cls, rule, stats); > > + return ret; > > +} > > The rte_flow_classify_run and rte_flow_classify_query API should be > invoked consecutively in the application, true? Yes, they should be invoked consecutively. I will merge the rte_flow_classify_run API with the rte_flow_classify_query API and drop the rte_flow_classif_run API. > > diff --git a/lib/librte_flow_classify/rte_flow_classify.h > > b/lib/librte_flow_classify/rte_flow_classify.h > > new file mode 100644 > > index 0000000..9bd6cf4 > > --- /dev/null > > +++ b/lib/librte_flow_classify/rte_flow_classify.h > > @@ -0,0 +1,321 @@ > > +/*- > > + * BSD LICENSE > > + * > > + * Copyright(c) 2017 Intel Corporation. All rights reserved. > > + * All rights reserved. > > + * > > + * Redistribution and use in source and binary forms, with or without > > + * modification, are permitted provided that the following conditions > > + * are met: > > + * > > + * * Redistributions of source code must retain the above copyright > > + * notice, this list of conditions and the following disclaimer. > > + * * Redistributions in binary form must reproduce the above copyright > > + * notice, this list of conditions and the following disclaimer in > > + * the documentation and/or other materials provided with the > > + * distribution. > > + * * Neither the name of Intel Corporation nor the names of its > > + * contributors may be used to endorse or promote products derived > > + * from this software without specific prior written permission. 
> > + * > > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND > > CONTRIBUTORS > > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT > > NOT > > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND > > FITNESS FOR > > + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE > > COPYRIGHT > > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, > > INCIDENTAL, > > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, > BUT > > NOT > > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; > LOSS > > OF USE, > > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED > > AND ON ANY > > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR > > TORT > > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT > OF > > THE USE > > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH > > DAMAGE. > > + */ > > + > > +#ifndef _RTE_FLOW_CLASSIFY_H_ > > +#define _RTE_FLOW_CLASSIFY_H_ > > + > > +/** > > + * @file > > + * > > + * RTE Flow Classify Library > > + * > > + * This library provides flow record information with some measured > > properties. > > + * > > + * Application should define the flow and measurement criteria > > + (action) for > > it. > > + * > > + * Library doesn't maintain any flow records itself, instead flow > > + information > > is > > + * returned to upper layer only for given packets. > > + * > > + * It is application's responsibility to call > > + rte_flow_classify_query() > > + * for group of packets, just after receiving them or before > > + transmitting > > them. > > + * Application should provide the flow type interested in, > > + measurement to > > apply > > + * to that flow in rte_flow_classify_create() API, and should provide > > + * rte_flow_classify object and storage to put results in > > + * rte_flow_classify_query() API. 
> > + * > > + * Usage: > > + * - application calls rte_flow_classify_create() to create a > rte_flow_classify > > + * object. > > + * - application calls rte_flow_classify_query() in a polling manner, > > + * preferably after rte_eth_rx_burst(). This will cause the library to > > + * convert packet information to flow information with some > > measurements. > > + * - rte_flow_classify object can be destroyed when they are no more > > needed > > + * via rte_flow_classify_destroy() > > + */ > > + > > +#include <rte_ethdev.h> > > +#include <rte_ether.h> > > +#include <rte_flow.h> > > +#include <rte_acl.h> > > +#include <rte_table_acl.h> > > + > > +#ifdef __cplusplus > > +extern "C" { > > +#endif > > + > > + > > +#define RTE_FLOW_CLASSIFY_TABLE_MAX 1 > > + > > +/** Opaque data type for flow classifier */ struct > > +rte_flow_classifier; > > + > > +/** Opaque data type for flow classify rule */ struct > > +rte_flow_classify_rule; > > + > > +enum rte_flow_classify_rule_type { > > + RTE_FLOW_CLASSIFY_RULE_TYPE_NONE, /**< no type */ > > + RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE, /**< IPv4 5tuple > > type */ > > +}; > > + > > +enum rte_flow_classify_table_type { > > + RTE_FLOW_CLASSIFY_TABLE_TYPE_NONE, /**< no type */ > > + RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL, /**< ACL type */ }; > > + > > +/** Parameters for flow classifier creation */ struct > > +rte_flow_classifier_params { > > + /**< flow classifier name */ > > + const char *name; > > + > > + /**< CPU socket ID where memory for the flow classifier and its */ > > + /**< elements (tables) should be allocated */ > > + int socket_id; > > + > > + /**< Table type */ > > + enum rte_flow_classify_table_type type; > > + > > + /**< Table id */ > > + uint32_t table_id; > > +}; > > + > > +struct rte_flow_classify_table_params { > > + /**<Table operations (specific to each table type) */ > > + struct rte_table_ops *ops; > > + > > + /**< Opaque param to be passed to the table create operation */ > > + void *arg_create; > > + > > + /**< 
Memory size to be reserved per classifier object entry for */ > > + /**< storing meta data */ > > + uint32_t table_metadata_size; > > +}; > > + > > +struct rte_flow_classify_ipv4_5tuple { > > + uint32_t dst_ip; /**< Destination IP address in big endian. */ > > + uint32_t dst_ip_mask; /**< Mask of destination IP address. */ > > + uint32_t src_ip; /**< Source IP address in big endian. */ > > + uint32_t src_ip_mask; /**< Mask of destination IP address. */ > > + uint16_t dst_port; /**< Destination port in big endian. */ > > + uint16_t dst_port_mask; /**< Mask of destination port. */ > > + uint16_t src_port; /**< Source Port in big endian. */ > > + uint16_t src_port_mask; /**< Mask of source port. */ > > + uint8_t proto; /**< L4 protocol. */ > > + uint8_t proto_mask; /**< Mask of L4 protocol. */ > > +}; > > + > > +struct rte_flow_classify_table_entry { > > + /**< meta-data for classify rule */ > > + uint32_t rule_id; > > + > > + /**< Start of table entry area for user defined meta data */ > > + __extension__ uint8_t meta_data[0]; > > +}; > > The above structure is not used by any of the public API ? > > > + * Flow stats > > + * > > + * For the count action, stats can be returned by the query API. > > + * > > + * Storage for stats is provided by application. 
> > + */ > > +struct rte_flow_classify_stats { > > + void *stats; > > +}; > > + > > +struct rte_flow_classify_ipv4_5tuple_stats { > > + /**< count of packets that match IPv4 5tuple pattern */ > > + uint64_t counter1; > > + /**< IPv4 5tuple data */ > > + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; }; > > + > > +/** > > + * Flow classifier create > > + * > > + * @param params > > + * Parameters for flow classifier creation > > + * @return > > + * Handle to flow classifier instance on success or NULL otherwise > > + */ > > +struct rte_flow_classifier *rte_flow_classifier_create( > > + struct rte_flow_classifier_params *params); > > + > > +/** > > + * Flow classifier free > > + * > > + * @param cls > > + * Handle to flow classifier instance > > + * @return > > + * 0 on success, error code otherwise > > + */ > > +int rte_flow_classifier_free(struct rte_flow_classifier *cls); > > + > > +/** > > + * Flow classify table create > > + * > > + * @param cls > > + * Handle to flow classifier instance > > + * @param params > > + * Parameters for flow_classify table creation > > + * @param table_id > > + * Table ID. Valid only within the scope of table IDs of the current > > + * classifier. Only returned after a successful invocation. > > + * @return > > + * 0 on success, error code otherwise > > + */ > > +int rte_flow_classify_table_create(struct rte_flow_classifier *cls, > > + struct rte_flow_classify_table_params *params, > > + uint32_t *table_id); > > + > > +/** > > + * Validate a flow classify rule. > > + * > > + * @param[in] attr > > + * Flow rule attributes > > + * @param[in] pattern > > + * Pattern specification (list terminated by the END pattern item). > > + * @param[in] actions > > + * Associated actions (list terminated by the END pattern item). > > + * @param[out] error > > + * Perform verbose error reporting if not NULL. Structure > > + * initialised in case of error only. > > + * > > + * @return > > + * 0 on success, error code otherwise. 
> > + */ > > +int > > +rte_flow_classify_validate( > > + const struct rte_flow_attr *attr, > > + const struct rte_flow_item pattern[], > > + const struct rte_flow_action actions[], > > + struct rte_flow_error *error); > > + > > +/** > > + * Add a flow classify rule to the flow_classifer table. > > + * > > + * @param[in] cls > > + * Flow classifier handle > > + * @param[in] table_id > > + * id of table > > + * @param[in] attr > > + * Flow rule attributes > > + * @param[in] pattern > > + * Pattern specification (list terminated by the END pattern item). > > + * @param[in] actions > > + * Associated actions (list terminated by the END pattern item). > > + * @param[out] error > > + * Perform verbose error reporting if not NULL. Structure > > + * initialised in case of error only. > > + * @return > > + * A valid handle in case of success, NULL otherwise. > > + */ > > +struct rte_flow_classify_rule * > > +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, > > + uint32_t table_id, > > + const struct rte_flow_attr *attr, > > + const struct rte_flow_item pattern[], > > + const struct rte_flow_action actions[], > > + struct rte_flow_error *error); > > + > > +/** > > + * Delete a flow classify rule from the flow_classifer table. > > + * > > + * @param[in] cls > > + * Flow classifier handle > > + * @param[in] table_id > > + * id of table > > + * @param[in] rule > > + * Flow classify rule > > + * @param[out] error > > + * Perform verbose error reporting if not NULL. Structure > > + * initialised in case of error only. > > + * @return > > + * 0 on success, error code otherwise. > > + */ > > +int > > +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, > > + uint32_t table_id, > > + struct rte_flow_classify_rule *rule, > > + struct rte_flow_error *error); > > + > > +/** > > + * Run flow classifier for given packets. 
> > + * > > + * @param[in] cls > > + * Flow classifier handle > > + * @param[in] table_id > > + * id of table > > + * @param[in] pkts > > + * Pointer to packets to process > > + * @param[in] nb_pkts > > + * Number of packets to process > > + * @param[out] error > > + * Perform verbose error reporting if not NULL. Structure > > + * initialised in case of error only. > > + * > > + * @return > > + * 0 on success, error code otherwise. > > + */ > > + > > +int rte_flow_classifier_run(struct rte_flow_classifier *cls, > > + uint32_t table_id, > > + struct rte_mbuf **pkts, > > + const uint16_t nb_pkts, > > + struct rte_flow_error *error); > > + > > +/** > > + * Query flow classifier for given rule. > > + * > > + * @param[in] cls > > + * Flow classifier handle > > + * @param[in] rule > > + * Flow classify rule > > + * @param[in] stats > > + * Flow classify stats > > + * @param[out] error > > + * Perform verbose error reporting if not NULL. Structure > > + * initialised in case of error only. > > + * > > + * @return > > + * 0 on success, error code otherwise. > > + */ > > +int rte_flow_classifier_query(struct rte_flow_classifier *cls, > > + struct rte_flow_classify_rule *rule, > > + struct rte_flow_classify_stats *stats, > > + struct rte_flow_error *error); > > + > > +#ifdef __cplusplus > > +} > > +#endif > > + > > +#endif /* _RTE_FLOW_CLASSIFY_H_ */ > > > There are doxygen rendering issues in this document. Please cross check the > header file with "make doc-api-html" output. I will check the doxygen output. I will send a v9 patch set with the above changes. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
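As an aside on the lookup path reviewed in this message: rte_flow_classifier_run() builds pkts_mask with RTE_LEN2MASK(nb_pkts, uint64_t) and the table lookup returns a lookup_hit_mask with one bit per matched packet. A self-contained equivalent of those two mask operations follows; the helper names are illustrative, not DPDK's:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone equivalent of RTE_LEN2MASK(len, uint64_t): a mask with
 * the low `len` bits set, e.g. len=4 -> 0xf.  Valid for len in 1..64
 * (the run path rejects nb_pkts == 0 before computing the mask). */
static uint64_t len2mask64(unsigned int len)
{
    return ~0ULL >> (64 - len);
}

/* Count set bits in a hit mask, i.e. how many packets matched. */
static unsigned int mask_popcount(uint64_t m)
{
    unsigned int n = 0;
    while (m) {
        m &= m - 1;   /* clear lowest set bit */
        n++;
    }
    return n;
}
```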
* Re: [dpdk-dev] [PATCH v8 1/4] librte_flow_classify: add flow classify library 2017-10-20 16:59 ` Iremonger, Bernard @ 2017-10-21 12:07 ` Iremonger, Bernard 0 siblings, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-21 12:07 UTC (permalink / raw) To: Singh, Jasvinder, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil Cc: Iremonger, Bernard Hi Jasvinder <snip> > > > > > + > > > +struct acl_keys { > > > + struct rte_table_acl_rule_add_params key_add; /**< add key */ > > > + struct rte_table_acl_rule_delete_params key_del; /**< delete > > > key */ > > > +}; > > > + > > > +struct rte_flow_classify_rule { > > > + uint32_t id; /**< unique ID of classify rule */ > > > + enum rte_flow_classify_rule_type rule_type; /**< classify rule > > > +type > > > */ > > > + struct rte_flow_action action; /**< action when match found */ > > > + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; /**< ipv4 5tuple */ > > > + union { > > > + struct acl_keys key; > > > + } u; > > > + int key_found; /**< rule key found in table */ > > > + void *entry; /**< pointer to buffer to hold rule meta data*/ > > > + void *entry_ptr; /**< handle to the table entry for rule meta > > > +data*/ }; > > > > In my opinion, the above struct should have the provision to > > accommodate other types of rules, not only ipv4_5tuples. > > Making this structure modular will help in extending it for other rule > > types in the future. 
> > I will refactor by adding struct classify_rules{} to struct > rte_flow_classify_rule{}: > > struct classify_rules { > enum rte_flow_classify_rule_type type; > union { > struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; > } u; > }; > > struct rte_flow_classify_rule { > uint32_t id; /* unique ID of classify rule */ > struct rte_flow_action action; /* action when match found */ > struct classify_rules rules; /* union of rules */ > union { > struct acl_keys key; > } u; > int key_found; /* rule key found in table */ > void *entry; /* pointer to buffer to hold rule meta data */ > void *entry_ptr; /* handle to the table entry for rule meta data */ }; > > > > > +int > > > +rte_flow_classify_validate( > > > + const struct rte_flow_attr *attr, > > > + const struct rte_flow_item pattern[], > > > + const struct rte_flow_action actions[], > > > + struct rte_flow_error *error) > > > +{ > > > + struct rte_flow_item *items; > > > + parse_filter_t parse_filter; > > > + uint32_t item_num = 0; > > > + uint32_t i = 0; > > > + int ret; > > > + > > > + if (!error) > > > + return -EINVAL; > > > + > > > + if (!pattern) { > > > + rte_flow_error_set(error, EINVAL, > > > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > > > + NULL, "NULL pattern."); > > > + return -EINVAL; > > > + } > > > + > > > + if (!actions) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > > > + NULL, "NULL action."); > > > + return -EINVAL; > > > + } > > > + > > > + if (!attr) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_ATTR, > > > + NULL, "NULL attribute."); > > > + return -EINVAL; > > > + } > > > + > > > + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); > > > + > > > + /* Get the non-void item number of pattern */ > > > + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { > > > + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) > > > + item_num++; > > > + i++; > > > + } > > > + item_num++; > > > + > > > + items = malloc(item_num * sizeof(struct 
rte_flow_item)); > > > + if (!items) { > > > + rte_flow_error_set(error, ENOMEM, > > > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > > > + NULL, "No memory for pattern items."); > > > + return -ENOMEM; > > > + } > > > + > > > + memset(items, 0, item_num * sizeof(struct rte_flow_item)); > > > + classify_pattern_skip_void_item(items, pattern); > > > + > > > + parse_filter = classify_find_parse_filter_func(items); > > > + if (!parse_filter) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_ITEM, > > > + pattern, "Unsupported pattern"); > > > + free(items); > > > + return -EINVAL; > > > + } > > > + > > > + ret = parse_filter(attr, items, actions, &ntuple_filter, error); > > > + free(items); > > > + return ret; > > > +} > > > > This function mainly parses the flow pattern, actions, etc. and fills the > > entries in the internal ntuple_filter. It is invoked only in > > flow_entry_add(). Is there any reason to make this function available as > a public API? > > This function does not need to be a public API. > The flow_classify APIs started out mirroring the flow APIs but this is no > longer the case. > Probably better to make it internal and it could be made public later if > needed. 
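The pattern walk discussed above (count the non-VOID items up to the END terminator, then allocate a compacted copy of that size) can be sketched stand-alone; the enum below is a simplified stand-in for rte_flow_item_type, not the real definition from rte_flow.h:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified item types for illustration only. */
enum item_type { ITEM_END, ITEM_VOID, ITEM_ETH, ITEM_IPV4, ITEM_UDP };

struct item { enum item_type type; };

/* Count the non-VOID items in an END-terminated pattern, plus one slot
 * for the terminating END item, mirroring the sizing loop in
 * rte_flow_classify_validate(). */
static size_t count_pattern_items(const struct item *pattern)
{
    size_t item_num = 0;
    size_t i = 0;

    while (pattern[i].type != ITEM_END) {
        if (pattern[i].type != ITEM_VOID)
            item_num++;
        i++;
    }
    return item_num + 1; /* account for the END item */
}
```

The library then copies only the counted items into the fresh array (classify_pattern_skip_void_item), so downstream parsers never see VOID entries.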
> > > > > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > > > +#define uint32_t_to_char(ip, a, b, c, d) do {\ > > > + *a = (unsigned char)(ip >> 24 & 0xff);\ > > > + *b = (unsigned char)(ip >> 16 & 0xff);\ > > > + *c = (unsigned char)(ip >> 8 & 0xff);\ > > > + *d = (unsigned char)(ip & 0xff);\ > > > + } while (0) > > > + > > > +static inline void > > > +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) { > > > + unsigned char a, b, c, d; > > > + > > > + printf("ipv4_key_add: 0x%02hhx/0x%hhx ", > > > + key->field_value[PROTO_FIELD_IPV4].value.u8, > > > + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); > > > + > > > + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, > > > + &a, &b, &c, &d); > > > + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > > > + key- > > > >field_value[SRC_FIELD_IPV4].mask_range.u32); > > > + > > > + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, > > > + &a, &b, &c, &d); > > > + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > > > + key- > > > >field_value[DST_FIELD_IPV4].mask_range.u32); > > > + > > > + printf("%hu : 0x%x %hu : 0x%x", > > > + key->field_value[SRCP_FIELD_IPV4].value.u16, > > > + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, > > > + key->field_value[DSTP_FIELD_IPV4].value.u16, > > > + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); > > > + > > > + printf(" priority: 0x%x\n", key->priority); } > > > > The above function is specific to printing acl table keys. How about > > making this function little generic by passing the parameters to > > distinguish the rule, table type, etc. and do the printing? > > > > Same comments for the print_ipv4_key_delete(). > > > > This is debug code, could it be left as is until another table type is added? > I will rename to include acl in the function names. 
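The reviewer's suggestion, a single print entry point that dispatches on the table type instead of ACL-specific helpers, could look roughly like this; names are illustrative only, not proposed API:

```c
#include <stdio.h>

/* Hypothetical table-type enum mirroring rte_flow_classify_table_type. */
enum table_type { TABLE_TYPE_NONE, TABLE_TYPE_ACL };

/* ACL-specific key printer, stand-in for print_ipv4_key_add(). */
static void print_acl_key(const void *key)
{
    (void)key;
    printf("acl key\n");
}

/* Generic debug entry point: dispatch on table type so adding a new
 * table type only requires a new case, not new call sites. */
static int print_table_key(enum table_type type, const void *key)
{
    switch (type) {
    case TABLE_TYPE_ACL:
        print_acl_key(key);
        return 0;
    default:
        return -1; /* unsupported table type */
    }
}
```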
> > > > > > +static inline void > > > +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) > { > > > + unsigned char a, b, c, d; > > > + > > > + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", > > > + key->field_value[PROTO_FIELD_IPV4].value.u8, > > > + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); > > > + > > > + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, > > > + &a, &b, &c, &d); > > > + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > > > + key- > > > >field_value[SRC_FIELD_IPV4].mask_range.u32); > > > + > > > + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, > > > + &a, &b, &c, &d); > > > + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > > > + key- > > > >field_value[DST_FIELD_IPV4].mask_range.u32); > > > + > > > + printf("%hu : 0x%x %hu : 0x%x\n", > > > + key->field_value[SRCP_FIELD_IPV4].value.u16, > > > + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, > > > + key->field_value[DSTP_FIELD_IPV4].value.u16, > > > + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); > > > +} > > > +#endif > > > + > > > +static int > > > +rte_flow_classifier_check_params(struct rte_flow_classifier_params > > > *params) > > > +{ > > > + if (params == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: Incorrect value for parameter params\n", > > > __func__); > > > + return -EINVAL; > > > + } > > > + > > > + /* name */ > > > + if (params->name == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: Incorrect value for parameter name\n", > > > __func__); > > > + return -EINVAL; > > > + } > > > + > > > + /* socket */ > > > + if ((params->socket_id < 0) || > > > + (params->socket_id >= RTE_MAX_NUMA_NODES)) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: Incorrect value for parameter socket_id\n", > > > + __func__); > > > + return -EINVAL; > > > + } > > > + > > > + return 0; > > > +} > > > + > > > +struct rte_flow_classifier * > > > +rte_flow_classifier_create(struct rte_flow_classifier_params > > > +*params) { > > > + struct 
rte_flow_classifier *cls; > > > + int ret; > > > + > > > + /* Check input parameters */ > > > + ret = rte_flow_classifier_check_params(params); > > > + if (ret != 0) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: flow classifier params check failed (%d)\n", > > > + __func__, ret); > > > + return NULL; > > > + } > > > + > > > + /* Allocate memory for the flow classifier */ > > > + cls = rte_zmalloc_socket("FLOW_CLASSIFIER", > > > + sizeof(struct rte_flow_classifier), > > > + RTE_CACHE_LINE_SIZE, params->socket_id); > > > + > > > + if (cls == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: flow classifier memory allocation failed\n", > > > + __func__); > > > + return NULL; > > > + } > > > + > > > + /* Save input parameters */ > > > + snprintf(cls->name, RTE_FLOW_CLASSIFIER_MAX_NAME_SZ, "%s", > > > + params->name); > > > + cls->socket_id = params->socket_id; > > > + cls->type = params->type; > > > + > > > + /* Initialize flow classifier internal data structure */ > > > + cls->num_tables = 0; > > > + > > > + return cls; > > > +} > > > + > > > +static void > > > +rte_flow_classify_table_free(struct rte_table *table) { > > > + if (table->ops.f_free != NULL) > > > + table->ops.f_free(table->h_table); > > > + > > > + rte_free(table->default_entry); > > > +} > > > > This is an internal function. There is an API for creating a table for > > a classifier instance but not for destroying the table. What if > > the application requires destroying a specific table of the classifier > > but wants to keep the classifier instance? > > Yes, there should probably be an API to delete a table. > I will add an rte_flow_classify_table_delete() API. After further investigation, I will not add an rte_flow_classify_table_delete() API. The tables are stored in an array. Deleting a table will leave an empty entry in the array. The table_create API just adds tables to the array until the array is full; it does not handle empty entries in the array. 
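For illustration, if table deletion were ever supported, one common way to avoid empty entries in the array is to move the last table into the freed slot; the cost is that the moved table's table_id changes, which is likely why the thread settled on not supporting deletion. A hypothetical sketch with simplified stand-in types:

```c
#include <assert.h>
#include <stdint.h>

#define TABLE_MAX 8  /* stand-in for RTE_FLOW_CLASSIFY_TABLE_MAX */

struct table { int valid; uint32_t id; };

struct classifier {
    struct table tables[TABLE_MAX];
    uint32_t num_tables;
};

/* Remove tables[idx] by moving the last table into its slot, keeping the
 * array dense so table_create can keep appending at num_tables. Note that
 * this renumbers the moved table's index, invalidating its table_id. */
static int table_remove(struct classifier *cls, uint32_t idx)
{
    if (idx >= cls->num_tables)
        return -1;
    cls->tables[idx] = cls->tables[cls->num_tables - 1];
    cls->num_tables--;
    return 0;
}
```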
Note, the ip_pipeline code does not have a table_delete API either. The rte_flow_classifier_free() API frees all the tables in the classifier. > > > +int > > > +rte_flow_classifier_free(struct rte_flow_classifier *cls) { > > > + uint32_t i; > > > + > > > + /* Check input parameters */ > > > + if (cls == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: rte_flow_classifier parameter is NULL\n", > > > + __func__); > > > + return -EINVAL; > > > + } > > > + > > > + /* Free tables */ > > > + for (i = 0; i < cls->num_tables; i++) { > > > + struct rte_table *table = &cls->tables[i]; > > > + > > > + rte_flow_classify_table_free(table); > > > + } > > > + > > > + /* Free flow classifier memory */ > > > + rte_free(cls); > > > + > > > + return 0; > > > +} > > > + > > > +static int > > > +rte_table_check_params(struct rte_flow_classifier *cls, > > > + struct rte_flow_classify_table_params *params, > > > + uint32_t *table_id) > > > +{ > > > + if (cls == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: flow classifier parameter is NULL\n", > > > + __func__); > > > + return -EINVAL; > > > + } > > > + if (params == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, "%s: params parameter is NULL\n", > > > + __func__); > > > + return -EINVAL; > > > + } > > > + if (table_id == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, "%s: table_id parameter is NULL\n", > > > + __func__); > > > + return -EINVAL; > > > + } > > > + > > > + /* ops */ > > > + if (params->ops == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, "%s: params->ops is NULL\n", > > > + __func__); > > > + return -EINVAL; > > > + } > > > + > > > + if (params->ops->f_create == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: f_create function pointer is NULL\n", __func__); > > > + return -EINVAL; > > > + } > > > + > > > + if (params->ops->f_lookup == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: f_lookup function pointer is NULL\n", > > > __func__); > > > + return -EINVAL; > > > + } > > > + > > > + /* Do we have room for one more table? 
*/ > > > + if (cls->num_tables == RTE_FLOW_CLASSIFY_TABLE_MAX) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: Incorrect value for num_tables parameter\n", > > > + __func__); > > > + return -EINVAL; > > > + } > > > + > > > + return 0; > > > +} > > > + > > > +int > > > +rte_flow_classify_table_create(struct rte_flow_classifier *cls, > > > + struct rte_flow_classify_table_params *params, > > > + uint32_t *table_id) > > > +{ > > > + struct rte_table *table; > > > + struct rte_flow_classify_table_entry *default_entry; > > > + void *h_table; > > > + uint32_t entry_size, id; > > > + int ret; > > > + > > > + /* Check input arguments */ > > > + ret = rte_table_check_params(cls, params, table_id); > > > + if (ret != 0) > > > + return ret; > > > + > > > + id = cls->num_tables; > > > + table = &cls->tables[id]; > > > + > > > + /* Allocate space for the default table entry */ > > > + entry_size = sizeof(struct rte_flow_classify_table_entry) + > > > + params->table_metadata_size; > > > + default_entry = > > > + (struct rte_flow_classify_table_entry *) rte_zmalloc_socket( > > > + "Flow Classify default entry", entry_size, > > > + RTE_CACHE_LINE_SIZE, cls->socket_id); > > > + if (default_entry == NULL) { > > > + RTE_LOG(ERR, CLASSIFY, > > > + "%s: Failed to allocate default entry\n", __func__); > > > + return -EINVAL; > > > + } > > > > what is the purpose of default_entry as I don't see its usage anywhere > > in the library? > > This came from the ip_pipeline code in earlier discussions, it is not used at > present. > I will remove it. 
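The entry sizing used above (fixed header plus table_metadata_size bytes of user meta data) is the classic flexible-array-member pattern. A simplified stand-alone sketch, using standard C99 `meta_data[]` in place of the header's `__extension__ uint8_t meta_data[0]`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Mirrors how the library sizes a table entry: a fixed header followed by
 * per-table user meta data in a flexible array member. */
struct table_entry {
    uint32_t rule_id;
    uint8_t meta_data[];   /* user-defined meta data follows the header */
};

/* Allocate one zeroed entry with room for metadata_size bytes of meta
 * data, analogous to entry_size = sizeof(entry) + table_metadata_size. */
static struct table_entry *entry_alloc(uint32_t metadata_size)
{
    size_t entry_size = sizeof(struct table_entry) + metadata_size;

    return calloc(1, entry_size);
}
```

The real code allocates with rte_zmalloc_socket() so the entry lands on the classifier's NUMA node; calloc() here is just the portable stand-in.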
> > > > > + /* Create the table */ > > > + h_table = params->ops->f_create(params->arg_create, cls- > > > >socket_id, > > > + entry_size); > > > + if (h_table == NULL) { > > > + rte_free(default_entry); > > > + RTE_LOG(ERR, CLASSIFY, "%s: Table creation failed\n", > > > __func__); > > > + return -EINVAL; > > > + } > > > + > > > + /* Commit current table to the classifier */ > > > + cls->num_tables++; > > > + *table_id = id; > > > + > > > + /* Save input parameters */ > > > + memcpy(&table->ops, params->ops, sizeof(struct rte_table_ops)); > > > + > > > + table->entry_size = entry_size; > > > + table->default_entry = default_entry; > > > + > > > + /* Initialize table internal data structure */ > > > + table->h_table = h_table; > > > + > > > + return 0; > > > +} > > > + > > > +static struct rte_flow_classify_rule * > > > +allocate_ipv4_5tuple_rule(void) > > > +{ > > > + struct rte_flow_classify_rule *rule; > > > + > > > + rule = malloc(sizeof(struct rte_flow_classify_rule)); > > > + if (!rule) > > > + return rule; > > > + > > > + memset(rule, 0, sizeof(struct rte_flow_classify_rule)); > > > + rule->id = unique_id++; > > > + rule->rule_type = RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE; > > > + > > > + memcpy(&rule->action, classify_get_flow_action(), > > > + sizeof(struct rte_flow_action)); > > > + > > > + /* key add values */ > > > + rule->u.key.key_add.priority = ntuple_filter.priority; > > > + rule- > > > >u.key.key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = > > > + ntuple_filter.proto_mask; > > > + rule->u.key.key_add.field_value[PROTO_FIELD_IPV4].value.u8 = > > > + ntuple_filter.proto; > > > + rule->ipv4_5tuple.proto = ntuple_filter.proto; > > > + rule->ipv4_5tuple.proto_mask = ntuple_filter.proto_mask; > > > + > > > + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 > > > = > > > + ntuple_filter.src_ip_mask; > > > + rule->u.key.key_add.field_value[SRC_FIELD_IPV4].value.u32 = > > > + ntuple_filter.src_ip; > > > + rule->ipv4_5tuple.src_ip_mask = 
ntuple_filter.src_ip_mask; > > > + rule->ipv4_5tuple.src_ip = ntuple_filter.src_ip; > > > + > > > + rule->u.key.key_add.field_value[DST_FIELD_IPV4].mask_range.u32 > > > = > > > + ntuple_filter.dst_ip_mask; > > > + rule->u.key.key_add.field_value[DST_FIELD_IPV4].value.u32 = > > > + ntuple_filter.dst_ip; > > > + rule->ipv4_5tuple.dst_ip_mask = ntuple_filter.dst_ip_mask; > > > + rule->ipv4_5tuple.dst_ip = ntuple_filter.dst_ip; > > > + > > > + rule- > > > >u.key.key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = > > > + ntuple_filter.src_port_mask; > > > + rule->u.key.key_add.field_value[SRCP_FIELD_IPV4].value.u16 = > > > + ntuple_filter.src_port; > > > + rule->ipv4_5tuple.src_port_mask = ntuple_filter.src_port_mask; > > > + rule->ipv4_5tuple.src_port = ntuple_filter.src_port; > > > + > > > + rule- > > > >u.key.key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = > > > + ntuple_filter.dst_port_mask; > > > + rule->u.key.key_add.field_value[DSTP_FIELD_IPV4].value.u16 = > > > + ntuple_filter.dst_port; > > > + rule->ipv4_5tuple.dst_port_mask = ntuple_filter.dst_port_mask; > > > + rule->ipv4_5tuple.dst_port = ntuple_filter.dst_port; > > > + > > > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > > > + print_ipv4_key_add(&rule->u.key.key_add); > > > +#endif > > > + > > > + /* key delete values */ > > > + memcpy(&rule->u.key.key_del.field_value[PROTO_FIELD_IPV4], > > > + &rule->u.key.key_add.field_value[PROTO_FIELD_IPV4], > > > + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); > > > + > > > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > > > + print_ipv4_key_delete(&rule->u.key.key_del); > > > +#endif > > > + return rule; > > > +} > > > + > > > +struct rte_flow_classify_rule * > > > +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, > > > + uint32_t table_id, > > > + const struct rte_flow_attr *attr, > > > + const struct rte_flow_item pattern[], > > > + const struct rte_flow_action actions[], > > > + struct rte_flow_error *error) > > > +{ > > > + struct rte_flow_classify_rule 
*rule; > > > + struct rte_flow_classify_table_entry *table_entry; > > > + int ret; > > > + > > > + if (!error) > > > + return NULL; > > > + > > > + if (!cls) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > > + NULL, "NULL classifier."); > > > + return NULL; > > > + } > > > + > > > + if (table_id >= cls->num_tables) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > > + NULL, "invalid table_id."); > > > + return NULL; > > > + } > > > + > > > + if (!pattern) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_ITEM_NUM, > > > + NULL, "NULL pattern."); > > > + return NULL; > > > + } > > > + > > > + if (!actions) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > > > + NULL, "NULL action."); > > > + return NULL; > > > + } > > > + > > > + if (!attr) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_ATTR, > > > + NULL, "NULL attribute."); > > > + return NULL; > > > + } > > > + > > > + /* parse attr, pattern and actions */ > > > + ret = rte_flow_classify_validate(attr, pattern, actions, error); > > > + if (ret < 0) > > > + return NULL; > > > + > > > + switch (cls->type) { > > > + case RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL: > > > + rule = allocate_ipv4_5tuple_rule(); > > > + if (!rule) > > > + return NULL; > > > + break; > > > + default: > > > + return NULL; > > > + } > > > + > > > + rule->entry = malloc(sizeof(struct rte_flow_classify_table_entry)); > > > + if (!rule->entry) { > > > + free(rule); > > > + rule = NULL; > > > + return NULL; > > > + } > > > + > > > + table_entry = rule->entry; > > > + table_entry->rule_id = rule->id; > > > + > > > + ret = cls->tables[table_id].ops.f_add( > > > + cls->tables[table_id].h_table, > > > + &rule->u.key.key_add, > > > + rule->entry, > > > + &rule->key_found, > > > + &rule->entry_ptr); > > > + if (ret) { > > > + free(rule->entry); > > > + free(rule); > > > + rule = NULL; > > > + return 
NULL; > > > + } > > > + return rule; > > > +} > > > > It is not clear whether the pattern to be added already exists in the > > table. How will this information be propagated to the application? > > The key found flag will be set if the key is already present. > I will add a key_found parameter to the API to return the key found data. > > > > +int > > > +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, > > > + uint32_t table_id, > > > + struct rte_flow_classify_rule *rule, > > > + struct rte_flow_error *error) > > > +{ > > > + int ret = -EINVAL; > > > + > > > + if (!error) > > > + return ret; > > > + > > > + if (!cls) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > > + NULL, "NULL classifier."); > > > + return ret; > > > + } > > > + > > > + if (table_id >= cls->num_tables) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > > + NULL, "invalid table_id."); > > > + return ret; > > > + } > > > + > > > + if (!rule) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > > + NULL, "NULL rule."); > > > + return ret; > > > + } > > > + > > > + ret = cls->tables[table_id].ops.f_delete( > > > + cls->tables[table_id].h_table, > > > + &rule->u.key.key_del, > > > + &rule->key_found, > > > + &rule->entry); > > > > Please introduce a check for f_delete, it shouldn't be NULL. > > I will add a check that f_delete is not NULL. 
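The agreed f_delete check fits naturally into rte_table_check_params() alongside the existing f_create/f_lookup checks. A simplified stand-in sketch; the struct below mimics, but is not, the real rte_table_ops:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical, simplified ops vector modelled on rte_table_ops. */
struct table_ops {
    void *(*f_create)(void *arg, int socket_id, unsigned int entry_size);
    int (*f_add)(void *table, void *key, void *entry,
                 int *key_found, void **entry_ptr);
    int (*f_delete)(void *table, void *key, int *key_found, void *entry);
    int (*f_lookup)(void *table, void **pkts, unsigned long pkts_mask,
                    unsigned long *hit_mask, void **entries);
};

/* Validate every function pointer the classifier will call, including
 * f_delete as requested in the review. */
static int check_ops(const struct table_ops *ops)
{
    if (ops == NULL)
        return -EINVAL;
    if (ops->f_create == NULL || ops->f_add == NULL ||
        ops->f_delete == NULL || ops->f_lookup == NULL)
        return -EINVAL;
    return 0;
}
```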
> > > > > + > > > +int > > > +rte_flow_classifier_run(struct rte_flow_classifier *cls, > > > + uint32_t table_id, > > > + struct rte_mbuf **pkts, > > > + const uint16_t nb_pkts, > > > + struct rte_flow_error *error) > > > +{ > > > + int ret = -EINVAL; > > > + uint64_t pkts_mask; > > > + uint64_t lookup_hit_mask; > > > + > > > + if (!error) > > > + return ret; > > > + > > > + if (!cls || !pkts || nb_pkts == 0) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > > + NULL, "invalid input"); > > > + return ret; > > > + } > > > + > > > + if (table_id >= cls->num_tables) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > > + NULL, "invalid table_id."); > > > + return ret; > > > + } > > > + > > > + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); > > > + ret = cls->tables[table_id].ops.f_lookup( > > > + cls->tables[table_id].h_table, > > > + pkts, pkts_mask, &lookup_hit_mask, > > > + (void **)cls->entries); > > > + > > > + if (!ret && lookup_hit_mask) > > > + cls->nb_pkts = nb_pkts; > > > + else > > > + cls->nb_pkts = 0; > > > + > > > + return ret; > > > +} > > > + > > > +static int > > > +action_apply(struct rte_flow_classifier *cls, > > > + struct rte_flow_classify_rule *rule, > > > + struct rte_flow_classify_stats *stats) { > > > + struct rte_flow_classify_ipv4_5tuple_stats *ntuple_stats; > > > + uint64_t count = 0; > > > + int i; > > > + int ret = -ENODATA; > > > + > > > + switch (rule->action.type) { > > > + case RTE_FLOW_ACTION_TYPE_COUNT: > > > + for (i = 0; i < cls->nb_pkts; i++) { > > > + if (rule->id == cls->entries[i]->rule_id) > > > + count++; > > > + } > > > + if (count) { > > > + ret = 0; > > > + ntuple_stats = > > > + (struct rte_flow_classify_ipv4_5tuple_stats > > > *) > > > + stats->stats; > > > + ntuple_stats->counter1 = count; > > > + ntuple_stats->ipv4_5tuple = rule->ipv4_5tuple; > > > + } > > > + break; > > > + default: > > > + ret = -ENOTSUP; > > > + break; > > > + } > > > + > > > 
+ return ret; > > > +} > > > + > > > +int > > > +rte_flow_classifier_query(struct rte_flow_classifier *cls, > > > + struct rte_flow_classify_rule *rule, > > > + struct rte_flow_classify_stats *stats, > > > + struct rte_flow_error *error) > > > +{ > > > + int ret = -EINVAL; > > > + > > > + if (!error) > > > + return ret; > > > + > > > + if (!cls || !rule || !stats) { > > > + rte_flow_error_set(error, EINVAL, > > > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > > > + NULL, "invalid input"); > > > + return ret; > > > + } > > > + > > > + ret = action_apply(cls, rule, stats); > > > + return ret; > > > +} > > > > The rte_flow_classify_run and rte_flow_classify_query APIs should be > > invoked consecutively in the application, true? > > Yes, they should be invoked consecutively. > I will merge the rte_flow_classify_run API with the rte_flow_classify_query > API and drop the rte_flow_classify_run API. > > > > > diff --git a/lib/librte_flow_classify/rte_flow_classify.h > > > b/lib/librte_flow_classify/rte_flow_classify.h > > > new file mode 100644 > > > index 0000000..9bd6cf4 > > > --- /dev/null > > > +++ b/lib/librte_flow_classify/rte_flow_classify.h > > > @@ -0,0 +1,321 @@ > > > +/*- > > > + * BSD LICENSE > > > + * > > > + * Copyright(c) 2017 Intel Corporation. All rights reserved. > > > + * All rights reserved. > > > + * > > > + * Redistribution and use in source and binary forms, with or without > > > + * modification, are permitted provided that the following conditions > > > + * are met: > > > + * > > > + * * Redistributions of source code must retain the above copyright > > > + * notice, this list of conditions and the following disclaimer. > > > + * * Redistributions in binary form must reproduce the above > copyright > > > + * notice, this list of conditions and the following disclaimer in > > > + * the documentation and/or other materials provided with the > > > + * distribution. 
> > > + * * Neither the name of Intel Corporation nor the names of its > > > + * contributors may be used to endorse or promote products derived > > > + * from this software without specific prior written permission. > > > + * > > > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND > > > CONTRIBUTORS > > > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, > BUT > > > NOT > > > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND > > > FITNESS FOR > > > + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE > > > COPYRIGHT > > > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, > > > INCIDENTAL, > > > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, > > BUT > > > NOT > > > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; > > LOSS > > > OF USE, > > > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER > CAUSED > > > AND ON ANY > > > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR > > > TORT > > > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY > OUT > > OF > > > THE USE > > > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH > > > DAMAGE. > > > + */ > > > + > > > +#ifndef _RTE_FLOW_CLASSIFY_H_ > > > +#define _RTE_FLOW_CLASSIFY_H_ > > > + > > > +/** > > > + * @file > > > + * > > > + * RTE Flow Classify Library > > > + * > > > + * This library provides flow record information with some measured > > > properties. > > > + * > > > + * Application should define the flow and measurement criteria > > > + (action) for > > > it. > > > + * > > > + * Library doesn't maintain any flow records itself, instead flow > > > + information > > > is > > > + * returned to upper layer only for given packets. > > > + * > > > + * It is application's responsibility to call > > > + rte_flow_classify_query() > > > + * for group of packets, just after receiving them or before > > > + transmitting > > > them. 
> > > + * Application should provide the flow type interested in, > > > + measurement to > > > apply > > > + * to that flow in rte_flow_classify_create() API, and should > > > + provide > > > + * rte_flow_classify object and storage to put results in > > > + * rte_flow_classify_query() API. > > > + * > > > + * Usage: > > > + * - application calls rte_flow_classify_create() to create a > > rte_flow_classify > > > + * object. > > > + * - application calls rte_flow_classify_query() in a polling manner, > > > + * preferably after rte_eth_rx_burst(). This will cause the library to > > > + * convert packet information to flow information with some > > > measurements. > > > + * - rte_flow_classify object can be destroyed when they are no > > > + more > > > needed > > > + * via rte_flow_classify_destroy() > > > + */ > > > + > > > +#include <rte_ethdev.h> > > > +#include <rte_ether.h> > > > +#include <rte_flow.h> > > > +#include <rte_acl.h> > > > +#include <rte_table_acl.h> > > > + > > > +#ifdef __cplusplus > > > +extern "C" { > > > +#endif > > > + > > > + > > > +#define RTE_FLOW_CLASSIFY_TABLE_MAX 1 > > > + > > > +/** Opaque data type for flow classifier */ struct > > > +rte_flow_classifier; > > > + > > > +/** Opaque data type for flow classify rule */ struct > > > +rte_flow_classify_rule; > > > + > > > +enum rte_flow_classify_rule_type { > > > + RTE_FLOW_CLASSIFY_RULE_TYPE_NONE, /**< no type */ > > > + RTE_FLOW_CLASSIFY_RULE_TYPE_IPV4_5TUPLE, /**< IPv4 5tuple > > > type */ > > > +}; > > > + > > > +enum rte_flow_classify_table_type { > > > + RTE_FLOW_CLASSIFY_TABLE_TYPE_NONE, /**< no type */ > > > + RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL, /**< ACL type */ }; > > > + > > > +/** Parameters for flow classifier creation */ struct > > > +rte_flow_classifier_params { > > > + /**< flow classifier name */ > > > + const char *name; > > > + > > > + /**< CPU socket ID where memory for the flow classifier and its */ > > > + /**< elements (tables) should be allocated */ > > > + int 
socket_id; > > > + > > > + /**< Table type */ > > > + enum rte_flow_classify_table_type type; > > > + > > > + /**< Table id */ > > > + uint32_t table_id; > > > +}; > > > + > > > +struct rte_flow_classify_table_params { > > > + /**< Table operations (specific to each table type) */ > > > + struct rte_table_ops *ops; > > > + > > > + /**< Opaque param to be passed to the table create operation */ > > > + void *arg_create; > > > + > > > + /**< Memory size to be reserved per classifier object entry for */ > > > + /**< storing meta data */ > > > + uint32_t table_metadata_size; > > > +}; > > > + > > > +struct rte_flow_classify_ipv4_5tuple { > > > + uint32_t dst_ip; /**< Destination IP address in big endian. */ > > > + uint32_t dst_ip_mask; /**< Mask of destination IP address. */ > > > + uint32_t src_ip; /**< Source IP address in big endian. */ > > > + uint32_t src_ip_mask; /**< Mask of source IP address. */ > > > + uint16_t dst_port; /**< Destination port in big endian. */ > > > + uint16_t dst_port_mask; /**< Mask of destination port. */ > > > + uint16_t src_port; /**< Source Port in big endian. */ > > > + uint16_t src_port_mask; /**< Mask of source port. */ > > > + uint8_t proto; /**< L4 protocol. */ > > > + uint8_t proto_mask; /**< Mask of L4 protocol. */ > > > +}; > > > + > > > +struct rte_flow_classify_table_entry { > > > + /**< meta-data for classify rule */ > > > + uint32_t rule_id; > > > + > > > + /**< Start of table entry area for user defined meta data */ > > > + __extension__ uint8_t meta_data[0]; }; > > > > The above structure is not used by any of the public APIs? > > > > > + * Flow stats > > > + * > > > + * For the count action, stats can be returned by the query API. > > > + * > > > + * Storage for stats is provided by application. 
> > > + */ > > > +struct rte_flow_classify_stats { > > > + void *stats; > > > +}; > > > + > > > +struct rte_flow_classify_ipv4_5tuple_stats { > > > + /**< count of packets that match IPv4 5tuple pattern */ > > > + uint64_t counter1; > > > + /**< IPv4 5tuple data */ > > > + struct rte_flow_classify_ipv4_5tuple ipv4_5tuple; }; > > > + > > > +/** > > > + * Flow classifier create > > > + * > > > + * @param params > > > + * Parameters for flow classifier creation > > > + * @return > > > + * Handle to flow classifier instance on success or NULL otherwise > > > + */ > > > +struct rte_flow_classifier *rte_flow_classifier_create( > > > + struct rte_flow_classifier_params *params); > > > + > > > +/** > > > + * Flow classifier free > > > + * > > > + * @param cls > > > + * Handle to flow classifier instance > > > + * @return > > > + * 0 on success, error code otherwise > > > + */ > > > +int rte_flow_classifier_free(struct rte_flow_classifier *cls); > > > + > > > +/** > > > + * Flow classify table create > > > + * > > > + * @param cls > > > + * Handle to flow classifier instance > > > + * @param params > > > + * Parameters for flow_classify table creation > > > + * @param table_id > > > + * Table ID. Valid only within the scope of table IDs of the current > > > + * classifier. Only returned after a successful invocation. > > > + * @return > > > + * 0 on success, error code otherwise > > > + */ > > > +int rte_flow_classify_table_create(struct rte_flow_classifier *cls, > > > + struct rte_flow_classify_table_params *params, > > > + uint32_t *table_id); > > > + > > > +/** > > > + * Validate a flow classify rule. > > > + * > > > + * @param[in] attr > > > + * Flow rule attributes > > > + * @param[in] pattern > > > + * Pattern specification (list terminated by the END pattern item). > > > + * @param[in] actions > > > + * Associated actions (list terminated by the END pattern item). > > > + * @param[out] error > > > + * Perform verbose error reporting if not NULL. 
Structure > > > + * initialised in case of error only. > > > + * > > > + * @return > > > + * 0 on success, error code otherwise. > > > + */ > > > +int > > > +rte_flow_classify_validate( > > > + const struct rte_flow_attr *attr, > > > + const struct rte_flow_item pattern[], > > > + const struct rte_flow_action actions[], > > > + struct rte_flow_error *error); > > > + > > > +/** > > > + * Add a flow classify rule to the flow_classifer table. > > > + * > > > + * @param[in] cls > > > + * Flow classifier handle > > > + * @param[in] table_id > > > + * id of table > > > + * @param[in] attr > > > + * Flow rule attributes > > > + * @param[in] pattern > > > + * Pattern specification (list terminated by the END pattern item). > > > + * @param[in] actions > > > + * Associated actions (list terminated by the END pattern item). > > > + * @param[out] error > > > + * Perform verbose error reporting if not NULL. Structure > > > + * initialised in case of error only. > > > + * @return > > > + * A valid handle in case of success, NULL otherwise. > > > + */ > > > +struct rte_flow_classify_rule * > > > +rte_flow_classify_table_entry_add(struct rte_flow_classifier *cls, > > > + uint32_t table_id, > > > + const struct rte_flow_attr *attr, > > > + const struct rte_flow_item pattern[], > > > + const struct rte_flow_action actions[], > > > + struct rte_flow_error *error); > > > + > > > +/** > > > + * Delete a flow classify rule from the flow_classifer table. > > > + * > > > + * @param[in] cls > > > + * Flow classifier handle > > > + * @param[in] table_id > > > + * id of table > > > + * @param[in] rule > > > + * Flow classify rule > > > + * @param[out] error > > > + * Perform verbose error reporting if not NULL. Structure > > > + * initialised in case of error only. > > > + * @return > > > + * 0 on success, error code otherwise. 
> > > + */ > > > +int > > > +rte_flow_classify_table_entry_delete(struct rte_flow_classifier *cls, > > > + uint32_t table_id, > > > + struct rte_flow_classify_rule *rule, > > > + struct rte_flow_error *error); > > > + > > > +/** > > > + * Run flow classifier for given packets. > > > + * > > > + * @param[in] cls > > > + * Flow classifier handle > > > + * @param[in] table_id > > > + * id of table > > > + * @param[in] pkts > > > + * Pointer to packets to process > > > + * @param[in] nb_pkts > > > + * Number of packets to process > > > + * @param[out] error > > > + * Perform verbose error reporting if not NULL. Structure > > > + * initialised in case of error only. > > > + * > > > + * @return > > > + * 0 on success, error code otherwise. > > > + */ > > > + > > > +int rte_flow_classifier_run(struct rte_flow_classifier *cls, > > > + uint32_t table_id, > > > + struct rte_mbuf **pkts, > > > + const uint16_t nb_pkts, > > > + struct rte_flow_error *error); > > > + > > > +/** > > > + * Query flow classifier for given rule. > > > + * > > > + * @param[in] cls > > > + * Flow classifier handle > > > + * @param[in] rule > > > + * Flow classify rule > > > + * @param[in] stats > > > + * Flow classify stats > > > + * @param[out] error > > > + * Perform verbose error reporting if not NULL. Structure > > > + * initialised in case of error only. > > > + * > > > + * @return > > > + * 0 on success, error code otherwise. > > > + */ > > > +int rte_flow_classifier_query(struct rte_flow_classifier *cls, > > > + struct rte_flow_classify_rule *rule, > > > + struct rte_flow_classify_stats *stats, > > > + struct rte_flow_error *error); > > > + > > > +#ifdef __cplusplus > > > +} > > > +#endif > > > + > > > +#endif /* _RTE_FLOW_CLASSIFY_H_ */ > > > > > > There are doxygen rendering issues in this document. Please cross > > check the header file with "make doc-api-html" output. > > I will check the doxygen output. > > I will send a v9 patch set with the above changes. > Regards, Bernard. 
* [dpdk-dev] [PATCH v8 2/4] examples/flow_classify: flow classify sample application 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 " Bernard Iremonger 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 " Bernard Iremonger 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 1/4] librte_flow_classify: add flow classify library Bernard Iremonger @ 2017-10-17 20:26 ` Bernard Iremonger 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 3/4] test: add packet burst generator functions Bernard Iremonger 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-17 20:26 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify APIs: rte_flow_classifier_create rte_flow_classifier_run rte_flow_classifier_query rte_flow_classify_table_create rte_flow_classify_table_entry_add rte_flow_classify_table_entry_delete rte_flow_classify_validate It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 854 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 925 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..98ae586 --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,854 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 + +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define FLOW_CLASSIFIER_NAME_SIZE 64 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +struct flow_classifier { + struct rte_flow_classifier *cls; + uint32_t table_id[RTE_FLOW_CLASSIFY_TABLE_MAX]; +}; + +struct flow_classifier_acl { + struct flow_classifier cls; +} __rte_cache_aligned; + +static int num_classify_rules; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static struct rte_flow_classify_rule *rules[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify_ipv4_5tuple_stats ntuple_stats; +static struct rte_flow_classify_stats classify_stats = { + .stats = (void **)&ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add functions + */ + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. 
*/ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. */ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port classifying the packets and writing to an output port. 
+ */ +static __attribute__((noreturn)) void +lcore_main(struct flow_classifier *cls) +{ + struct rte_flow_error error; + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i = 0; + + ret = rte_flow_classify_table_entry_delete(cls->cls, cls->table_id[0], + rules[7], &error); + if (ret) + printf("table_entry_delete failed [7] %d\n\n", ret); + else + printf("table_entry_delete succeeded [7]\n\n"); + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) { + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) { + printf("\nWARNING: port %u is on remote NUMA node " + "to polling thread.\n", port); + printf("Performance will not be optimal.\n"); + } + } + + printf("\nCore %u forwarding packets. ", + rte_lcore_id()); + printf("[Ctrl+C to quit]\n"); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + /* Get burst of RX packets, from first port of pair. */ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + ret = rte_flow_classifier_run(cls->cls, + cls->table_id[0], + bufs, nb_rx, &error); + if (ret) { + printf("flow classify run failed\n\n"); + continue; + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (rules[i]) { + ret = rte_flow_classifier_query( + cls->cls, + rules[i], + &classify_stats, &error); + if (ret) + printf( + "rule [%d] query failed ret [%d]\n\n", + i, ret); + else { + printf( + "rule [%d] counter1=%" PRIu64 "\n", + i, ntuple_stats.counter1); + + printf("proto = %d\n", + ntuple_stats.ipv4_5tuple.proto); + } + } + } + + /* Send burst of TX packets, to second port of pair. 
*/ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. */ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. + * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +get_cb_field(char **in, uint32_t *fd, int base, unsigned long lim, + char dlm) +{ + unsigned long val; + char *end; + + errno = 0; + val = strtoul(*in, &end, base); + if (errno != 0 || end[0] != dlm || val > lim) + return -EINVAL; + *fd = (uint32_t)val; + *in = end + 1; + return 0; +} + +static int +parse_ipv4_net(char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint32_t a, b, c, d, m; + + if (get_cb_field(&in, &a, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &b, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &c, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &d, 0, UINT8_MAX, '/')) + return -EINVAL; + if (get_cb_field(&in, &m, 0, sizeof(uint32_t) * CHAR_BIT, 0)) + return -EINVAL; + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + uint32_t temp; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = 
parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read destination address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + if (get_cb_field(&in[CB_FLD_SRC_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_SRC_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_DST_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_DST_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, '/')) + return -EINVAL; + ntuple_filter->proto = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, 0)) + return -EINVAL; + ntuple_filter->proto_mask = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PRIORITY], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->priority = (uint16_t)temp; + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct 
rte_eth_ntuple_filter *ntuple_filter, + struct flow_classifier *cls) +{ + int ret = -1; + struct rte_flow_error error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify_rule *rule; + uint8_t ipv4_proto; + + if (num_classify_rules >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: classify rule capacity %d reached\n", + num_classify_rules); + return ret; + } + + /* set up parameters for validate and add */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case IPPROTO_UDP: + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + 
udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + break; + case IPPROTO_TCP: + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case IPPROTO_SCTP: + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + return ret; + } + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + 
actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(&attr, pattern_ipv4_5tuple, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, + "flow classify validate failed ipv4_proto = %u\n", + ipv4_proto); + + rule = rte_flow_classify_table_entry_add(cls->cls, cls->table_id[0], + &attr, pattern_ipv4_5tuple, actions, &error); + if (rule == NULL) { + printf("table entry add failed ipv4_proto = %u\n", + ipv4_proto); + ret = -1; + return ret; + } + + rules[num_classify_rules] = rule; + num_classify_rules++; + return 0; +} + +static int +add_rules(const char *rule_path, struct flow_classifier *cls) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter, cls) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s usage:\n", prgname); + printf("[EAL options] -- --"OPTION_RULE_IPV4"=FILE: "); + printf("specify the ipv4 rules file.\n"); + printf("Each rule occupies one line in the file.\n"); +} + +/* Parse the argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + 
{NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* + * The main function, which does initialization and calls the lcore_main + * function. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + struct rte_flow_classify_table_params cls_table_params; + struct flow_classifier *cls; + struct rte_flow_classifier_params cls_params; + uint32_t size; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. 
*/ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* Memory allocation */ + size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct flow_classifier_acl)); + cls = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); + + cls_params.name = "flow_classifier"; + cls_params.socket_id = socket_id; + cls_params.type = RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL; + cls_params.table_id = 0; + cls->cls = rte_flow_classifier_create(&cls_params); + + /* initialise ACL table params */ + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + /* initialise table create params */ + cls_table_params.ops = &rte_table_acl_ops, + cls_table_params.arg_create = &table_acl_params, + cls_table_params.table_metadata_size = 0; + + ret = rte_flow_classify_table_create(cls->cls, &cls_table_params, + &cls->table_id[0]); + if (ret) { + rte_flow_classifier_free(cls->cls); + rte_free(cls); + rte_exit(EXIT_FAILURE, "Failed to create classifier table\n"); + } + + /* read file of IPv4 5 tuple rules and initialize parameters + * for rte_flow_classify_validate and rte_flow_classify_create + */ + if (add_rules(parm_config.rule_ipv4_name, cls)) + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + + /* Call lcore_main on the master core only. 
*/ + lcore_main(cls); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..dfa0631 --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0x0000 33 : 0x0000 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v8 3/4] test: add packet burst generator functions 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 " Bernard Iremonger ` (2 preceding siblings ...) 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-10-17 20:26 ` Bernard Iremonger 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-17 20:26 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger add initialize_tcp_header function add initialize_sctp_header function add initialize_ipv4_header_proto function add generate_packet_burst_proto function Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/packet_burst_generator.c | 191 +++++++++++++++++++++++++++++++++++++ test/test/packet_burst_generator.h | 22 ++++- 2 files changed, 211 insertions(+), 2 deletions(-) diff --git a/test/test/packet_burst_generator.c b/test/test/packet_burst_generator.c index a93c3b5..8f4ddcc 100644 --- a/test/test/packet_burst_generator.c +++ b/test/test/packet_burst_generator.c @@ -134,6 +134,36 @@ return pkt_len; } +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct tcp_hdr)); + + memset(tcp_hdr, 0, sizeof(struct tcp_hdr)); + tcp_hdr->src_port = rte_cpu_to_be_16(src_port); + tcp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + + return pkt_len; +} + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct sctp_hdr)); + + sctp_hdr->src_port = rte_cpu_to_be_16(src_port); + sctp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + 
sctp_hdr->tag = 0; + sctp_hdr->cksum = 0; /* No SCTP checksum. */ + + return pkt_len; +} uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -198,7 +228,53 @@ return pkt_len; } +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto) +{ + uint16_t pkt_len; + unaligned_uint16_t *ptr16; + uint32_t ip_cksum; + + /* + * Initialize IP header. + */ + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct ipv4_hdr)); + + ip_hdr->version_ihl = IP_VHL_DEF; + ip_hdr->type_of_service = 0; + ip_hdr->fragment_offset = 0; + ip_hdr->time_to_live = IP_DEFTTL; + ip_hdr->next_proto_id = proto; + ip_hdr->packet_id = 0; + ip_hdr->total_length = rte_cpu_to_be_16(pkt_len); + ip_hdr->src_addr = rte_cpu_to_be_32(src_addr); + ip_hdr->dst_addr = rte_cpu_to_be_32(dst_addr); + + /* + * Compute IP header checksum. + */ + ptr16 = (unaligned_uint16_t *)ip_hdr; + ip_cksum = 0; + ip_cksum += ptr16[0]; ip_cksum += ptr16[1]; + ip_cksum += ptr16[2]; ip_cksum += ptr16[3]; + ip_cksum += ptr16[4]; + ip_cksum += ptr16[6]; ip_cksum += ptr16[7]; + ip_cksum += ptr16[8]; ip_cksum += ptr16[9]; + /* + * Reduce 32 bit checksum to 16 bits and complement it. 
+ */ + ip_cksum = ((ip_cksum & 0xFFFF0000) >> 16) + + (ip_cksum & 0x0000FFFF); + ip_cksum %= 65536; + ip_cksum = (~ip_cksum) & 0x0000FFFF; + if (ip_cksum == 0) + ip_cksum = 0xFFFF; + ip_hdr->hdr_checksum = (uint16_t) ip_cksum; + + return pkt_len; +} /* * The maximum number of segments per packet is used when creating @@ -283,3 +359,118 @@ return nb_pkt; } + +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs) +{ + int i, nb_pkt = 0; + size_t eth_hdr_size; + + struct rte_mbuf *pkt_seg; + struct rte_mbuf *pkt; + + for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) { + pkt = rte_pktmbuf_alloc(mp); + if (pkt == NULL) { +nomore_mbuf: + if (nb_pkt == 0) + return -1; + break; + } + + pkt->data_len = pkt_len; + pkt_seg = pkt; + for (i = 1; i < nb_pkt_segs; i++) { + pkt_seg->next = rte_pktmbuf_alloc(mp); + if (pkt_seg->next == NULL) { + pkt->nb_segs = i; + rte_pktmbuf_free(pkt); + goto nomore_mbuf; + } + pkt_seg = pkt_seg->next; + pkt_seg->data_len = pkt_len; + } + pkt_seg->next = NULL; /* Last segment of packet. */ + + /* + * Copy headers in first packet segment(s). 
+ */ + if (vlan_enabled) + eth_hdr_size = sizeof(struct ether_hdr) + + sizeof(struct vlan_hdr); + else + eth_hdr_size = sizeof(struct ether_hdr); + + copy_buf_to_pkt(eth_hdr, eth_hdr_size, pkt, 0); + + if (ipv4) { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv4_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + default: + break; + } + } else { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv6_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + default: + break; + } + } + + /* + * Complete first mbuf of packet and append it to the + * burst of packets to be transmitted. 
+ */ + pkt->nb_segs = nb_pkt_segs; + pkt->pkt_len = pkt_len; + pkt->l2_len = eth_hdr_size; + + if (ipv4) { + pkt->vlan_tci = ETHER_TYPE_IPv4; + pkt->l3_len = sizeof(struct ipv4_hdr); + } else { + pkt->vlan_tci = ETHER_TYPE_IPv6; + pkt->l3_len = sizeof(struct ipv6_hdr); + } + + pkts_burst[nb_pkt] = pkt; + } + + return nb_pkt; +} diff --git a/test/test/packet_burst_generator.h b/test/test/packet_burst_generator.h index edc1044..3315bfa 100644 --- a/test/test/packet_burst_generator.h +++ b/test/test/packet_burst_generator.h @@ -43,7 +43,8 @@ #include <rte_arp.h> #include <rte_ip.h> #include <rte_udp.h> - +#include <rte_tcp.h> +#include <rte_sctp.h> #define IPV4_ADDR(a, b, c, d)(((a & 0xff) << 24) | ((b & 0xff) << 16) | \ ((c & 0xff) << 8) | (d & 0xff)) @@ -65,6 +66,13 @@ initialize_udp_header(struct udp_hdr *udp_hdr, uint16_t src_port, uint16_t dst_port, uint16_t pkt_data_len); +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -74,15 +82,25 @@ initialize_ipv4_header(struct ipv4_hdr *ip_hdr, uint32_t src_addr, uint32_t dst_addr, uint16_t pkt_data_len); +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto); + int generate_packet_burst(struct rte_mempool *mp, struct rte_mbuf **pkts_burst, struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, uint8_t ipv4, struct udp_hdr *udp_hdr, int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); + 
#ifdef __cplusplus } #endif - #endif /* PACKET_BURST_GENERATOR_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
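The checksum code in `initialize_ipv4_header_proto` above sums the header as 16-bit words, reduces the 32-bit accumulator to 16 bits, complements it, and substitutes 0xFFFF for a zero result. That final reduction step can be exercised in isolation; the helper below mirrors the patch's arithmetic exactly (the function name is mine, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Reduce a 32-bit sum of 16-bit header words to a 16-bit IPv4 header
 * checksum, using the same steps as initialize_ipv4_header_proto(). */
uint16_t fold_ip_checksum(uint32_t sum)
{
	/* Add the high 16 bits of the accumulator into the low 16 bits. */
	sum = ((sum & 0xFFFF0000u) >> 16) + (sum & 0x0000FFFFu);
	/* Keep only the low 16 bits of that addition. */
	sum %= 65536;
	/* Ones' complement; 0 is reserved, so map it to 0xFFFF. */
	sum = (~sum) & 0x0000FFFFu;
	if (sum == 0)
		sum = 0xFFFF;
	return (uint16_t)sum;
}
```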
* [dpdk-dev] [PATCH v8 4/4] test: flow classify library unit tests 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 " Bernard Iremonger ` (3 preceding siblings ...) 2017-10-17 20:26 ` [dpdk-dev] [PATCH v8 3/4] test: add packet burst generator functions Bernard Iremonger @ 2017-10-17 20:26 ` Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-17 20:26 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil, jasvinder.singh Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by the librte_flow_classify APIs. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify APIs: test with invalid parameters test with invalid patterns test with invalid actions test with valid parameters Initialise ipv4 udp traffic for use by the udp test for rte_flow_classifier_run. Initialise ipv4 tcp traffic for use by the tcp test for rte_flow_classifier_run. Initialise ipv4 sctp traffic for use by the sctp test for rte_flow_classifier_run.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 783 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 234 ++++++++++++ 3 files changed, 1018 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index dcbe363..c2dbe40 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -107,6 +107,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..0f3d1fa --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,783 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +struct flow_classifier *cls; + +struct flow_classifier { + struct rte_flow_classifier *cls; + uint32_t table_id[RTE_FLOW_CLASSIFY_TABLE_MAX]; + uint32_t n_tables; +}; + +struct flow_classifier_acl { + struct flow_classifier cls; +} __rte_cache_aligned; + +/* + * test functions by passing invalid or + * non-workable parameters. 
+ */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(NULL, 1, NULL, NULL, NULL, + NULL); + if (rule) { + printf("Line %i: flow_classifier_table_entry_add", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(NULL, 1, NULL, NULL); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_run(NULL, 1, NULL, 0, NULL); + if (!ret) { + printf("Line %i: flow_classifier_run", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_validate(NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(NULL, 1, NULL, NULL, NULL, + &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add ", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(NULL, 1, NULL, &error); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_run(NULL, 1, NULL, 0, NULL); + if (!ret) { + printf("Line %i: flow_classifier_run", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret 
= rte_flow_classifier_query(NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(&attr, pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &attr, + pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule, + &error); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item_bad; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(&attr, pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[0] = eth_item; + 
pattern[1] = ipv4_udp_item_bad; + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &attr, + pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule, + &error); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_bad; + ret = rte_flow_classify_validate(&attr, pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[2] = udp_item_1; + pattern[3] = end_item_bad; + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &attr, + pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule, + &error); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + ret = rte_flow_classify_validate(&attr, pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &attr, + pattern, actions, &error); + if (rule) { + 
printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule, + &error); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should have failed!\n"); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + ret = rte_flow_classify_validate(&attr, pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &attr, + pattern, actions, &error); + if (rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule, + &error); + if (!ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf("should have failed!\n"); + return -1; + } + return 0; +} + +static int +init_ipv4_udp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 UDP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + 
pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_tcp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct tcp_hdr pkt_tcp_hdr; + uint32_t src_addr = IPV4_ADDR(1, 2, 3, 4); + uint32_t dst_addr = IPV4_ADDR(5, 6, 7, 8); + uint16_t src_port = 16; + uint16_t dst_port = 17; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 TCP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_TCP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_tcp_header(&pkt_tcp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + TCP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_TCP, + &pkt_tcp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_sctp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct sctp_hdr pkt_sctp_hdr; + uint32_t src_addr = IPV4_ADDR(11, 12, 13, 14); + uint32_t dst_addr = IPV4_ADDR(15, 16, 17, 18); + uint16_t src_port = 10; + uint16_t dst_port = 11; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 SCTP traffic\n"); + 
initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_SCTP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_sctp_header(&pkt_sctp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + SCTP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_SCTP, + &pkt_sctp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify_rule *rule; + int ret; + int i; + + ret = init_ipv4_udp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_udp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add and
+ * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(&attr, pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &attr, + pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_run(cls->cls, 0, bufs, MAX_PKT_BURST, + &error); + if (ret) { + printf("Line %i: flow_classifier_run", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, rule, + &udp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule, + &error); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_tcp(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int i; + + ret = init_ipv4_tcp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_tcp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_tcp_item_1; + pattern[2] = tcp_item_1; + pattern[3] = end_item; + actions[0] 
= count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(&attr, pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &attr, + pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_run(cls->cls, 0, bufs, MAX_PKT_BURST, + &error); + if (ret) { + printf("Line %i: flow_classifier_run", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, rule, + &tcp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule, + &error); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_sctp(void) +{ + struct rte_flow_classify_rule *rule; + int ret; + int i; + + ret = init_ipv4_sctp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_sctp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_table_entry_add and + * rte_flow_classify_table_entry_delete + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_sctp_item_1; + pattern[2] = sctp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(&attr, pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not
have failed!\n"); + return -1; + } + + rule = rte_flow_classify_table_entry_add(cls->cls, 0, &attr, + pattern, actions, &error); + if (!rule) { + printf("Line %i: flow_classify_table_entry_add", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_run(cls->cls, 0, bufs, MAX_PKT_BURST, + &error); + if (ret) { + printf("Line %i: flow_classifier_run", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classifier_query(cls->cls, rule, + &sctp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classifier_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_table_entry_delete(cls->cls, 0, rule, + &error); + if (ret) { + printf("Line %i: rte_flow_classify_table_entry_delete", + __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + struct rte_flow_classify_table_params cls_table_params; + struct rte_flow_classifier_params cls_params; + int socket_id; + int ret; + uint32_t size; + + socket_id = rte_eth_dev_socket_id(0); + + /* Memory allocation */ + size = RTE_CACHE_LINE_ROUNDUP(sizeof(struct flow_classifier_acl)); + cls = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE); + + cls_params.name = "flow_classifier"; + cls_params.socket_id = socket_id; + cls_params.type = RTE_FLOW_CLASSIFY_TABLE_TYPE_ACL; + cls->cls = rte_flow_classifier_create(&cls_params); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + /* initialise table create params */ + cls_table_params.ops = &rte_table_acl_ops, + cls_table_params.arg_create = &table_acl_params, + cls_table_params.table_metadata_size = 0; + + ret = 
rte_flow_classify_table_create(cls->cls, &cls_table_params, + &cls->table_id[0]); + if (ret) { + printf("Line %i: f_create has failed!\n", __LINE__); + rte_flow_classifier_free(cls->cls); + rte_free(cls); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + if (test_query_tcp() < 0) + return -1; + if (test_query_sctp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..39535cf --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,234 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission.
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP, TCP and SCTP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* test UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { 
RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +/* test TCP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 dst spec 5.6.7.8 + * dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0, IPv4(1, 2, 3, 4), IPv4(5, 6, 7, 8)} +}; + +static struct rte_flow_item_tcp tcp_spec_1 = { + { 16, 17, 0, 0, 0, 0, 0, 0, 0} +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +/* test SCTP pattern: + * "eth / ipv4 src spec 11.12.13.14 src mask 255.255.255.00 dst spec 15.16.17.18 + * dst mask 255.255.255.00 / sctp src is 10 dst is 11 / end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, IPv4(11, 12, 13, 14), + IPv4(15, 16, 17, 18)} +}; + +static struct rte_flow_item_sctp sctp_spec_1 = { + { 10, 11, 0, 0} +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + + +/* test actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = {
RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* test attributes */ +static struct rte_flow_attr attr; + +/* test error */ +static struct rte_flow_error error; + +/* test pattern */ +static struct rte_flow_item pattern[4]; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .stats = (void *)&udp_ntuple_stats +}; + +/* flow classify data for TCP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .stats = (void *)&tcp_ntuple_stats +}; + +/* flow classify data for SCTP burst */ +static struct rte_flow_classify_ipv4_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .stats = (void *)&sctp_ntuple_stats +}; +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add librte_flow_classify library 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 0/4] " Bernard Iremonger 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 " Bernard Iremonger @ 2017-10-02 9:31 ` Bernard Iremonger 2017-10-06 15:00 ` Singh, Jasvinder 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger ` (2 subsequent siblings) 4 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-10-02 9:31 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following library APIs are implemented: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query The following librte_table ACL APIs are used: f_create to create a table ACL. f_add to add an ACL rule to the table. f_delete to delete an ACL rule from the table. f_lookup to match packets with the ACL rules. The f_add entry data is used for matching. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets.
updated MAINTAINERS file Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- MAINTAINERS | 7 + config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 460 +++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 ++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- 13 files changed, 1368 insertions(+), 1 deletion(-) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/MAINTAINERS b/MAINTAINERS index 8df2a7f..4b875ad 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -677,6 +677,13 @@ F: doc/guides/prog_guide/pdump_lib.rst F: app/pdump/ F: doc/guides/tools/pdump.rst +Flow classify +M: Bernard Iremonger <bernard.iremonger@intel.com> +F: lib/librte_flow_classify/ +F: test/test/test_flow_classify* +F: examples/flow_classify/ +F: doc/guides/sample_app_ug/flow_classify.rst +F: doc/guides/prog_guide/flow_classify_lib.rst Packet Framework ---------------- diff --git a/config/common_base b/config/common_base index 12f6be9..0638a37 100644 --- a/config/common_base +++ b/config/common_base @@ -658,6 +658,12 @@ CONFIG_RTE_LIBRTE_GRO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff 
--git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 19e0d4f..a2fa281 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -105,6 +105,7 @@ The public API headers are grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 823554f..4e43a66 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf @@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_hash \ lib/librte_ip_frag \ diff --git a/lib/Makefile b/lib/Makefile index 86caba1..21fc3b0 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -82,6 +82,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index ec8dba7..f975bde 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -87,6 +87,7 @@ struct rte_logs { #define RTE_LOGTYPE_CRYPTODEV 17 /**< Log related to cryptodev. */ #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ +#define RTE_LOGTYPE_CLASSIFY 20 /**< Log related to flow classify. 
*/ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..7f08382 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,460 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; +static uint32_t unique_id = 1; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct ipv4_5tuple_data { + uint16_t priority; /**< flow API uses priority 0 to 8, 0 is highest */ + uint32_t userdata; /**< value returned for match */ + uint8_t tcp_flags; /**< tcp_flags only meaningful TCP protocol */ +}; + +struct rte_flow_classify { + uint32_t id; /**< unique ID of classify object */ + enum rte_flow_classify_type type; /**< classify type */ + struct rte_flow_action action; /**< action when match found */ + struct ipv4_5tuple_data flow_extra_data; /**< extra flow data */ + struct rte_table_acl_rule_add_params key_add; /**< add ACL rule key */ + struct rte_table_acl_rule_delete_params + key_del; /**< delete ACL rule key */ + int key_found; /**< ACL rule key found in table */ + void *entry; /**< pointer to buffer to hold ACL rule */ + void *entry_ptr; /**< handle to the table entry for the ACL rule */ +}; + +/* number of packets in a burst */ +#define MAX_PKT_BURST 32 + +struct mbuf_search { + struct rte_mbuf *m_ipv4[MAX_PKT_BURST]; + uint32_t res_ipv4[MAX_PKT_BURST]; + int num_ipv4; +}; + +int +rte_flow_classify_validate(void *table_handle, + const 
struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + (void) table_handle; + + if (!error) + return -EINVAL; + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + 
printf("ipv4_key_add: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static struct rte_flow_classify * +allocate_5tuple(void) +{ + struct rte_flow_classify *flow_classify; + + flow_classify = malloc(sizeof(struct rte_flow_classify)); + if (!flow_classify) + return flow_classify; + + memset(flow_classify, 0, sizeof(struct rte_flow_classify)); + flow_classify->id = 
unique_id++; + flow_classify->type = RTE_FLOW_CLASSIFY_TYPE_5TUPLE; + memcpy(&flow_classify->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + flow_classify->flow_extra_data.priority = ntuple_filter.priority; + flow_classify->flow_extra_data.tcp_flags = ntuple_filter.tcp_flags; + + /* key add values */ + flow_classify->key_add.priority = ntuple_filter.priority; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + + flow_classify->key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + flow_classify->key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + + flow_classify->key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + flow_classify->key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_add(&flow_classify->key_add); +#endif + + /* key delete values */ + memcpy(&flow_classify->key_del.field_value[PROTO_FIELD_IPV4], + &flow_classify->key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_delete(&flow_classify->key_del); +#endif + return flow_classify; +} + +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct 
rte_flow_error *error) +{ + struct rte_flow_classify *flow_classify; + int ret; + + if (!error) + return NULL; + + if (!table_handle) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "NULL table_handle."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_validate(table_handle, attr, pattern, + actions, error); + if (ret < 0) + return NULL; + + flow_classify = allocate_5tuple(); + if (!flow_classify) + return NULL; + + flow_classify->entry = malloc(entry_size); + if (!flow_classify->entry) { + free(flow_classify); + flow_classify = NULL; + return NULL; + } + memset(flow_classify->entry, 0, entry_size); + memmove(flow_classify->entry, &flow_classify->id, sizeof(uint32_t)); + + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify->key_add, + flow_classify->entry, &flow_classify->key_found, + &flow_classify->entry_ptr); + if (ret) { + free(flow_classify->entry); + free(flow_classify); + flow_classify = NULL; + return NULL; + } + + return flow_classify; +} + +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error) +{ + int ret; + int key_found; + + if (!error) + return -EINVAL; + + if (!flow_classify || !table_handle) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + ret = rte_table_acl_ops.f_delete(table_handle, + &flow_classify->key_del, &key_found, + flow_classify->entry); + if ((ret == 0) && key_found) { + free(flow_classify->entry); + free(flow_classify); + } else + ret 
= -1; + return ret; +} + +static int +flow_match(void *table, struct rte_mbuf **pkts_in, const uint16_t nb_pkts, + uint64_t *count, uint32_t id) +{ + int ret = -1; + int i; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + uint32_t classify_id; + void *entries[RTE_PORT_IN_BURST_SIZE_MAX]; + + if (nb_pkts) { + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = rte_table_acl_ops.f_lookup(table, pkts_in, + pkts_mask, &lookup_hit_mask, entries); + if (!ret) { + for (i = 0; i < nb_pkts && + (lookup_hit_mask & (1 << i)); i++) { + memmove(&classify_id, entries[i], + sizeof(uint32_t)); + if (id == classify_id) + (*count)++; /* match found */ + } + if (*count == 0) + ret = -1; + } else + ret = -1; + } + return ret; +} + +static int +action_apply(const struct rte_flow_classify *flow_classify, + struct rte_flow_classify_stats *stats, uint64_t count) +{ + struct rte_flow_classify_5tuple_stats *ntuple_stats; + + switch (flow_classify->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + ntuple_stats = + (struct rte_flow_classify_5tuple_stats *)stats->stats; + ntuple_stats->counter1 = count; + stats->used_space = 1; + break; + default: + return -ENOTSUP; + } + + return 0; +} + +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error) +{ + uint64_t count = 0; + int ret = -EINVAL; + + if (!error) + return ret; + + if (!table_handle || !flow_classify || !pkts || !stats) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + if ((stats->available_space == 0) || (nb_pkts == 0)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + ret = flow_match(table_handle, pkts, nb_pkts, &count, + flow_classify->id); + if (ret == 0) + ret = action_apply(flow_classify, stats, count); + + return ret; 
+} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..2b200fb --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,207 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * The application should define the flow and the measurement criteria (action) for it. + * + * The library does not maintain any flow records itself; instead, flow information is + * returned to the upper layer only for the given packets. + * + * It is the application's responsibility to call rte_flow_classify_query() + * for a group of packets, just after receiving them or before transmitting them. + * The application should provide the flow type it is interested in and the + * measurement to apply to that flow in the rte_flow_classify_create() API, and + * should provide the rte_flow_classify object and storage for the results in the + * rte_flow_classify_query() API. + * + * Usage: + * - the application calls rte_flow_classify_create() to create a rte_flow_classify + * object. + * - the application calls rte_flow_classify_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This causes the library to + * convert packet information to flow information with some measurements. + * - a rte_flow_classify object can be destroyed via rte_flow_classify_destroy() + * when it is no longer needed. + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +enum rte_flow_classify_type { + RTE_FLOW_CLASSIFY_TYPE_NONE, /**< no type */ + RTE_FLOW_CLASSIFY_TYPE_5TUPLE, /**< IPv4 5tuple type */ +}; + +struct rte_flow_classify; + +/** + * Flow stats + * + * For a single action, an array of stats can be returned by the API; at most + * one stat can be returned per packet. + * + * Storage for the stats is provided by the application; the library should know the + * available space, and should return the amount of space used. + * + * The stats type depends on the measurement (action) requested by the application.
+ * + */ +struct rte_flow_classify_stats { + const unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +struct rte_flow_classify_5tuple_stats { + uint64_t counter1; /**< count of packets that match the 5tuple pattern */ +}; + +/** + * Create a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] entry_size + * Size of ACL rule + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. + */ +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Validate a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * Return code: 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Destroy a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Flow rule handle to destroy + * @param[out] error + * Perform verbose error reporting if not NULL.
Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error); + +/** + * Get flow classification stats for given packets. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Pointer to Flow rule object + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] stats + * To store stats defined by action + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..6eb2048 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + 
uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find the parse filter function matching the pattern, if any */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do {\ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++;\ + item = pattern + index;\ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do {\ + act = actions + index;\ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++;\ + act = actions + index;\ + } \ + } while (0) + +/** + * Please be aware there's an assumption
for all the parsers. + * rte_flow_item uses big endian, while rte_flow_attr and + * rte_flow_action use CPU byte order. + * Because the pattern is used to describe the packets, + * normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is an n-tuple rule, + * and fill in the n-tuple filter info. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP, TCP or SCTP. + * The next not void item must be END. + * action: + * The first not void action should be COUNT. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * All other members in mask and spec should be set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -rte_errno; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -rte_errno; + } + if (!attr) { + rte_flow_error_set(error, EINVAL, +
RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -rte_errno; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -rte_errno; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + item, "Not supported action."); + return -rte_errno; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only support ingress."); + return -rte_errno; + } + + /* not supported */ + if (attr->egress) { + 
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Egress not supported."); + return -rte_errno; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Invalid priority."); + return -rte_errno; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find the parse filter function matching the pattern, if any */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..93b67f6 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,10 @@ +DPDK_17.11 { + global: + + rte_flow_classify_create; + rte_flow_classify_destroy; + rte_flow_classify_query; + rte_flow_classify_validate; + +
local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index c25fdd9..909ab95 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port @@ -84,7 +85,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile _LDLIBS-y += --whole-archive - _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash _LDLIBS-$(CONFIG_RTE_LIBRTE_VHOST) += -lrte_vhost _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add librte_flow_classify library 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-10-06 15:00 ` Singh, Jasvinder 2017-10-09 9:28 ` Mcnamara, John 2017-10-13 15:39 ` Iremonger, Bernard 0 siblings, 2 replies; 145+ messages in thread From: Singh, Jasvinder @ 2017-10-06 15:00 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil Cc: Iremonger, Bernard Hi Bernard, <snip> > +struct rte_flow_classify * > +rte_flow_classify_create(void *table_handle, > + uint32_t entry_size, > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error) > +{ > + struct rte_flow_classify *flow_classify; > + int ret; > + > + if (!error) > + return NULL; > + > + if (!table_handle) { > + rte_flow_error_set(error, EINVAL, > RTE_FLOW_ERROR_TYPE_HANDLE, > + NULL, "NULL table_handle."); > + return NULL; > + } > + > + if (!pattern) { > + rte_flow_error_set(error, EINVAL, > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > + NULL, "NULL pattern."); > + return NULL; > + } > + > + if (!actions) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > + NULL, "NULL action."); > + return NULL; > + } > + > + if (!attr) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR, > + NULL, "NULL attribute."); > + return NULL; > + } > + > + /* parse attr, pattern and actions */ > + ret = rte_flow_classify_validate(table_handle, attr, pattern, > + actions, error); > + if (ret < 0) > + return NULL; > + > + flow_classify = allocate_5tuple(); > + if (!flow_classify) > + return NULL; > + > + flow_classify->entry = malloc(entry_size); > + if (!flow_classify->entry) { > + free(flow_classify); > + flow_classify = NULL; > + return NULL; > + } > + memset(flow_classify->entry, 0, entry_size); > + 
memmove(flow_classify->entry, &flow_classify->id, > sizeof(uint32_t)); > + > + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify- > >key_add, > + flow_classify->entry, &flow_classify->key_found, > + &flow_classify->entry_ptr); > + if (ret) { > + free(flow_classify->entry); > + free(flow_classify); > + flow_classify = NULL; > + return NULL; > + } > + > + return flow_classify; > +} The API in its current form creates the classifier object which will always use the librte_acl-based classification mechanism. This behavior imposes a restriction on the application to always pass only ACL-table-related parameters for flow classification. In my opinion, the API implementation should be agnostic to the specific classification method and should be generic enough to allow the application to select any of the available flow classification methods (e.g. ACL, hash, LPM, etc.). Otherwise, this library will become just another abstraction of librte_acl for flow classification. Also, the library allows table entries to be added only while creating the classifier object, not later. Is there any specific reason? ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add librte_flow_classify library 2017-10-06 15:00 ` Singh, Jasvinder @ 2017-10-09 9:28 ` Mcnamara, John 2017-10-13 15:39 ` Iremonger, Bernard 1 sibling, 0 replies; 145+ messages in thread From: Mcnamara, John @ 2017-10-09 9:28 UTC (permalink / raw) To: Singh, Jasvinder, Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil Cc: Iremonger, Bernard > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Singh, Jasvinder > Sent: Friday, October 6, 2017 4:01 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com>; dev@dpdk.org; Yigit, > Ferruh <ferruh.yigit@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > Cc: Iremonger, Bernard <bernard.iremonger@intel.com> > Subject: Re: [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add > librte_flow_classify library > > ... > > The API in its current form creates the classifier object which will > always use librte_acl based classification mechanism. This behavior > imposes restriction on the application to always pass only ACL table > related parameters for flow classification. In my opinion, API > implementation should be agnostic to specific classification method and > should be generic enough to allow application to select any of the > available flow classification method (for e.g. acl, hash, LPM, etc.). > Otherwise, this library will become another abstraction of librte_acl for > flow classification. > > Also, library allows table entries to be added while creating the > classifier object, not later. Is there any specific reason? Hi, I think that we should fix this prior to merge. Can we make these changes and target RC2 instead. John ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add librte_flow_classify library 2017-10-06 15:00 ` Singh, Jasvinder 2017-10-09 9:28 ` Mcnamara, John @ 2017-10-13 15:39 ` Iremonger, Bernard 1 sibling, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-10-13 15:39 UTC (permalink / raw) To: Singh, Jasvinder, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil Cc: Iremonger, Bernard Hi Jasvinder, > -----Original Message----- > From: Singh, Jasvinder > Sent: Friday, October 6, 2017 4:01 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com>; dev@dpdk.org; > Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > Cc: Iremonger, Bernard <bernard.iremonger@intel.com> > Subject: RE: [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add > librte_flow_classify library > > Hi Bernard, > > <snip> > > > +struct rte_flow_classify * > > +rte_flow_classify_create(void *table_handle, > > + uint32_t entry_size, > > + const struct rte_flow_attr *attr, > > + const struct rte_flow_item pattern[], > > + const struct rte_flow_action actions[], > > + struct rte_flow_error *error) > > +{ > > + struct rte_flow_classify *flow_classify; > > + int ret; > > + > > + if (!error) > > + return NULL; > > + > > + if (!table_handle) { > > + rte_flow_error_set(error, EINVAL, > > RTE_FLOW_ERROR_TYPE_HANDLE, > > + NULL, "NULL table_handle."); > > + return NULL; > > + } > > + > > + if (!pattern) { > > + rte_flow_error_set(error, EINVAL, > > RTE_FLOW_ERROR_TYPE_ITEM_NUM, > > + NULL, "NULL pattern."); > > + return NULL; > > + } > > + > > + if (!actions) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > > + NULL, "NULL action."); > > + return NULL; > > + } > > + > > + if (!attr) { > > + rte_flow_error_set(error, EINVAL, > > + RTE_FLOW_ERROR_TYPE_ATTR, > > + NULL, "NULL attribute."); > > + return NULL; > > 
+ } > > + > > + /* parse attr, pattern and actions */ > > + ret = rte_flow_classify_validate(table_handle, attr, pattern, > > + actions, error); > > + if (ret < 0) > > + return NULL; > > + > > + flow_classify = allocate_5tuple(); > > + if (!flow_classify) > > + return NULL; > > + > > + flow_classify->entry = malloc(entry_size); > > + if (!flow_classify->entry) { > > + free(flow_classify); > > + flow_classify = NULL; > > + return NULL; > > + } > > + memset(flow_classify->entry, 0, entry_size); > > + memmove(flow_classify->entry, &flow_classify->id, > > sizeof(uint32_t)); > > + > > + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify- > > >key_add, > > + flow_classify->entry, &flow_classify->key_found, > > + &flow_classify->entry_ptr); > > + if (ret) { > > + free(flow_classify->entry); > > + free(flow_classify); > > + flow_classify = NULL; > > + return NULL; > > + } > > + > > + return flow_classify; > > +} > > The API in its current form creates the classifier object which will always use > librte_acl based classification mechanism. This behavior imposes restriction > on the application to always pass only ACL table related parameters for flow > classification. In my opinion, API implementation should be agnostic to > specific classification method and should be generic enough to allow > application to select any of the available flow classification method (for e.g. > acl, hash, LPM, etc.). Otherwise, this library will become another abstraction > of librte_acl for flow classification. > > Also, library allows table entries to be added while creating the classifier > object, not later. Is there any specific reason? Thanks for reviewing this patchset. I will rework the code so that the API's are table agnostic. In the v7 patchset the application creates the table ACL and then classify rules can be added and deleted at will. I will send a v8 patchset. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v7 2/4] examples/flow_classify: flow classify sample application 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 0/4] " Bernard Iremonger 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 " Bernard Iremonger 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 1/4] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-10-02 9:31 ` Bernard Iremonger 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 3/4] test: add packet burst generator functions Bernard Iremonger 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-02 9:31 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify APIs: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five-tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 897 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 968 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved.
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..651fa8f --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,897 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define PROTO_TCP 6 +#define PROTO_UDP 17 +#define PROTO_SCTP 132 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +static void *table_acl; +uint32_t entry_size; +static int udp_num_classify; +static int tcp_num_classify; +static int sctp_num_classify; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static struct rte_flow_classify *udp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *tcp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *sctp_flow_classify[MAX_NUM_CLASSIFY]; + +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&tcp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +static 
struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. */ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. 
*/ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port, classifying the packets and writing to an output port. + */ +static __attribute__((noreturn)) void +lcore_main(void) +{ + struct rte_flow_error error; + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i; + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) { + printf("\n\n"); + printf("WARNING: port %u is on remote NUMA node\n", + port); + printf("to polling thread.\n"); + printf("Performance will not be optimal.\n"); + } + + printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n", + rte_lcore_id()); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + + /* Get burst of RX packets, from first port of pair. 
*/ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (udp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + udp_flow_classify[i], + bufs, nb_rx, + &udp_classify_stats, &error); + if (ret) + printf( + "udp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "udp rule [%d] counter1=%lu used_space=%d\n\n", + i, udp_ntuple_stats.counter1, + udp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (tcp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + tcp_flow_classify[i], + bufs, nb_rx, + &tcp_classify_stats, &error); + if (ret) + printf( + "tcp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "tcp rule [%d] counter1=%lu used_space=%d\n\n", + i, tcp_ntuple_stats.counter1, + tcp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (sctp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + sctp_flow_classify[i], + bufs, nb_rx, + &sctp_classify_stats, &error); + if (ret) + printf( + "sctp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "sctp rule [%d] counter1=%lu used_space=%d\n\n", + i, sctp_ntuple_stats.counter1, + sctp_classify_stats.used_space); + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. */ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. 
+ * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +get_cb_field(char **in, uint32_t *fd, int base, unsigned long lim, + char dlm) +{ + unsigned long val; + char *end; + + errno = 0; + val = strtoul(*in, &end, base); + if (errno != 0 || end[0] != dlm || val > lim) + return -EINVAL; + *fd = (uint32_t)val; + *in = end + 1; + return 0; +} + +static int +parse_ipv4_net(char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint32_t a, b, c, d, m; + + if (get_cb_field(&in, &a, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &b, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &c, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &d, 0, UINT8_MAX, '/')) + return -EINVAL; + if (get_cb_field(&in, &m, 0, sizeof(uint32_t) * CHAR_BIT, 0)) + return -EINVAL; + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + uint32_t temp; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + if (get_cb_field(&in[CB_FLD_SRC_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + 
ntuple_filter->src_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_SRC_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_DST_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_DST_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, '/')) + return -EINVAL; + ntuple_filter->proto = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, 0)) + return -EINVAL; + ntuple_filter->proto_mask = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PRIORITY], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->priority = (uint16_t)temp; + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct rte_eth_ntuple_filter *ntuple_filter) +{ + int ret = 0; + struct rte_flow_error error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct 
rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify *flow_classify; + uint8_t ipv4_proto; + + /* set up parameters for validate and create */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case PROTO_UDP: + if (udp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: UDP classify rule capacity %d reached\n", + udp_num_classify); + ret = -1; + break; + } + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + 
break; + case PROTO_TCP: + if (tcp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: TCP classify rule capacity %d reached\n", + tcp_num_classify); + ret = -1; + break; + } + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case PROTO_SCTP: + if (sctp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: SCTP classify rule capacity %d reached\n", + sctp_num_classify); + ret = -1; + break; + } + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + break; + } + + if (ret == -1) + return 0; + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + actions[0] = count_action; + 
actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_ipv4_5tuple, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, + "flow classify validate failed ipv4_proto = %u\n", + ipv4_proto); + + flow_classify = rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_ipv4_5tuple, + actions, &error); + if (flow_classify == NULL) + rte_exit(EXIT_FAILURE, + "flow classify create failed ipv4_proto = %u\n", + ipv4_proto); + + switch (ipv4_proto) { + case PROTO_UDP: + udp_flow_classify[udp_num_classify] = flow_classify; + udp_num_classify++; + break; + case PROTO_TCP: + tcp_flow_classify[tcp_num_classify] = flow_classify; + tcp_num_classify++; + break; + case PROTO_SCTP: + sctp_flow_classify[sctp_num_classify] = flow_classify; + sctp_num_classify++; + break; + default: + break; + } + return 0; +} + +static int +add_rules(const char *rule_path) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s usage:\n", prgname); + printf("[EAL options] -- --"OPTION_RULE_IPV4"=FILE: "); + printf("specify the ipv4 rules file.\n"); + printf("Each rule occupies one line in the file.\n"); +} + +/* Parse the 
argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + {NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* + * The main function, which does initialization and calls the per-lcore + * functions. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. 
*/ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) + rte_exit(EXIT_FAILURE, "Failed to create table_acl\n"); + + /* read file of IPv4 5 tuple rules and initialise parameters + * for rte_flow_classify_validate and rte_flow_classify_create + */ + + if (add_rules(parm_config.rule_ipv4_name)) + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + + /* Call lcore_main on the master core only. 
*/ + lcore_main(); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..dfa0631 --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0x0000 33 : 0x0000 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v7 3/4] test: add packet burst generator functions 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 0/4] " Bernard Iremonger ` (2 preceding siblings ...) 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-10-02 9:31 ` Bernard Iremonger 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-02 9:31 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger add initialize_tcp_header function add initialize_sctp_header function add initialize_ipv4_header_proto function add generate_packet_burst_proto function Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/packet_burst_generator.c | 191 +++++++++++++++++++++++++++++++++++++ test/test/packet_burst_generator.h | 22 ++++- 2 files changed, 211 insertions(+), 2 deletions(-) diff --git a/test/test/packet_burst_generator.c b/test/test/packet_burst_generator.c index a93c3b5..8f4ddcc 100644 --- a/test/test/packet_burst_generator.c +++ b/test/test/packet_burst_generator.c @@ -134,6 +134,36 @@ return pkt_len; } +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct tcp_hdr)); + + memset(tcp_hdr, 0, sizeof(struct tcp_hdr)); + tcp_hdr->src_port = rte_cpu_to_be_16(src_port); + tcp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + + return pkt_len; +} + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct sctp_hdr)); + + sctp_hdr->src_port = rte_cpu_to_be_16(src_port); + sctp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + sctp_hdr->tag = 0; + 
sctp_hdr->cksum = 0; /* No SCTP checksum. */ + + return pkt_len; +} uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -198,7 +228,53 @@ return pkt_len; } +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto) +{ + uint16_t pkt_len; + unaligned_uint16_t *ptr16; + uint32_t ip_cksum; + + /* + * Initialize IP header. + */ + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct ipv4_hdr)); + + ip_hdr->version_ihl = IP_VHL_DEF; + ip_hdr->type_of_service = 0; + ip_hdr->fragment_offset = 0; + ip_hdr->time_to_live = IP_DEFTTL; + ip_hdr->next_proto_id = proto; + ip_hdr->packet_id = 0; + ip_hdr->total_length = rte_cpu_to_be_16(pkt_len); + ip_hdr->src_addr = rte_cpu_to_be_32(src_addr); + ip_hdr->dst_addr = rte_cpu_to_be_32(dst_addr); + + /* + * Compute IP header checksum. + */ + ptr16 = (unaligned_uint16_t *)ip_hdr; + ip_cksum = 0; + ip_cksum += ptr16[0]; ip_cksum += ptr16[1]; + ip_cksum += ptr16[2]; ip_cksum += ptr16[3]; + ip_cksum += ptr16[4]; + ip_cksum += ptr16[6]; ip_cksum += ptr16[7]; + ip_cksum += ptr16[8]; ip_cksum += ptr16[9]; + /* + * Reduce 32 bit checksum to 16 bits and complement it. 
+ */ + ip_cksum = ((ip_cksum & 0xFFFF0000) >> 16) + + (ip_cksum & 0x0000FFFF); + ip_cksum %= 65536; + ip_cksum = (~ip_cksum) & 0x0000FFFF; + if (ip_cksum == 0) + ip_cksum = 0xFFFF; + ip_hdr->hdr_checksum = (uint16_t) ip_cksum; + + return pkt_len; +} /* * The maximum number of segments per packet is used when creating @@ -283,3 +359,118 @@ return nb_pkt; } + +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs) +{ + int i, nb_pkt = 0; + size_t eth_hdr_size; + + struct rte_mbuf *pkt_seg; + struct rte_mbuf *pkt; + + for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) { + pkt = rte_pktmbuf_alloc(mp); + if (pkt == NULL) { +nomore_mbuf: + if (nb_pkt == 0) + return -1; + break; + } + + pkt->data_len = pkt_len; + pkt_seg = pkt; + for (i = 1; i < nb_pkt_segs; i++) { + pkt_seg->next = rte_pktmbuf_alloc(mp); + if (pkt_seg->next == NULL) { + pkt->nb_segs = i; + rte_pktmbuf_free(pkt); + goto nomore_mbuf; + } + pkt_seg = pkt_seg->next; + pkt_seg->data_len = pkt_len; + } + pkt_seg->next = NULL; /* Last segment of packet. */ + + /* + * Copy headers in first packet segment(s). 
+ */ + if (vlan_enabled) + eth_hdr_size = sizeof(struct ether_hdr) + + sizeof(struct vlan_hdr); + else + eth_hdr_size = sizeof(struct ether_hdr); + + copy_buf_to_pkt(eth_hdr, eth_hdr_size, pkt, 0); + + if (ipv4) { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv4_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + default: + break; + } + } else { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv6_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + default: + break; + } + } + + /* + * Complete first mbuf of packet and append it to the + * burst of packets to be transmitted. 
+ */ + pkt->nb_segs = nb_pkt_segs; + pkt->pkt_len = pkt_len; + pkt->l2_len = eth_hdr_size; + + if (ipv4) { + pkt->vlan_tci = ETHER_TYPE_IPv4; + pkt->l3_len = sizeof(struct ipv4_hdr); + } else { + pkt->vlan_tci = ETHER_TYPE_IPv6; + pkt->l3_len = sizeof(struct ipv6_hdr); + } + + pkts_burst[nb_pkt] = pkt; + } + + return nb_pkt; +} diff --git a/test/test/packet_burst_generator.h b/test/test/packet_burst_generator.h index edc1044..3315bfa 100644 --- a/test/test/packet_burst_generator.h +++ b/test/test/packet_burst_generator.h @@ -43,7 +43,8 @@ #include <rte_arp.h> #include <rte_ip.h> #include <rte_udp.h> - +#include <rte_tcp.h> +#include <rte_sctp.h> #define IPV4_ADDR(a, b, c, d)(((a & 0xff) << 24) | ((b & 0xff) << 16) | \ ((c & 0xff) << 8) | (d & 0xff)) @@ -65,6 +66,13 @@ initialize_udp_header(struct udp_hdr *udp_hdr, uint16_t src_port, uint16_t dst_port, uint16_t pkt_data_len); +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -74,15 +82,25 @@ initialize_ipv4_header(struct ipv4_hdr *ip_hdr, uint32_t src_addr, uint32_t dst_addr, uint16_t pkt_data_len); +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto); + int generate_packet_burst(struct rte_mempool *mp, struct rte_mbuf **pkts_burst, struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, uint8_t ipv4, struct udp_hdr *udp_hdr, int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); + 
#ifdef __cplusplus } #endif - #endif /* PACKET_BURST_GENERATOR_H_ */ -- 1.9.1
* [dpdk-dev] [PATCH v7 4/4] test: flow classify library unit tests 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 0/4] " Bernard Iremonger ` (3 preceding siblings ...) 2017-10-02 9:31 ` [dpdk-dev] [PATCH v7 3/4] test: add packet burst generator functions Bernard Iremonger @ 2017-10-02 9:31 ` Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-10-02 9:31 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Add the flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by the librte_flow_classify APIs. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify APIs: add bad parameter tests add bad pattern tests add bad action tests add good parameter tests Initialise IPv4 UDP traffic for use by the UDP test for rte_flow_classify_query. Initialise IPv4 TCP traffic for use by the TCP test for rte_flow_classify_query. Initialise IPv4 SCTP traffic for use by the SCTP test for rte_flow_classify_query. 
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 698 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 240 ++++++++++++++ 3 files changed, 939 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index 42d9a49..073e1ed 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -106,6 +106,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..e7fbe73 --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,698 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +static void *table_acl; +static uint32_t entry_size; + +/* + * test functions by passing invalid or + * non-workable parameters. 
+ */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_classify *classify; + int ret; + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, NULL); + if (classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, &error); + if (classify) { + printf("Line %i: flow_classify_create ", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + 
attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item_bad; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[1] = 
ipv4_udp_item_1; + pattern[2] = udp_item_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[2] = udp_item_1; + pattern[3] = end_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: 
flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf("should have failed!\n"); + return -1; + } + return 0; +} + +static int +init_ipv4_udp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 UDP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_tcp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct tcp_hdr pkt_tcp_hdr; + uint32_t src_addr = IPV4_ADDR(1, 2, 3, 4); + uint32_t dst_addr = IPV4_ADDR(5, 6, 7, 
8); + uint16_t src_port = 16; + uint16_t dst_port = 17; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 TCP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_TCP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_tcp_header(&pkt_tcp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + TCP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_TCP, + &pkt_tcp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_sctp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct sctp_hdr pkt_sctp_hdr; + uint32_t src_addr = IPV4_ADDR(11, 12, 13, 14); + uint32_t dst_addr = IPV4_ADDR(15, 16, 17, 18); + uint16_t src_port = 10; + uint16_t dst_port = 11; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 SCTP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_SCTP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_sctp_header(&pkt_sctp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + SCTP pktlen %u\n\n", pktlen); + + return 
generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_SCTP, + &pkt_sctp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_udp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_udp_ipv4_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, 
entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &udp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_tcp(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_tcp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_tcp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_tcp_item_1; + pattern[2] = tcp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &tcp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + 
return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_sctp(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_sctp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_sctp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_sctp_item_1; + pattern[2] = sctp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &sctp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + int socket_id = 0; + int ret; + + /* initialise ACL table params */ + 
table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) { + printf("Line %i: f_create has failed!\n", __LINE__); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + if (test_query_tcp() < 0) + return -1; + if (test_query_sctp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..95ddc94 --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,240 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP, TCP and SCTP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. 
*/ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* test UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +/* test TCP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 
dst spec 5.6.7.8 + * dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0, IPv4(1, 2, 3, 4), IPv4(5, 6, 7, 8)} +}; + +static struct rte_flow_item_tcp tcp_spec_1 = { + { 16, 17, 0, 0, 0, 0, 0, 0, 0} +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +/* test SCTP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 dst spec 5.6.7.8 + * dst mask 255.255.255.00 / sctp src is 16 dst is 17/ end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, IPv4(11, 12, 13, 14), + IPv4(15, 16, 17, 18)} +}; + +static struct rte_flow_item_sctp sctp_spec_1 = { + { 10, 11, 0, 0} +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + + +/* test actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* test attributes */ +static struct rte_flow_attr attr; + +/* test error */ +static struct rte_flow_error error; + +/* test pattern */ +static struct rte_flow_item pattern[4]; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +/* flow classify data for 
TCP burst */ +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&tcp_ntuple_stats +}; + +/* flow classify data for SCTP burst */ +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1
* [dpdk-dev] [PATCH v6 1/4] librte_flow_classify: add librte_flow_classify library 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 0/6] " Bernard Iremonger 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 0/4] " Bernard Iremonger @ 2017-09-29 9:18 ` Bernard Iremonger 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger ` (2 subsequent siblings) 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-29 9:18 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following library APIs are implemented: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query The following librte_table ACL APIs are used: f_create to create an ACL table. f_add to add an ACL rule to the table. f_del to delete an ACL rule from the table. f_lookup to match packets with the ACL rules. The f_add entry data is used for matching. The library supports counting of IPv4 five-tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
updated MAINTAINERS file Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- MAINTAINERS | 7 + config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 460 +++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 ++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- 13 files changed, 1368 insertions(+), 1 deletion(-) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/MAINTAINERS b/MAINTAINERS index 8df2a7f..4b875ad 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -677,6 +677,13 @@ F: doc/guides/prog_guide/pdump_lib.rst F: app/pdump/ F: doc/guides/tools/pdump.rst +Flow classify +M: Bernard Iremonger <bernard.iremonger@intel.com> +F: lib/librte_flow_classify/ +F: test/test/test_flow_classify* +F: examples/flow_classify/ +F: doc/guides/sample_app_ug/flow_classify.rst +F: doc/guides/prog_guide/flow_classify_lib.rst Packet Framework ---------------- diff --git a/config/common_base b/config/common_base index 12f6be9..0638a37 100644 --- a/config/common_base +++ b/config/common_base @@ -658,6 +658,12 @@ CONFIG_RTE_LIBRTE_GRO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff 
--git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 19e0d4f..a2fa281 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -105,6 +105,7 @@ The public API headers are grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 823554f..4e43a66 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf @@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_hash \ lib/librte_ip_frag \ diff --git a/lib/Makefile b/lib/Makefile index 86caba1..21fc3b0 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -82,6 +82,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index ec8dba7..f975bde 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -87,6 +87,7 @@ struct rte_logs { #define RTE_LOGTYPE_CRYPTODEV 17 /**< Log related to cryptodev. */ #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ +#define RTE_LOGTYPE_CLASSIFY 20 /**< Log related to flow classify. 
*/ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..7f08382 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,460 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; +static uint32_t unique_id = 1; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct ipv4_5tuple_data { + uint16_t priority; /**< flow API uses priority 0 to 8, 0 is highest */ + uint32_t userdata; /**< value returned for match */ + uint8_t tcp_flags; /**< tcp_flags only meaningful TCP protocol */ +}; + +struct rte_flow_classify { + uint32_t id; /**< unique ID of classify object */ + enum rte_flow_classify_type type; /**< classify type */ + struct rte_flow_action action; /**< action when match found */ + struct ipv4_5tuple_data flow_extra_data; /**< extra flow data */ + struct rte_table_acl_rule_add_params key_add; /**< add ACL rule key */ + struct rte_table_acl_rule_delete_params + key_del; /**< delete ACL rule key */ + int key_found; /**< ACL rule key found in table */ + void *entry; /**< pointer to buffer to hold ACL rule */ + void *entry_ptr; /**< handle to the table entry for the ACL rule */ +}; + +/* number of packets in a burst */ +#define MAX_PKT_BURST 32 + +struct mbuf_search { + struct rte_mbuf *m_ipv4[MAX_PKT_BURST]; + uint32_t res_ipv4[MAX_PKT_BURST]; + int num_ipv4; +}; + +int +rte_flow_classify_validate(void *table_handle, + const 
struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + (void) table_handle; + + if (!error) + return -EINVAL; + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + 
printf("ipv4_key_add: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static struct rte_flow_classify * +allocate_5tuple(void) +{ + struct rte_flow_classify *flow_classify; + + flow_classify = malloc(sizeof(struct rte_flow_classify)); + if (!flow_classify) + return flow_classify; + + memset(flow_classify, 0, sizeof(struct rte_flow_classify)); + flow_classify->id = 
unique_id++; + flow_classify->type = RTE_FLOW_CLASSIFY_TYPE_5TUPLE; + memcpy(&flow_classify->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + flow_classify->flow_extra_data.priority = ntuple_filter.priority; + flow_classify->flow_extra_data.tcp_flags = ntuple_filter.tcp_flags; + + /* key add values */ + flow_classify->key_add.priority = ntuple_filter.priority; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + + flow_classify->key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + flow_classify->key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + + flow_classify->key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + flow_classify->key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_add(&flow_classify->key_add); +#endif + + /* key delete values */ + memcpy(&flow_classify->key_del.field_value[PROTO_FIELD_IPV4], + &flow_classify->key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_delete(&flow_classify->key_del); +#endif + return flow_classify; +} + +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct 
rte_flow_error *error) +{ + struct rte_flow_classify *flow_classify; + int ret; + + if (!error) + return NULL; + + if (!table_handle) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "NULL table_handle."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_validate(table_handle, attr, pattern, + actions, error); + if (ret < 0) + return NULL; + + flow_classify = allocate_5tuple(); + if (!flow_classify) + return NULL; + + flow_classify->entry = malloc(entry_size); + if (!flow_classify->entry) { + free(flow_classify); + flow_classify = NULL; + return NULL; + } + memset(flow_classify->entry, 0, entry_size); + memmove(flow_classify->entry, &flow_classify->id, sizeof(uint32_t)); + + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify->key_add, + flow_classify->entry, &flow_classify->key_found, + &flow_classify->entry_ptr); + if (ret) { + free(flow_classify->entry); + free(flow_classify); + flow_classify = NULL; + return NULL; + } + + return flow_classify; +} + +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error) +{ + int ret; + int key_found; + + if (!error) + return -EINVAL; + + if (!flow_classify || !table_handle) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + ret = rte_table_acl_ops.f_delete(table_handle, + &flow_classify->key_del, &key_found, + flow_classify->entry); + if ((ret == 0) && key_found) { + free(flow_classify->entry); + free(flow_classify); + } else + ret 
= -1; + return ret; +} + +static int +flow_match(void *table, struct rte_mbuf **pkts_in, const uint16_t nb_pkts, + uint64_t *count, uint32_t id) +{ + int ret = -1; + int i; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + uint32_t classify_id; + void *entries[RTE_PORT_IN_BURST_SIZE_MAX]; + + if (nb_pkts) { + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = rte_table_acl_ops.f_lookup(table, pkts_in, + pkts_mask, &lookup_hit_mask, entries); + if (!ret) { + for (i = 0; i < nb_pkts && + (lookup_hit_mask & (1 << i)); i++) { + memmove(&classify_id, entries[i], + sizeof(uint32_t)); + if (id == classify_id) + (*count)++; /* match found */ + } + if (*count == 0) + ret = -1; + } else + ret = -1; + } + return ret; +} + +static int +action_apply(const struct rte_flow_classify *flow_classify, + struct rte_flow_classify_stats *stats, uint64_t count) +{ + struct rte_flow_classify_5tuple_stats *ntuple_stats; + + switch (flow_classify->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + ntuple_stats = + (struct rte_flow_classify_5tuple_stats *)stats->stats; + ntuple_stats->counter1 = count; + stats->used_space = 1; + break; + default: + return -ENOTSUP; + } + + return 0; +} + +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error) +{ + uint64_t count = 0; + int ret = -EINVAL; + + if (!error) + return ret; + + if (!table_handle || !flow_classify || !pkts || !stats) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + if ((stats->available_space == 0) || (nb_pkts == 0)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + ret = flow_match(table_handle, pkts, nb_pkts, &count, + flow_classify->id); + if (ret == 0) + ret = action_apply(flow_classify, stats, count); + + return ret; 
+} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..2b200fb --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,207 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * The application should define the flow and the measurement criteria (action) for it. + * + * The library doesn't maintain any flow records itself; instead, flow information is + * returned to the upper layer only for the given packets. + * + * It is the application's responsibility to call rte_flow_classify_query() + * for a group of packets, just after receiving them or before transmitting them. + * The application should provide the flow type it is interested in and the measurement to apply + * to that flow in the rte_flow_classify_create() API, and should provide the + * rte_flow_classify object and storage for the results to the + * rte_flow_classify_query() API. + * + * Usage: + * - the application calls rte_flow_classify_create() to create a rte_flow_classify + * object. + * - the application calls rte_flow_classify_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * convert packet information to flow information with some measurements. + * - rte_flow_classify objects can be destroyed when they are no longer needed + * via rte_flow_classify_destroy() + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +enum rte_flow_classify_type { + RTE_FLOW_CLASSIFY_TYPE_NONE, /**< no type */ + RTE_FLOW_CLASSIFY_TYPE_5TUPLE, /**< IPv4 5tuple type */ +}; + +struct rte_flow_classify; + +/** + * Flow stats + * + * For a single action an array of stats can be returned by the API. Technically each + * packet can return one stat at most. + * + * Storage for stats is provided by the application; the library should know the available + * space and should return the amount of used space. + * + * The stats type is based on what measurement (action) was requested by the application. 
+ * + */ +struct rte_flow_classify_stats { + const unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +struct rte_flow_classify_5tuple_stats { + uint64_t counter1; /**< count of packets that match 5tuple pattern */ +}; + +/** + * Create a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] entry_size + * Size of ACL rule + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END pattern item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. + */ +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Validate a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END pattern item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Destroy a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Flow rule handle to destroy + * @param[out] error + * Perform verbose error reporting if not NULL. 
Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error); + +/** + * Get flow classification stats for given packets. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Pointer to Flow rule object + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] stats + * To store stats defined by action + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..e5a3885 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + 
uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find the matching parse filter function, if any */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do { \ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++; \ + item = pattern + index; \ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do { \ + act = actions + index; \ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++; \ + act = actions + index; \ + } \ + } while (0) + +/** + * Please be aware there's an 
assumption for all the parsers. + * rte_flow_item is using big endian, rte_flow_attr and + * rte_flow_action are using CPU order. + * Because the pattern is used to describe the packets, + * normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is an n-tuple rule. + * And get the n-tuple filter info as well. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP or TCP. + * The next not void item must be END. + * action: + * The first not void action should be QUEUE. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * other members in mask and spec should be set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -rte_errno; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -rte_errno; + } + if (!attr) { + rte_flow_error_set(error, EINVAL, + 
RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -rte_errno; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -rte_errno; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void action is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only support ingress."); + return -rte_errno; + } + + /* not supported */ + if (attr->egress) { +
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Not support egress."); + return -rte_errno; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Error priority."); + return -rte_errno; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..e2c9ecf --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,10 @@ +DPDK_17.08 { + global: + + rte_flow_classify_create; + rte_flow_classify_destroy; + rte_flow_classify_query; + rte_flow_classify_validate; + + 
local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index c25fdd9..909ab95 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port @@ -84,7 +85,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile _LDLIBS-y += --whole-archive - _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash _LDLIBS-$(CONFIG_RTE_LIBRTE_VHOST) += -lrte_vhost _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v6 2/4] examples/flow_classify: flow classify sample application 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 0/6] " Bernard Iremonger 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 0/4] " Bernard Iremonger 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 1/4] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-09-29 9:18 ` Bernard Iremonger 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 3/4] test: add packet burst generator functions Bernard Iremonger 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-29 9:18 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify APIs: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 897 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 968 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved.
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..651fa8f --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,897 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define PROTO_TCP 6 +#define PROTO_UDP 17 +#define PROTO_SCTP 132 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +static void *table_acl; +uint32_t entry_size; +static int udp_num_classify; +static int tcp_num_classify; +static int sctp_num_classify; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static struct rte_flow_classify *udp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *tcp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *sctp_flow_classify[MAX_NUM_CLASSIFY]; + +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&tcp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +static 
struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. */ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. 
*/ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port, classifying the packets and writing to an output port. + */ +static __attribute__((noreturn)) void +lcore_main(void) +{ + struct rte_flow_error error; + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i; + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) { + printf("\n\n"); + printf("WARNING: port %u is on remote NUMA node\n", + port); + printf("to polling thread.\n"); + printf("Performance will not be optimal.\n"); + } + + printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n", + rte_lcore_id()); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + + /* Get burst of RX packets, from first port of pair.
*/ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (udp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + udp_flow_classify[i], + bufs, nb_rx, + &udp_classify_stats, &error); + if (ret) + printf( + "udp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "udp rule [%d] counter1=%lu used_space=%d\n\n", + i, udp_ntuple_stats.counter1, + udp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (tcp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + tcp_flow_classify[i], + bufs, nb_rx, + &tcp_classify_stats, &error); + if (ret) + printf( + "tcp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "tcp rule [%d] counter1=%lu used_space=%d\n\n", + i, tcp_ntuple_stats.counter1, + tcp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (sctp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + sctp_flow_classify[i], + bufs, nb_rx, + &sctp_classify_stats, &error); + if (ret) + printf( + "sctp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "sctp rule [%d] counter1=%lu used_space=%d\n\n", + i, sctp_ntuple_stats.counter1, + sctp_classify_stats.used_space); + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. */ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. 
+ * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +get_cb_field(char **in, uint32_t *fd, int base, unsigned long lim, + char dlm) +{ + unsigned long val; + char *end; + + errno = 0; + val = strtoul(*in, &end, base); + if (errno != 0 || end[0] != dlm || val > lim) + return -EINVAL; + *fd = (uint32_t)val; + *in = end + 1; + return 0; +} + +static int +parse_ipv4_net(char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint32_t a, b, c, d, m; + + if (get_cb_field(&in, &a, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &b, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &c, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &d, 0, UINT8_MAX, '/')) + return -EINVAL; + if (get_cb_field(&in, &m, 0, sizeof(uint32_t) * CHAR_BIT, 0)) + return -EINVAL; + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + uint32_t temp; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read destination address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + if (get_cb_field(&in[CB_FLD_SRC_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; +
ntuple_filter->src_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_SRC_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_DST_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_DST_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, '/')) + return -EINVAL; + ntuple_filter->proto = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, 0)) + return -EINVAL; + ntuple_filter->proto_mask = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PRIORITY], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->priority = (uint16_t)temp; + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct rte_eth_ntuple_filter *ntuple_filter) +{ + int ret = 0; + struct rte_flow_error error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct 
rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify *flow_classify; + uint8_t ipv4_proto; + + /* set up parameters for validate and create */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case PROTO_UDP: + if (udp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: UDP classify rule capacity %d reached\n", + udp_num_classify); + ret = -1; + break; + } + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + 
break; + case PROTO_TCP: + if (tcp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: TCP classify rule capacity %d reached\n", + tcp_num_classify); + ret = -1; + break; + } + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case PROTO_SCTP: + if (sctp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: SCTP classify rule capacity %d reached\n", + sctp_num_classify); + ret = -1; + break; + } + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + break; + } + + if (ret == -1) + return 0; + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + actions[0] = count_action; + 
actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_ipv4_5tuple, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, + "flow classify validate failed ipv4_proto = %u\n", + ipv4_proto); + + flow_classify = rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_ipv4_5tuple, + actions, &error); + if (flow_classify == NULL) + rte_exit(EXIT_FAILURE, + "flow classify create failed ipv4_proto = %u\n", + ipv4_proto); + + switch (ipv4_proto) { + case PROTO_UDP: + udp_flow_classify[udp_num_classify] = flow_classify; + udp_num_classify++; + break; + case PROTO_TCP: + tcp_flow_classify[tcp_num_classify] = flow_classify; + tcp_num_classify++; + break; + case PROTO_SCTP: + sctp_flow_classify[sctp_num_classify] = flow_classify; + sctp_num_classify++; + break; + default: + break; + } + return 0; +} + +static int +add_rules(const char *rule_path) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s usage:\n", prgname); + printf("[EAL options] -- --"OPTION_RULE_IPV4"=FILE: "); + printf("specify the ipv4 rules file.\n"); + printf("Each rule occupies one line in the file.\n"); +} + +/* Parse the 
argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + {NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* + * The main function, which does initialization and calls the per-lcore + * functions. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. 
*/ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) + rte_exit(EXIT_FAILURE, "Failed to create table_acl\n"); + + /* read file of IPv4 5 tuple rules and initialise parameters + * for rte_flow_classify_validate and rte_flow_classify_create + */ + + if (add_rules(parm_config.rule_ipv4_name)) + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + + /* Call lcore_main on the master core only. 
*/ + lcore_main(); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..262763d --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0xffff 33 : 0xffff 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
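The sample application above narrows the source and destination address masks via convert_depth_to_bitmask() before programming the ACL rule. The function name is the sample's; its body is not shown in this series, so the following is a minimal standalone sketch of what such a helper is assumed to do:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch (assumption): convert a CIDR prefix depth (0..32), as read
 * from the "addr/masklen" fields of the rules file, into a 32-bit
 * IPv4 bitmask, e.g. 24 -> 0xffffff00.
 */
uint32_t convert_depth_to_bitmask(uint32_t depth)
{
	if (depth == 0)
		return 0;	/* avoid undefined behaviour: shift by 32 */
	return (uint32_t)(UINT32_MAX << (32 - depth));
}
```

The depth == 0 guard matters: `UINT32_MAX << 32` is undefined in C, so an all-wildcard rule must be special-cased.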
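add_rules() above skips lines flagged by is_bypass_line() and hands the rest to parse_ipv4_5tuple_rule(); neither body appears in this excerpt. A hedged sketch of the two pieces of parsing the rules-file format implies (comment/blank detection, plus one "addr/masklen" token):

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch (assumption): skip '#' comment lines and blank lines,
 * as the add_rules() loop in the sample requires. */
int is_bypass_line(const char *buff)
{
	if (buff[0] == '#' || buff[0] == '\r' || buff[0] == '\n')
		return 1;
	while (*buff != '\0') {
		if (!isspace((unsigned char)*buff))
			return 0;
		buff++;
	}
	return 1;
}

/* Sketch (assumption): parse a single "addr/masklen" token such as
 * "2.2.2.3/24" into a host-order address and a prefix depth. */
int parse_ipv4_net(const char *tok, uint32_t *addr, uint32_t *depth)
{
	unsigned int a, b, c, d, m;

	if (sscanf(tok, "%u.%u.%u.%u/%u", &a, &b, &c, &d, &m) != 5)
		return -1;
	if (a > 255 || b > 255 || c > 255 || d > 255 || m > 32)
		return -1;
	*addr = (a << 24) | (b << 16) | (c << 8) | d;
	*depth = m;
	return 0;
}
```

The real parser must additionally handle the `port : mask` and `proto/mask` columns and the trailing priority; this sketch only covers the address fields.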
* [dpdk-dev] [PATCH v6 3/4] test: add packet burst generator functions 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 0/6] " Bernard Iremonger ` (2 preceding siblings ...) 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 2/4] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-09-29 9:18 ` Bernard Iremonger 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 4/4] test: flow classify library unit tests Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-29 9:18 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger add initialize_tcp_header function add initialize_sctp_header function add initialize_ipv4_header_proto function add generate_packet_burst_proto function Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/packet_burst_generator.c | 191 +++++++++++++++++++++++++++++++++++++ test/test/packet_burst_generator.h | 22 ++++- 2 files changed, 211 insertions(+), 2 deletions(-) diff --git a/test/test/packet_burst_generator.c b/test/test/packet_burst_generator.c index a93c3b5..8f4ddcc 100644 --- a/test/test/packet_burst_generator.c +++ b/test/test/packet_burst_generator.c @@ -134,6 +134,36 @@ return pkt_len; } +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct tcp_hdr)); + + memset(tcp_hdr, 0, sizeof(struct tcp_hdr)); + tcp_hdr->src_port = rte_cpu_to_be_16(src_port); + tcp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + + return pkt_len; +} + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct sctp_hdr)); + + sctp_hdr->src_port = rte_cpu_to_be_16(src_port); + sctp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + sctp_hdr->tag = 0; + 
sctp_hdr->cksum = 0; /* No SCTP checksum. */ + + return pkt_len; +} uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -198,7 +228,53 @@ return pkt_len; } +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto) +{ + uint16_t pkt_len; + unaligned_uint16_t *ptr16; + uint32_t ip_cksum; + + /* + * Initialize IP header. + */ + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct ipv4_hdr)); + + ip_hdr->version_ihl = IP_VHL_DEF; + ip_hdr->type_of_service = 0; + ip_hdr->fragment_offset = 0; + ip_hdr->time_to_live = IP_DEFTTL; + ip_hdr->next_proto_id = proto; + ip_hdr->packet_id = 0; + ip_hdr->total_length = rte_cpu_to_be_16(pkt_len); + ip_hdr->src_addr = rte_cpu_to_be_32(src_addr); + ip_hdr->dst_addr = rte_cpu_to_be_32(dst_addr); + + /* + * Compute IP header checksum. + */ + ptr16 = (unaligned_uint16_t *)ip_hdr; + ip_cksum = 0; + ip_cksum += ptr16[0]; ip_cksum += ptr16[1]; + ip_cksum += ptr16[2]; ip_cksum += ptr16[3]; + ip_cksum += ptr16[4]; + ip_cksum += ptr16[6]; ip_cksum += ptr16[7]; + ip_cksum += ptr16[8]; ip_cksum += ptr16[9]; + /* + * Reduce 32 bit checksum to 16 bits and complement it. 
+ */ + ip_cksum = ((ip_cksum & 0xFFFF0000) >> 16) + + (ip_cksum & 0x0000FFFF); + ip_cksum %= 65536; + ip_cksum = (~ip_cksum) & 0x0000FFFF; + if (ip_cksum == 0) + ip_cksum = 0xFFFF; + ip_hdr->hdr_checksum = (uint16_t) ip_cksum; + + return pkt_len; +} /* * The maximum number of segments per packet is used when creating @@ -283,3 +359,118 @@ return nb_pkt; } + +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs) +{ + int i, nb_pkt = 0; + size_t eth_hdr_size; + + struct rte_mbuf *pkt_seg; + struct rte_mbuf *pkt; + + for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) { + pkt = rte_pktmbuf_alloc(mp); + if (pkt == NULL) { +nomore_mbuf: + if (nb_pkt == 0) + return -1; + break; + } + + pkt->data_len = pkt_len; + pkt_seg = pkt; + for (i = 1; i < nb_pkt_segs; i++) { + pkt_seg->next = rte_pktmbuf_alloc(mp); + if (pkt_seg->next == NULL) { + pkt->nb_segs = i; + rte_pktmbuf_free(pkt); + goto nomore_mbuf; + } + pkt_seg = pkt_seg->next; + pkt_seg->data_len = pkt_len; + } + pkt_seg->next = NULL; /* Last segment of packet. */ + + /* + * Copy headers in first packet segment(s). 
+ */ + if (vlan_enabled) + eth_hdr_size = sizeof(struct ether_hdr) + + sizeof(struct vlan_hdr); + else + eth_hdr_size = sizeof(struct ether_hdr); + + copy_buf_to_pkt(eth_hdr, eth_hdr_size, pkt, 0); + + if (ipv4) { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv4_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + default: + break; + } + } else { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv6_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + default: + break; + } + } + + /* + * Complete first mbuf of packet and append it to the + * burst of packets to be transmitted. 
+ */ + pkt->nb_segs = nb_pkt_segs; + pkt->pkt_len = pkt_len; + pkt->l2_len = eth_hdr_size; + + if (ipv4) { + pkt->vlan_tci = ETHER_TYPE_IPv4; + pkt->l3_len = sizeof(struct ipv4_hdr); + } else { + pkt->vlan_tci = ETHER_TYPE_IPv6; + pkt->l3_len = sizeof(struct ipv6_hdr); + } + + pkts_burst[nb_pkt] = pkt; + } + + return nb_pkt; +} diff --git a/test/test/packet_burst_generator.h b/test/test/packet_burst_generator.h index edc1044..3315bfa 100644 --- a/test/test/packet_burst_generator.h +++ b/test/test/packet_burst_generator.h @@ -43,7 +43,8 @@ #include <rte_arp.h> #include <rte_ip.h> #include <rte_udp.h> - +#include <rte_tcp.h> +#include <rte_sctp.h> #define IPV4_ADDR(a, b, c, d)(((a & 0xff) << 24) | ((b & 0xff) << 16) | \ ((c & 0xff) << 8) | (d & 0xff)) @@ -65,6 +66,13 @@ initialize_udp_header(struct udp_hdr *udp_hdr, uint16_t src_port, uint16_t dst_port, uint16_t pkt_data_len); +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -74,15 +82,25 @@ initialize_ipv4_header(struct ipv4_hdr *ip_hdr, uint32_t src_addr, uint32_t dst_addr, uint16_t pkt_data_len); +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto); + int generate_packet_burst(struct rte_mempool *mp, struct rte_mbuf **pkts_burst, struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, uint8_t ipv4, struct udp_hdr *udp_hdr, int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); + 
#ifdef __cplusplus } #endif - #endif /* PACKET_BURST_GENERATOR_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
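initialize_ipv4_header_proto() above computes the IPv4 header checksum by summing the header's 16-bit words (skipping the checksum word itself), folding the carry back into the low 16 bits, complementing, and mapping a result of 0 to 0xFFFF. The same arithmetic can be checked standalone:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Standalone restatement of the checksum fold used by
 * initialize_ipv4_header_proto(): one's-complement sum of the
 * header's 16-bit words (with the checksum word counted as 0),
 * carries folded back in, result complemented, and 0 mapped to
 * 0xFFFF. A second fold is done here for generality; with a
 * 20-byte header one fold cannot carry again. */
uint16_t ipv4_hdr_cksum(const uint16_t *words, size_t n)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i < n; i++)
		sum += words[i];
	sum = (sum >> 16) + (sum & 0xFFFF);
	sum = (sum >> 16) + (sum & 0xFFFF);	/* fold any second carry */
	sum = ~sum & 0xFFFF;
	return sum == 0 ? 0xFFFF : (uint16_t)sum;
}
```

As a sanity check, the well-known RFC 1071-style example header 45 00 00 73 00 00 40 00 40 11 (cksum) c0 a8 00 01 c0 a8 00 c7 yields a checksum of 0xb861.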
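generate_packet_burst_proto() above copies each header into the first mbuf segment at a fixed offset: Ethernet (optionally plus a VLAN tag) at 0, the IP header after it, and the L4 header after that. A small sketch of that offset arithmetic, using assumed fixed wire sizes where the patch uses sizeof() on the DPDK header structs (14-byte Ethernet, 4-byte VLAN tag, option-less 20-byte IPv4, 40-byte IPv6):

```c
#include <assert.h>
#include <stddef.h>

/* Wire sizes assumed for this sketch only; the patch itself uses
 * sizeof(struct ether_hdr) etc. */
#define ETH_HDR_LEN	14
#define VLAN_HDR_LEN	4
#define IPV4_HDR_LEN	20	/* no options */
#define IPV6_HDR_LEN	40

/* Offset at which the L4 (UDP/TCP/SCTP) header lands in the first
 * segment, mirroring the copy_buf_to_pkt() offsets above. */
size_t l4_offset(int vlan_enabled, int ipv4)
{
	size_t off = ETH_HDR_LEN;

	if (vlan_enabled)
		off += VLAN_HDR_LEN;
	off += ipv4 ? IPV4_HDR_LEN : IPV6_HDR_LEN;
	return off;
}
```

This also explains why the patch branches on `ipv4` twice: once to choose the IP header size for the copy offset, and once to set `pkt->l3_len` for the mbuf metadata.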
* [dpdk-dev] [PATCH v6 4/4] test: flow classify library unit tests 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 0/6] " Bernard Iremonger ` (3 preceding siblings ...) 2017-09-29 9:18 ` [dpdk-dev] [PATCH v6 3/4] test: add packet burst generator functions Bernard Iremonger @ 2017-09-29 9:18 ` Bernard Iremonger 4 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-29 9:18 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by librte_flow_classify API's. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify API's: add bad parameter tests add bad pattern tests add bad action tests add good parameter tests Initialise ipv4 udp traffic for use by the udp test for rte_flow_classify_query. Initialise ipv4 tcp traffic for use by the tcp test for rte_flow_classify_query. Initialise ipv4 sctp traffic for use by the sctp test for rte_flow_classify_query. 
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 698 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 240 ++++++++++++++ 3 files changed, 939 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index 42d9a49..073e1ed 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -106,6 +106,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..e7fbe73 --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,698 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +static void *table_acl; +static uint32_t entry_size; + +/* + * test functions by passing invalid or + * non-workable parameters. 
+ */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_classify *classify; + int ret; + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, NULL); + if (classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, &error); + if (classify) { + printf("Line %i: flow_classify_create ", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + 
attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item_bad; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[1] = 
ipv4_udp_item_1; + pattern[2] = udp_item_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[2] = udp_item_1; + pattern[3] = end_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: 
flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf("should have failed!\n"); + return -1; + } + return 0; +} + +static int +init_ipv4_udp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 UDP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_tcp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct tcp_hdr pkt_tcp_hdr; + uint32_t src_addr = IPV4_ADDR(1, 2, 3, 4); + uint32_t dst_addr = IPV4_ADDR(5, 6, 7, 
8); + uint16_t src_port = 16; + uint16_t dst_port = 17; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 TCP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_TCP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_tcp_header(&pkt_tcp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + TCP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_TCP, + &pkt_tcp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_sctp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct sctp_hdr pkt_sctp_hdr; + uint32_t src_addr = IPV4_ADDR(11, 12, 13, 14); + uint32_t dst_addr = IPV4_ADDR(15, 16, 17, 18); + uint16_t src_port = 10; + uint16_t dst_port = 11; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 SCTP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_SCTP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_sctp_header(&pkt_sctp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + SCTP pktlen %u\n\n", pktlen); + + return 
generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_SCTP, + &pkt_sctp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_udp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_udp_ipv4_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, 
entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &udp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_tcp(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_tcp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_tcp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_tcp_item_1; + pattern[2] = tcp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &tcp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + 
return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_sctp(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_sctp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_sctp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_sctp_item_1; + pattern[2] = sctp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &sctp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + int socket_id = 0; + int ret; + + /* initialise ACL table params */ + 
table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) { + printf("Line %i: f_create has failed!\n", __LINE__); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + if (test_query_tcp() < 0) + return -1; + if (test_query_sctp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..95ddc94 --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,240 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP, TCP and SCTP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. 
*/ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* test UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +/* test TCP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 
dst spec 5.6.7.8 + * dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0, IPv4(1, 2, 3, 4), IPv4(5, 6, 7, 8)} +}; + +static struct rte_flow_item_tcp tcp_spec_1 = { + { 16, 17, 0, 0, 0, 0, 0, 0, 0} +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +/* test SCTP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 dst spec 5.6.7.8 + * dst mask 255.255.255.00 / sctp src is 16 dst is 17/ end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, IPv4(11, 12, 13, 14), + IPv4(15, 16, 17, 18)} +}; + +static struct rte_flow_item_sctp sctp_spec_1 = { + { 10, 11, 0, 0} +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + + +/* test actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* test attributes */ +static struct rte_flow_attr attr; + +/* test error */ +static struct rte_flow_error error; + +/* test pattern */ +static struct rte_flow_item pattern[4]; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +/* flow classify data for 
TCP burst */ +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&tcp_ntuple_stats +}; + +/* flow classify data for SCTP burst */ +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and delete functions 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 0/6] " Bernard Iremonger @ 2017-09-07 16:43 ` Bernard Iremonger 2017-09-18 15:29 ` Singh, Jasvinder 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 2/6] librte_table: fix acl lookup function Bernard Iremonger ` (4 subsequent siblings) 6 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-09-07 16:43 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_entry_add() function was returning data from acl_memory instead of acl_rule_memory. It was also returning data from entry instead of entry_ptr. The rte_table_acl_entry_delete() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index 3c05e4a..e84b437 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -316,8 +316,7 @@ struct rte_table_acl { if (status == 0) { *key_found = 1; *entry_ptr = &acl->memory[i * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); - + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } } @@ -353,8 +352,8 @@ struct rte_table_acl { rte_acl_free(acl->ctx); acl->ctx = ctx; *key_found = 0; - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } @@ -435,7 +434,7 @@ struct rte_table_acl { acl->ctx = ctx; *key_found = 1; if (entry != NULL) - memcpy(entry, &acl->memory[pos * 
acl->entry_size], + memcpy(entry, &acl->acl_rule_memory[pos * acl->entry_size], acl->entry_size); return 0; -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and delete functions 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and delete functions Bernard Iremonger @ 2017-09-18 15:29 ` Singh, Jasvinder 2017-09-20 12:21 ` Dumitrescu, Cristian 0 siblings, 1 reply; 145+ messages in thread From: Singh, Jasvinder @ 2017-09-18 15:29 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian, adrien.mazarguil Cc: Iremonger, Bernard, stable Hi Bernard, <snip> > --- a/lib/librte_table/rte_table_acl.c > +++ b/lib/librte_table/rte_table_acl.c > @@ -316,8 +316,7 @@ struct rte_table_acl { > if (status == 0) { > *key_found = 1; > *entry_ptr = &acl->memory[i * acl->entry_size]; > - memcpy(*entry_ptr, entry, acl->entry_size); > - > + memcpy(entry, *entry_ptr, acl->entry_size); > return 0; > } > } In this case, the table entry being added is already present in the table. So, first the pointer to that entry is retrieved from the memory[] that stores the pipeline table data (struct rte_pipeline_table_entry) associated with the key, and then the changes (action and metadata) are stored in the internal table pointed to by action_table. So, the above fix doesn't seem correct. > @@ -353,8 +352,8 @@ struct rte_table_acl { > rte_acl_free(acl->ctx); > acl->ctx = ctx; > *key_found = 0; > - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; > - memcpy(*entry_ptr, entry, acl->entry_size); > + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; > + memcpy(entry, *entry_ptr, acl->entry_size); > > return 0; > } > @@ -435,7 +434,7 @@ struct rte_table_acl { > acl->ctx = ctx; > *key_found = 1; > if (entry != NULL) > - memcpy(entry, &acl->memory[pos * acl->entry_size], > + memcpy(entry, &acl->acl_rule_memory[pos * acl- > >entry_size], > acl->entry_size); The above fixes also do not seem correct. 
As per the documentation, *entry_ptr is intended to store the handle to the pipeline table entry containing the data associated with the current key, instead of pointing to the memory used to store the ACL rules, etc. Please refer to rte_table_acl_create(), where memory is initialized and organized to store the different types of internal tables (pointed to by action_table, acl_rule_list and acl_rule_memory). ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and delete functions 2017-09-18 15:29 ` Singh, Jasvinder @ 2017-09-20 12:21 ` Dumitrescu, Cristian 2017-09-29 8:25 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Dumitrescu, Cristian @ 2017-09-20 12:21 UTC (permalink / raw) To: Singh, Jasvinder, Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, adrien.mazarguil Cc: Iremonger, Bernard, stable > -----Original Message----- > From: Singh, Jasvinder > Sent: Monday, September 18, 2017 4:30 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com>; dev@dpdk.org; > Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > Cc: Iremonger, Bernard <bernard.iremonger@intel.com>; stable@dpdk.org > Subject: RE: [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and > delete functions > > Hi Bernard, > > <snip> > > > --- a/lib/librte_table/rte_table_acl.c > > +++ b/lib/librte_table/rte_table_acl.c > > @@ -316,8 +316,7 @@ struct rte_table_acl { > > if (status == 0) { > > *key_found = 1; > > *entry_ptr = &acl->memory[i * acl->entry_size]; > > - memcpy(*entry_ptr, entry, acl->entry_size); > > - > > + memcpy(entry, *entry_ptr, acl->entry_size); > > return 0; > > } > > } > > In this case, table entry which is being added already presents in the table. > So, first the pointer to that entry from the memory[] that stores the pipeline > table data(struct rte_pipeline_table_entry) associated with key is retrieved > and the changes (action and metadara) are stored in the internal table > pointed by action_table. So, above fix doesn't seem correct. 
> > > @@ -353,8 +352,8 @@ struct rte_table_acl { > > rte_acl_free(acl->ctx); > > acl->ctx = ctx; > > *key_found = 0; > > - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; > > - memcpy(*entry_ptr, entry, acl->entry_size); > > + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; > > + memcpy(entry, *entry_ptr, acl->entry_size); > > > > return 0; > > } > > @@ -435,7 +434,7 @@ struct rte_table_acl { > > acl->ctx = ctx; > > *key_found = 1; > > if (entry != NULL) > > - memcpy(entry, &acl->memory[pos * acl->entry_size], > > + memcpy(entry, &acl->acl_rule_memory[pos * acl- > > >entry_size], > > acl->entry_size); > > > Above fixes also seems not correct. As per documentation, *entry_ptr is > intended to store the handle to the pipeline table entry containing the data > associated with the current key instead of pointing to memory used to store > the acl rules, etc. Please refer rte_table_acl_create() where memory is > initialized and organized to stores different types of internal tables (pointed > by action_table, acl_rule_list and acl_rule_memory). NACK Fully agree with Jasvinder. Existing code is correct, proposed code changes are wrong. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and delete functions 2017-09-20 12:21 ` Dumitrescu, Cristian @ 2017-09-29 8:25 ` Iremonger, Bernard 0 siblings, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-09-29 8:25 UTC (permalink / raw) To: Dumitrescu, Cristian, Singh, Jasvinder, dev, Yigit, Ferruh, Ananyev, Konstantin, adrien.mazarguil Cc: stable Hi Cristian, > -----Original Message----- > From: Dumitrescu, Cristian > Sent: Wednesday, September 20, 2017 1:21 PM > To: Singh, Jasvinder <jasvinder.singh@intel.com>; Iremonger, Bernard > <bernard.iremonger@intel.com>; dev@dpdk.org; Yigit, Ferruh > <ferruh.yigit@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; adrien.mazarguil@6wind.com > Cc: Iremonger, Bernard <bernard.iremonger@intel.com>; stable@dpdk.org > Subject: RE: [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and > delete functions > > > > > -----Original Message----- > > From: Singh, Jasvinder > > Sent: Monday, September 18, 2017 4:30 PM > > To: Iremonger, Bernard <bernard.iremonger@intel.com>; dev@dpdk.org; > > Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, Konstantin > > <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > > Cc: Iremonger, Bernard <bernard.iremonger@intel.com>; > stable@dpdk.org > > Subject: RE: [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add > > and delete functions > > > > Hi Bernard, > > > > <snip> > > > > > --- a/lib/librte_table/rte_table_acl.c > > > +++ b/lib/librte_table/rte_table_acl.c > > > @@ -316,8 +316,7 @@ struct rte_table_acl { > > > if (status == 0) { > > > *key_found = 1; > > > *entry_ptr = &acl->memory[i * acl->entry_size]; > > > - memcpy(*entry_ptr, entry, acl->entry_size); > > > - > > > + memcpy(entry, *entry_ptr, acl->entry_size); > > > return 0; > > > } > > > } > > > > In this case, table entry which is being added already presents in the table. 
> > So, first the pointer to that entry from the memory[] that stores the > > pipeline table data(struct rte_pipeline_table_entry) associated with > > key is retrieved and the changes (action and metadara) are stored in > > the internal table pointed by action_table. So, above fix doesn't seem > correct. > > > > > @@ -353,8 +352,8 @@ struct rte_table_acl { > > > rte_acl_free(acl->ctx); > > > acl->ctx = ctx; > > > *key_found = 0; > > > - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; > > > - memcpy(*entry_ptr, entry, acl->entry_size); > > > + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; > > > + memcpy(entry, *entry_ptr, acl->entry_size); > > > > > > return 0; > > > } > > > @@ -435,7 +434,7 @@ struct rte_table_acl { > > > acl->ctx = ctx; > > > *key_found = 1; > > > if (entry != NULL) > > > - memcpy(entry, &acl->memory[pos * acl->entry_size], > > > + memcpy(entry, &acl->acl_rule_memory[pos * acl- > > > >entry_size], > > > acl->entry_size); > > > > > > Above fixes also seems not correct. As per documentation, *entry_ptr > > is intended to store the handle to the pipeline table entry containing > > the data associated with the current key instead of pointing to memory > > used to store the acl rules, etc. Please refer rte_table_acl_create() > > where memory is initialized and organized to stores different types of > > internal tables (pointed by action_table, acl_rule_list and > acl_rule_memory). > > NACK > > Fully agree with Jasvinder. > > Existing code is correct, proposed code changes are wrong. I will drop this patch and send a v6 patchset. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v5 2/6] librte_table: fix acl lookup function 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 0/6] " Bernard Iremonger 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 1/6] librte_table: fix acl entry add and delete functions Bernard Iremonger @ 2017-09-07 16:43 ` Bernard Iremonger 2017-09-20 12:24 ` Dumitrescu, Cristian 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 3/6] librte_flow_classify: add librte_flow_classify library Bernard Iremonger ` (3 subsequent siblings) 6 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-09-07 16:43 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_lookup() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index e84b437..258916d 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -794,7 +794,7 @@ struct rte_table_acl { if (action_table_pos != 0) { pkts_out_mask |= pkt_mask; entries[pkt_pos] = (void *) - &acl->memory[action_table_pos * + &acl->acl_rule_memory[action_table_pos * acl->entry_size]; rte_prefetch0(entries[pkt_pos]); } -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/6] librte_table: fix acl lookup function 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 2/6] librte_table: fix acl lookup function Bernard Iremonger @ 2017-09-20 12:24 ` Dumitrescu, Cristian 2017-09-29 8:27 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Dumitrescu, Cristian @ 2017-09-20 12:24 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, adrien.mazarguil Cc: stable > -----Original Message----- > From: Iremonger, Bernard > Sent: Thursday, September 7, 2017 5:43 PM > To: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > Cc: Iremonger, Bernard <bernard.iremonger@intel.com>; stable@dpdk.org > Subject: [PATCH v5 2/6] librte_table: fix acl lookup function > > The rte_table_acl_lookup() function was returning data from acl_memory > instead of acl_rule_memory. > > Fixes: 166923eb2f78 ("table: ACL") > Cc: stable@dpdk.org > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- > lib/librte_table/rte_table_acl.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c > index e84b437..258916d 100644 > --- a/lib/librte_table/rte_table_acl.c > +++ b/lib/librte_table/rte_table_acl.c > @@ -794,7 +794,7 @@ struct rte_table_acl { > if (action_table_pos != 0) { > pkts_out_mask |= pkt_mask; > entries[pkt_pos] = (void *) > - &acl->memory[action_table_pos * > + &acl->acl_rule_memory[action_table_pos * > acl->entry_size]; > rte_prefetch0(entries[pkt_pos]); > } > -- > 1.9.1 NACK Existing code is correct, proposed code changes are wrong (for same reasons described in patch 1 of this patch set). ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/6] librte_table: fix acl lookup function 2017-09-20 12:24 ` Dumitrescu, Cristian @ 2017-09-29 8:27 ` Iremonger, Bernard 0 siblings, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-09-29 8:27 UTC (permalink / raw) To: Dumitrescu, Cristian, dev, Yigit, Ferruh, Ananyev, Konstantin, adrien.mazarguil Cc: stable Hi Cristian, > -----Original Message----- > From: Dumitrescu, Cristian > Sent: Wednesday, September 20, 2017 1:24 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com>; dev@dpdk.org; > Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; adrien.mazarguil@6wind.com > Cc: stable@dpdk.org > Subject: RE: [PATCH v5 2/6] librte_table: fix acl lookup function > > > > > -----Original Message----- > > From: Iremonger, Bernard > > Sent: Thursday, September 7, 2017 5:43 PM > > To: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > > Cc: Iremonger, Bernard <bernard.iremonger@intel.com>; > stable@dpdk.org > > Subject: [PATCH v5 2/6] librte_table: fix acl lookup function > > > > The rte_table_acl_lookup() function was returning data from acl_memory > > instead of acl_rule_memory. 
> > > > Fixes: 166923eb2f78 ("table: ACL") > > Cc: stable@dpdk.org > > > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > > --- > > lib/librte_table/rte_table_acl.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/lib/librte_table/rte_table_acl.c > > b/lib/librte_table/rte_table_acl.c > > index e84b437..258916d 100644 > > --- a/lib/librte_table/rte_table_acl.c > > +++ b/lib/librte_table/rte_table_acl.c > > @@ -794,7 +794,7 @@ struct rte_table_acl { > > if (action_table_pos != 0) { > > pkts_out_mask |= pkt_mask; > > entries[pkt_pos] = (void *) > > - &acl->memory[action_table_pos * > > + &acl->acl_rule_memory[action_table_pos * > > acl->entry_size]; > > rte_prefetch0(entries[pkt_pos]); > > } > > -- > > 1.9.1 > > NACK > > Existing code is correct, proposed code changes are wrong (for same reasons > described in patch 1 of this patch set). I will drop this patch and send a v6 patch set. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v5 3/6] librte_flow_classify: add librte_flow_classify library 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger ` (2 preceding siblings ...) 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 2/6] librte_table: fix acl lookup function Bernard Iremonger @ 2017-09-07 16:43 ` Bernard Iremonger 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 4/6] examples/flow_classify: flow classify sample application Bernard Iremonger ` (2 subsequent siblings) 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-07 16:43 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following library APIs are implemented: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query The following librte_table ACL APIs are used: f_create to create an ACL table. f_add to add an ACL rule to the table. f_del to delete an ACL rule from the table. f_lookup to match packets with the ACL rules. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 459 +++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 ++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- 12 files changed, 1360 insertions(+), 1 deletion(-) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/config/common_base b/config/common_base index 5e97a08..e378e0a 100644 --- a/config/common_base +++ b/config/common_base @@ -657,6 +657,12 @@ CONFIG_RTE_LIBRTE_GRO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 19e0d4f..a2fa281 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -105,6 +105,7 @@ The public API headers are grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 823554f..4e43a66 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf 
@@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_hash \ lib/librte_ip_frag \ diff --git a/lib/Makefile b/lib/Makefile index 86caba1..21fc3b0 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -82,6 +82,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index ec8dba7..f975bde 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -87,6 +87,7 @@ struct rte_logs { #define RTE_LOGTYPE_CRYPTODEV 17 /**< Log related to cryptodev. */ #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ +#define RTE_LOGTYPE_CLASSIFY 20 /**< Log related to flow classify. */ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..595e08c --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,459 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct ipv4_5tuple_data { + uint16_t priority; /**< flow API uses priority 0 to 8, 0 is highest */ + uint32_t userdata; /**< value returned for match */ + uint8_t tcp_flags; /**< tcp_flags only meaningful for TCP protocol */ +}; + +struct rte_flow_classify { + enum rte_flow_classify_type type; /**< classify type */ + struct rte_flow_action action; /**< action when match found */ + struct ipv4_5tuple_data flow_extra_data; /**< extra rule data */ + struct rte_table_acl_rule_add_params key_add; /**< add ACL rule key */ + struct rte_table_acl_rule_delete_params + key_del; /**< delete ACL rule key */ + int key_found; /**< ACL rule key found in table */ + void *entry; /**< pointer to buffer to hold ACL rule key */ + void *entry_ptr; /**< handle to the table entry for the ACL rule key */ +}; + +/* number of categories in an ACL context */ +#define FLOW_CLASSIFY_NUM_CATEGORY 1 + +/* number of packets in a burst */ +#define MAX_PKT_BURST 32 + +struct mbuf_search { + struct rte_mbuf *m_ipv4[MAX_PKT_BURST]; + uint32_t res_ipv4[MAX_PKT_BURST]; + int num_ipv4; +}; + +int +rte_flow_classify_validate(void *table_handle,
+ const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + (void) table_handle; + + if (!error) + return -EINVAL; + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + free(items); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + 
printf("ipv4_key_add: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static struct rte_flow_classify * +allocate_5tuple(void) +{ + struct rte_flow_classify *flow_classify; + + flow_classify = malloc(sizeof(struct rte_flow_classify)); + if (!flow_classify) + return flow_classify; + + memset(flow_classify, 0, sizeof(struct rte_flow_classify)); + + flow_classify->type = 
RTE_FLOW_CLASSIFY_TYPE_5TUPLE; + memcpy(&flow_classify->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + flow_classify->flow_extra_data.priority = ntuple_filter.priority; + flow_classify->flow_extra_data.tcp_flags = ntuple_filter.tcp_flags; + + /* key add values */ + flow_classify->key_add.priority = ntuple_filter.priority; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + + flow_classify->key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + flow_classify->key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + + flow_classify->key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + flow_classify->key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_add(&flow_classify->key_add); +#endif + + /* key delete values */ + memcpy(&flow_classify->key_del.field_value[PROTO_FIELD_IPV4], + &flow_classify->key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_delete(&flow_classify->key_del); +#endif + return flow_classify; +} + +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct 
rte_flow_classify *flow_classify; + struct rte_acl_rule *acl_rule; + int ret; + + if (!error) + return NULL; + + if (!table_handle) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "NULL table_handle."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_validate(table_handle, attr, pattern, + actions, error); + if (ret < 0) + return NULL; + + flow_classify = allocate_5tuple(); + if (!flow_classify) + return NULL; + + flow_classify->entry = malloc(entry_size); + if (!flow_classify->entry) { + free(flow_classify); + flow_classify = NULL; + return NULL; + } + + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify->key_add, + flow_classify->entry, &flow_classify->key_found, + &flow_classify->entry_ptr); + if (ret) { + free(flow_classify->entry); + free(flow_classify); + flow_classify = NULL; + return NULL; + } + acl_rule = flow_classify->entry; + flow_classify->flow_extra_data.userdata = acl_rule->data.userdata; + + return flow_classify; +} + +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error) +{ + int ret; + int key_found; + + if (!error) + return -EINVAL; + + if (!flow_classify || !table_handle) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + ret = rte_table_acl_ops.f_delete(table_handle, + &flow_classify->key_del, &key_found, + flow_classify->entry); + if ((ret == 0) && key_found) { + free(flow_classify->entry); + free(flow_classify); + } else + ret = -1; + return 
ret; +} + +static int +flow_match(void *table, struct rte_mbuf **pkts_in, const uint16_t nb_pkts, + uint64_t *count, uint32_t userdata) +{ + int ret = -1; + int i; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + struct rte_acl_rule *entries[RTE_PORT_IN_BURST_SIZE_MAX]; + + if (nb_pkts) { + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = rte_table_acl_ops.f_lookup(table, pkts_in, + pkts_mask, &lookup_hit_mask, (void **)entries); + if (!ret) { + for (i = 0; i < nb_pkts; i++) { + if ((lookup_hit_mask & (1ULL << i)) && + entries[i]->data.userdata == userdata) + (*count)++; /* match found */ + } + if (*count == 0) + ret = -1; + } else + ret = -1; + } + return ret; +} + +static int +action_apply(const struct rte_flow_classify *flow_classify, + struct rte_flow_classify_stats *stats, uint64_t count) +{ + struct rte_flow_classify_5tuple_stats *ntuple_stats; + + switch (flow_classify->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + ntuple_stats = + (struct rte_flow_classify_5tuple_stats *)stats->stats; + ntuple_stats->counter1 = count; + stats->used_space = 1; + break; + default: + return -ENOTSUP; + } + + return 0; +} + +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error) +{ + uint64_t count = 0; + int ret = -EINVAL; + + if (!error) + return ret; + + if (!table_handle || !flow_classify || !pkts || !stats) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + if ((stats->available_space == 0) || (nb_pkts == 0)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + ret = flow_match(table_handle, pkts, nb_pkts, &count, + flow_classify->flow_extra_data.userdata); + if (ret == 0) + ret = action_apply(flow_classify, stats, count); + + return ret; +} diff --git 
a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..2b200fb --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,207 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * The application should define the flow and the measurement criteria (action) + * for it. + * + * The library does not maintain any flow records itself; instead, flow + * information is returned to the upper layer only for the given packets. + * + * It is the application's responsibility to call rte_flow_classify_query() + * for a group of packets, just after receiving them or before transmitting + * them. The application should provide the flow type it is interested in and + * the measurement to apply to that flow via the rte_flow_classify_create() + * API, and should provide the rte_flow_classify object and storage for the + * results to the rte_flow_classify_query() API. + * + * Usage: + * - the application calls rte_flow_classify_create() to create an + * rte_flow_classify object. + * - the application calls rte_flow_classify_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * convert packet information to flow information with some measurements. + * - rte_flow_classify objects can be destroyed via rte_flow_classify_destroy() + * when they are no longer needed. + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +enum rte_flow_classify_type { + RTE_FLOW_CLASSIFY_TYPE_NONE, /**< no type */ + RTE_FLOW_CLASSIFY_TYPE_5TUPLE, /**< IPv4 5tuple type */ +}; + +struct rte_flow_classify; + +/** + * Flow stats + * + * For a single action an array of stats can be returned by the API; at most + * one stat can be returned per packet. + * + * Storage for the stats is provided by the application; the library is told + * the available space and returns the amount of space used. + * + * The stats type depends on which measurement (action) the application + * requested.
+ * + */ +struct rte_flow_classify_stats { + const unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +struct rte_flow_classify_5tuple_stats { + uint64_t counter1; /**< count of packets that match 5tuple pattern */ +}; + +/** + * Create a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] entry_size + * Size of ACL rule + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. + */ +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Validate a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Destroy a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Flow rule handle to destroy + * @param[out] error + * Perform verbose error reporting if not NULL. 
Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error); + +/** + * Get flow classification stats for given packets. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Pointer to Flow rule object + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] stats + * To store stats defined by action + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..e5a3885 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + 
uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find the parse filter function matching the pattern, if any */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do { \ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++; \ + item = pattern + index; \ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do { \ + act = actions + index; \ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++; \ + act = actions + index; \ + } \ + } while (0) + +/** + * Please be aware there is an 
assumption for all the parsers. + rte_flow_item is using big endian, rte_flow_attr and + rte_flow_action are using CPU order. + Because the pattern is used to describe the packets, + normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is an n-tuple rule, + * and fill in the n-tuple filter info along the way. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP, TCP or SCTP. + * The next not void item must be END. + * action: + * The first not void action should be QUEUE. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * Other members in mask and spec should be set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -rte_errno; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -rte_errno; + } + if (!attr) { + rte_flow_error_set(error, EINVAL, + 
RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -rte_errno; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -rte_errno; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only ingress is supported."); + return -rte_errno; + } + + /* not supported */ + if (attr->egress) { +
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Egress is not supported."); + return -rte_errno; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Invalid priority."); + return -rte_errno; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..e2c9ecf --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,10 @@ +DPDK_17.08 { + global: + + rte_flow_classify_create; + rte_flow_classify_destroy; + rte_flow_classify_query; + rte_flow_classify_validate; + + 
local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index c25fdd9..909ab95 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port @@ -84,7 +85,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile _LDLIBS-y += --whole-archive - _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash _LDLIBS-$(CONFIG_RTE_LIBRTE_VHOST) += -lrte_vhost _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs -- 1.9.1
* [dpdk-dev] [PATCH v5 4/6] examples/flow_classify: flow classify sample application 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger ` (3 preceding siblings ...) 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 3/6] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-09-07 16:43 ` Bernard Iremonger 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 5/6] test: add packet burst generator functions Bernard Iremonger 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 6/6] test: flow classify library unit tests Bernard Iremonger 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-07 16:43 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify API's: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 897 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 968 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..651fa8f --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,897 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define PROTO_TCP 6 +#define PROTO_UDP 17 +#define PROTO_SCTP 132 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +static void *table_acl; +uint32_t entry_size; +static int udp_num_classify; +static int tcp_num_classify; +static int sctp_num_classify; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static struct rte_flow_classify *udp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *tcp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *sctp_flow_classify[MAX_NUM_CLASSIFY]; + +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&tcp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +static 
struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. */ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. 
*/ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port, classifying the packets and writing to an output port. + */ +static __attribute__((noreturn)) void +lcore_main(void) +{ + struct rte_flow_error error; + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i; + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) { + printf("\n\nWARNING: port %u is on remote NUMA node " + "to polling thread.\n", port); + printf("Performance will not be optimal.\n"); + } + + printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n", + rte_lcore_id()); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + + /* Get burst of RX packets, from first port of pair.
*/ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (udp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + udp_flow_classify[i], + bufs, nb_rx, + &udp_classify_stats, &error); + if (ret) + printf( + "udp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "udp rule [%d] counter1=%lu used_space=%d\n\n", + i, udp_ntuple_stats.counter1, + udp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (tcp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + tcp_flow_classify[i], + bufs, nb_rx, + &tcp_classify_stats, &error); + if (ret) + printf( + "tcp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "tcp rule [%d] counter1=%lu used_space=%d\n\n", + i, tcp_ntuple_stats.counter1, + tcp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (sctp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + sctp_flow_classify[i], + bufs, nb_rx, + &sctp_classify_stats, &error); + if (ret) + printf( + "sctp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "sctp rule [%d] counter1=%lu used_space=%d\n\n", + i, sctp_ntuple_stats.counter1, + sctp_classify_stats.used_space); + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. */ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. 
+ * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +get_cb_field(char **in, uint32_t *fd, int base, unsigned long lim, + char dlm) +{ + unsigned long val; + char *end; + + errno = 0; + val = strtoul(*in, &end, base); + if (errno != 0 || end[0] != dlm || val > lim) + return -EINVAL; + *fd = (uint32_t)val; + *in = end + 1; + return 0; +} + +static int +parse_ipv4_net(char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint32_t a, b, c, d, m; + + if (get_cb_field(&in, &a, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &b, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &c, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &d, 0, UINT8_MAX, '/')) + return -EINVAL; + if (get_cb_field(&in, &m, 0, sizeof(uint32_t) * CHAR_BIT, 0)) + return -EINVAL; + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + uint32_t temp; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + if (get_cb_field(&in[CB_FLD_SRC_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + 
ntuple_filter->src_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_SRC_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_DST_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_DST_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, '/')) + return -EINVAL; + ntuple_filter->proto = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, 0)) + return -EINVAL; + ntuple_filter->proto_mask = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PRIORITY], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->priority = (uint16_t)temp; + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct rte_eth_ntuple_filter *ntuple_filter) +{ + int ret = 0; + struct rte_flow_error error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct 
rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify *flow_classify; + uint8_t ipv4_proto; + + /* set up parameters for validate and create */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case PROTO_UDP: + if (udp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: UDP classify rule capacity %d reached\n", + udp_num_classify); + ret = -1; + break; + } + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + 
break; + case PROTO_TCP: + if (tcp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: TCP classify rule capacity %d reached\n", + tcp_num_classify); + ret = -1; + break; + } + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case PROTO_SCTP: + if (sctp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: SCTP classify rule capacity %d reached\n", + sctp_num_classify); + ret = -1; + break; + } + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + break; + } + + if (ret == -1) + return 0; + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + actions[0] = count_action; + 
actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_ipv4_5tuple, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, + "flow classify validate failed ipv4_proto = %u\n", + ipv4_proto); + + flow_classify = rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_ipv4_5tuple, + actions, &error); + if (flow_classify == NULL) + rte_exit(EXIT_FAILURE, + "flow classify create failed ipv4_proto = %u\n", + ipv4_proto); + + switch (ipv4_proto) { + case PROTO_UDP: + udp_flow_classify[udp_num_classify] = flow_classify; + udp_num_classify++; + break; + case PROTO_TCP: + tcp_flow_classify[tcp_num_classify] = flow_classify; + tcp_num_classify++; + break; + case PROTO_SCTP: + sctp_flow_classify[sctp_num_classify] = flow_classify; + sctp_num_classify++; + break; + default: + break; + } + return 0; +} + +static int +add_rules(const char *rule_path) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s usage:\n", prgname); + printf("[EAL options] -- --"OPTION_RULE_IPV4"=FILE: "); + printf("specify the ipv4 rules file.\n"); + printf("Each rule occupies one line in the file.\n"); +} + +/* Parse the 
argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + {NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* + * The main function, which does initialization and calls the per-lcore + * functions. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. 
*/ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) + rte_exit(EXIT_FAILURE, "Failed to create table_acl\n"); + + /* read file of IPv4 5 tuple rules and initialise parameters + * for rte_flow_classify_validate and rte_flow_classify_create + */ + + if (add_rules(parm_config.rule_ipv4_name)) + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + + /* Call lcore_main on the master core only. 
*/ + lcore_main(); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..262763d --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0xffff 33 : 0xffff 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v5 5/6] test: add packet burst generator functions 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger ` (4 preceding siblings ...) 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 4/6] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-09-07 16:43 ` Bernard Iremonger 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 6/6] test: flow classify library unit tests Bernard Iremonger 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-07 16:43 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger add initialize_tcp_header function add initialize_sctp_header function add initialize_ipv4_header_proto function add generate_packet_burst_proto function Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/packet_burst_generator.c | 191 +++++++++++++++++++++++++++++++++++++ test/test/packet_burst_generator.h | 22 ++++- 2 files changed, 211 insertions(+), 2 deletions(-) diff --git a/test/test/packet_burst_generator.c b/test/test/packet_burst_generator.c index a93c3b5..8f4ddcc 100644 --- a/test/test/packet_burst_generator.c +++ b/test/test/packet_burst_generator.c @@ -134,6 +134,36 @@ return pkt_len; } +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct tcp_hdr)); + + memset(tcp_hdr, 0, sizeof(struct tcp_hdr)); + tcp_hdr->src_port = rte_cpu_to_be_16(src_port); + tcp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + + return pkt_len; +} + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len) +{ + uint16_t pkt_len; + + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct sctp_hdr)); + + sctp_hdr->src_port = rte_cpu_to_be_16(src_port); + sctp_hdr->dst_port = rte_cpu_to_be_16(dst_port); + sctp_hdr->tag = 0; + 
sctp_hdr->cksum = 0; /* No SCTP checksum. */ + + return pkt_len; +} uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -198,7 +228,53 @@ return pkt_len; } +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto) +{ + uint16_t pkt_len; + unaligned_uint16_t *ptr16; + uint32_t ip_cksum; + + /* + * Initialize IP header. + */ + pkt_len = (uint16_t) (pkt_data_len + sizeof(struct ipv4_hdr)); + + ip_hdr->version_ihl = IP_VHL_DEF; + ip_hdr->type_of_service = 0; + ip_hdr->fragment_offset = 0; + ip_hdr->time_to_live = IP_DEFTTL; + ip_hdr->next_proto_id = proto; + ip_hdr->packet_id = 0; + ip_hdr->total_length = rte_cpu_to_be_16(pkt_len); + ip_hdr->src_addr = rte_cpu_to_be_32(src_addr); + ip_hdr->dst_addr = rte_cpu_to_be_32(dst_addr); + + /* + * Compute IP header checksum. + */ + ptr16 = (unaligned_uint16_t *)ip_hdr; + ip_cksum = 0; + ip_cksum += ptr16[0]; ip_cksum += ptr16[1]; + ip_cksum += ptr16[2]; ip_cksum += ptr16[3]; + ip_cksum += ptr16[4]; + ip_cksum += ptr16[6]; ip_cksum += ptr16[7]; + ip_cksum += ptr16[8]; ip_cksum += ptr16[9]; + /* + * Reduce 32 bit checksum to 16 bits and complement it. 
+ */ + ip_cksum = ((ip_cksum & 0xFFFF0000) >> 16) + + (ip_cksum & 0x0000FFFF); + ip_cksum %= 65536; + ip_cksum = (~ip_cksum) & 0x0000FFFF; + if (ip_cksum == 0) + ip_cksum = 0xFFFF; + ip_hdr->hdr_checksum = (uint16_t) ip_cksum; + + return pkt_len; +} /* * The maximum number of segments per packet is used when creating @@ -283,3 +359,118 @@ return nb_pkt; } + +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs) +{ + int i, nb_pkt = 0; + size_t eth_hdr_size; + + struct rte_mbuf *pkt_seg; + struct rte_mbuf *pkt; + + for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) { + pkt = rte_pktmbuf_alloc(mp); + if (pkt == NULL) { +nomore_mbuf: + if (nb_pkt == 0) + return -1; + break; + } + + pkt->data_len = pkt_len; + pkt_seg = pkt; + for (i = 1; i < nb_pkt_segs; i++) { + pkt_seg->next = rte_pktmbuf_alloc(mp); + if (pkt_seg->next == NULL) { + pkt->nb_segs = i; + rte_pktmbuf_free(pkt); + goto nomore_mbuf; + } + pkt_seg = pkt_seg->next; + pkt_seg->data_len = pkt_len; + } + pkt_seg->next = NULL; /* Last segment of packet. */ + + /* + * Copy headers in first packet segment(s). 
+ */ + if (vlan_enabled) + eth_hdr_size = sizeof(struct ether_hdr) + + sizeof(struct vlan_hdr); + else + eth_hdr_size = sizeof(struct ether_hdr); + + copy_buf_to_pkt(eth_hdr, eth_hdr_size, pkt, 0); + + if (ipv4) { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv4_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv4_hdr)); + break; + default: + break; + } + } else { + copy_buf_to_pkt(ip_hdr, sizeof(struct ipv6_hdr), pkt, + eth_hdr_size); + switch (proto) { + case IPPROTO_UDP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct udp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_TCP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct tcp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + case IPPROTO_SCTP: + copy_buf_to_pkt(proto_hdr, + sizeof(struct sctp_hdr), pkt, + eth_hdr_size + sizeof(struct ipv6_hdr)); + break; + default: + break; + } + } + + /* + * Complete first mbuf of packet and append it to the + * burst of packets to be transmitted. 
+ */ + pkt->nb_segs = nb_pkt_segs; + pkt->pkt_len = pkt_len; + pkt->l2_len = eth_hdr_size; + + if (ipv4) { + pkt->vlan_tci = ETHER_TYPE_IPv4; + pkt->l3_len = sizeof(struct ipv4_hdr); + } else { + pkt->vlan_tci = ETHER_TYPE_IPv6; + pkt->l3_len = sizeof(struct ipv6_hdr); + } + + pkts_burst[nb_pkt] = pkt; + } + + return nb_pkt; +} diff --git a/test/test/packet_burst_generator.h b/test/test/packet_burst_generator.h index edc1044..3315bfa 100644 --- a/test/test/packet_burst_generator.h +++ b/test/test/packet_burst_generator.h @@ -43,7 +43,8 @@ #include <rte_arp.h> #include <rte_ip.h> #include <rte_udp.h> - +#include <rte_tcp.h> +#include <rte_sctp.h> #define IPV4_ADDR(a, b, c, d)(((a & 0xff) << 24) | ((b & 0xff) << 16) | \ ((c & 0xff) << 8) | (d & 0xff)) @@ -65,6 +66,13 @@ initialize_udp_header(struct udp_hdr *udp_hdr, uint16_t src_port, uint16_t dst_port, uint16_t pkt_data_len); +uint16_t +initialize_tcp_header(struct tcp_hdr *tcp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); + +uint16_t +initialize_sctp_header(struct sctp_hdr *sctp_hdr, uint16_t src_port, + uint16_t dst_port, uint16_t pkt_data_len); uint16_t initialize_ipv6_header(struct ipv6_hdr *ip_hdr, uint8_t *src_addr, @@ -74,15 +82,25 @@ initialize_ipv4_header(struct ipv4_hdr *ip_hdr, uint32_t src_addr, uint32_t dst_addr, uint16_t pkt_data_len); +uint16_t +initialize_ipv4_header_proto(struct ipv4_hdr *ip_hdr, uint32_t src_addr, + uint32_t dst_addr, uint16_t pkt_data_len, uint8_t proto); + int generate_packet_burst(struct rte_mempool *mp, struct rte_mbuf **pkts_burst, struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, uint8_t ipv4, struct udp_hdr *udp_hdr, int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); +int +generate_packet_burst_proto(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, + struct ether_hdr *eth_hdr, uint8_t vlan_enabled, void *ip_hdr, + uint8_t ipv4, uint8_t proto, void *proto_hdr, + int nb_pkt_per_burst, uint8_t pkt_len, uint8_t nb_pkt_segs); + 
#ifdef __cplusplus } #endif - #endif /* PACKET_BURST_GENERATOR_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v5 6/6] test: flow classify library unit tests 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger ` (5 preceding siblings ...) 2017-09-07 16:43 ` [dpdk-dev] [PATCH v5 5/6] test: add packet burst generator functions Bernard Iremonger @ 2017-09-07 16:43 ` Bernard Iremonger 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-07 16:43 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by librte_flow_classify APIs. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify APIs: add bad parameter tests add bad pattern tests add bad action tests add good parameter tests Initialise ipv4 udp traffic for use by the udp test for rte_flow_classify_query. Initialise ipv4 tcp traffic for use by the tcp test for rte_flow_classify_query. Initialise ipv4 sctp traffic for use by the sctp test for rte_flow_classify_query. 
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 698 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 240 ++++++++++++++ 3 files changed, 939 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index 42d9a49..073e1ed 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -106,6 +106,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..e7fbe73 --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,698 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +static void *table_acl; +static uint32_t entry_size; + +/* + * test functions by passing invalid or + * non-workable parameters. 
+ */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_classify *classify; + int ret; + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, NULL); + if (classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, &error); + if (classify) { + printf("Line %i: flow_classify_create ", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + 
attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item_bad; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[1] = 
ipv4_udp_item_1; + pattern[2] = udp_item_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern[2] = udp_item_1; + pattern[3] = end_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (!ret) { + printf("Line %i: 
flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf("should have failed!\n"); + return -1; + } + return 0; +} + +static int +init_ipv4_udp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 UDP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_tcp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct tcp_hdr pkt_tcp_hdr; + uint32_t src_addr = IPV4_ADDR(1, 2, 3, 4); + uint32_t dst_addr = IPV4_ADDR(5, 6, 7, 
8); + uint16_t src_port = 16; + uint16_t dst_port = 17; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 TCP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_TCP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_tcp_header(&pkt_tcp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + TCP pktlen %u\n\n", pktlen); + + return generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_TCP, + &pkt_tcp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_ipv4_sctp_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct sctp_hdr pkt_sctp_hdr; + uint32_t src_addr = IPV4_ADDR(11, 12, 13, 14); + uint32_t dst_addr = IPV4_ADDR(15, 16, 17, 18); + uint16_t src_port = 10; + uint16_t dst_port = 11; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + printf("Set up IPv4 SCTP traffic\n"); + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header_proto(&pkt_ipv4_hdr, src_addr, + dst_addr, pktlen, IPPROTO_SCTP); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_sctp_header(&pkt_sctp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + SCTP pktlen %u\n\n", pktlen); + + return 
generate_packet_burst_proto(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, IPPROTO_SCTP, + &pkt_sctp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_udp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_udp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_udp_item_1; + pattern[2] = udp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, 
entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &udp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_tcp(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_tcp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_tcp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_tcp_item_1; + pattern[2] = tcp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &tcp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + 
return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_query_sctp(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_ipv4_sctp_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_ipv4_sctp_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern[0] = eth_item; + pattern[1] = ipv4_sctp_item_1; + pattern[2] = sctp_item_1; + pattern[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &sctp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + int socket_id = 0; + int ret; + + /* initialise ACL table params */ + 
table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) { + printf("Line %i: f_create has failed!\n", __LINE__); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + if (test_query_tcp() < 0) + return -1; + if (test_query_sctp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..95ddc94 --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,240 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP, TCP and SCTP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. 
*/ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* test UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_UDP, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +/* test TCP pattern: + * "eth / ipv4 src spec 1.2.3.4 src mask 255.255.255.00 
dst spec 5.6.7.8 + * dst mask 255.255.255.00 / tcp src is 16 dst is 17 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_TCP, 0, IPv4(1, 2, 3, 4), IPv4(5, 6, 7, 8)} +}; + +static struct rte_flow_item_tcp tcp_spec_1 = { + { 16, 17, 0, 0, 0, 0, 0, 0, 0} +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +/* test SCTP pattern: + * "eth / ipv4 src spec 11.12.13.14 src mask 255.255.255.00 dst spec 15.16.17.18 + * dst mask 255.255.255.00 / sctp src is 10 dst is 11 / end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, IPPROTO_SCTP, 0, IPv4(11, 12, 13, 14), + IPv4(15, 16, 17, 18)} +}; + +static struct rte_flow_item_sctp sctp_spec_1 = { + { 10, 11, 0, 0} +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &ipv4_mask_24}; + +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + + +/* test actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* test attributes */ +static struct rte_flow_attr attr; + +/* test error */ +static struct rte_flow_error error; + +/* test pattern */ +static struct rte_flow_item pattern[4]; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +/* flow classify data for 
TCP burst */ +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&tcp_ntuple_stats +}; + +/* flow classify data for SCTP burst */ +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v4 1/5] librte_table: fix acl entry add and delete functions 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger @ 2017-09-06 10:27 ` Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 2/5] librte_table: fix acl lookup function Bernard Iremonger ` (3 subsequent siblings) 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-06 10:27 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_entry_add() function was returning data from acl_memory instead of acl_rule_memory. It was also returning data from entry instead of entry_ptr. The rte_table_acl_entry_delete() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index 3c05e4a..e84b437 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -316,8 +316,7 @@ struct rte_table_acl { if (status == 0) { *key_found = 1; *entry_ptr = &acl->memory[i * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); - + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } } @@ -353,8 +352,8 @@ struct rte_table_acl { rte_acl_free(acl->ctx); acl->ctx = ctx; *key_found = 0; - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } @@ -435,7 +434,7 @@ struct rte_table_acl { acl->ctx = ctx; *key_found = 1; if (entry != NULL) - memcpy(entry, &acl->memory[pos * acl->entry_size], + memcpy(entry, 
&acl->acl_rule_memory[pos * acl->entry_size], acl->entry_size); return 0; -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v4 2/5] librte_table: fix acl lookup function 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 " Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 1/5] librte_table: fix acl entry add and delete functions Bernard Iremonger @ 2017-09-06 10:27 ` Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 3/5] librte_flow_classify: add librte_flow_classify library Bernard Iremonger ` (2 subsequent siblings) 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-06 10:27 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_lookup() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index e84b437..258916d 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -794,7 +794,7 @@ struct rte_table_acl { if (action_table_pos != 0) { pkts_out_mask |= pkt_mask; entries[pkt_pos] = (void *) - &acl->memory[action_table_pos * + &acl->acl_rule_memory[action_table_pos * acl->entry_size]; rte_prefetch0(entries[pkt_pos]); } -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v4 3/5] librte_flow_classify: add librte_flow_classify library 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger ` (2 preceding siblings ...) 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 2/5] librte_table: fix acl lookup function Bernard Iremonger @ 2017-09-06 10:27 ` Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 4/5] examples/flow_classify: flow classify sample application Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 5/5] test: flow classify library unit tests Bernard Iremonger 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-06 10:27 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following library APIs are implemented: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query The following librte_table ACL APIs are used: f_create to create an ACL table. f_add to add an ACL rule to the table. f_del to delete an ACL rule from the table. f_lookup to match packets with the ACL rules. The library supports counting of IPv4 five-tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 459 +++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 ++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- 12 files changed, 1360 insertions(+), 1 deletion(-) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/config/common_base b/config/common_base index 5e97a08..e378e0a 100644 --- a/config/common_base +++ b/config/common_base @@ -657,6 +657,12 @@ CONFIG_RTE_LIBRTE_GRO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 19e0d4f..a2fa281 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -105,6 +105,7 @@ The public API headers are grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 823554f..4e43a66 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf 
@@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_hash \ lib/librte_ip_frag \ diff --git a/lib/Makefile b/lib/Makefile index 86caba1..21fc3b0 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -82,6 +82,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index ec8dba7..f975bde 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -87,6 +87,7 @@ struct rte_logs { #define RTE_LOGTYPE_CRYPTODEV 17 /**< Log related to cryptodev. */ #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ +#define RTE_LOGTYPE_CLASSIFY 20 /**< Log related to flow classify. */ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..595e08c --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,459 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct ipv4_5tuple_data { + uint16_t priority; /**< flow API uses priority 0 to 8, 0 is highest */ + uint32_t userdata; /**< value returned for match */ + uint8_t tcp_flags; /**< tcp_flags only meaningful TCP protocol */ +}; + +struct rte_flow_classify { + enum rte_flow_classify_type type; /**< classify type */ + struct rte_flow_action action; /**< action when match found */ + struct ipv4_5tuple_data flow_extra_data; /** extra rule data */ + struct rte_table_acl_rule_add_params key_add; /**< add ACL rule key */ + struct rte_table_acl_rule_delete_params + key_del; /**< delete ACL rule key */ + int key_found; /**< ACL rule key found in table */ + void *entry; /**< pointer to buffer to hold ACL rule key */ + void *entry_ptr; /**< handle to the table entry for the ACL rule key */ +}; + +/* number of categories in an ACL context */ +#define FLOW_CLASSIFY_NUM_CATEGORY 1 + +/* number of packets in a burst */ +#define MAX_PKT_BURST 32 + +struct mbuf_search { + struct rte_mbuf *m_ipv4[MAX_PKT_BURST]; + uint32_t res_ipv4[MAX_PKT_BURST]; + int num_ipv4; +}; + +int +rte_flow_classify_validate(void *table_handle, 
+ const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + (void) table_handle; + + if (!error) + return -EINVAL; + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + free(items); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + 
printf("ipv4_key_add: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static struct rte_flow_classify * +allocate_5tuple(void) +{ + struct rte_flow_classify *flow_classify; + + flow_classify = malloc(sizeof(struct rte_flow_classify)); + if (!flow_classify) + return flow_classify; + + memset(flow_classify, 0, sizeof(struct rte_flow_classify)); + + flow_classify->type = 
RTE_FLOW_CLASSIFY_TYPE_5TUPLE; + memcpy(&flow_classify->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + flow_classify->flow_extra_data.priority = ntuple_filter.priority; + flow_classify->flow_extra_data.tcp_flags = ntuple_filter.tcp_flags; + + /* key add values */ + flow_classify->key_add.priority = ntuple_filter.priority; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + + flow_classify->key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + flow_classify->key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + + flow_classify->key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + flow_classify->key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_add(&flow_classify->key_add); +#endif + + /* key delete values */ + memcpy(&flow_classify->key_del.field_value[PROTO_FIELD_IPV4], + &flow_classify->key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_delete(&flow_classify->key_del); +#endif + return flow_classify; +} + +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct 
rte_flow_classify *flow_classify; + struct rte_acl_rule *acl_rule; + int ret; + + if (!error) + return NULL; + + if (!table_handle) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "NULL table_handle."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_validate(table_handle, attr, pattern, + actions, error); + if (ret < 0) + return NULL; + + flow_classify = allocate_5tuple(); + if (!flow_classify) + return NULL; + + flow_classify->entry = malloc(entry_size); + if (!flow_classify->entry) { + free(flow_classify); + flow_classify = NULL; + return NULL; + } + + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify->key_add, + flow_classify->entry, &flow_classify->key_found, + &flow_classify->entry_ptr); + if (ret) { + free(flow_classify->entry); + free(flow_classify); + flow_classify = NULL; + return NULL; + } + acl_rule = flow_classify->entry; + flow_classify->flow_extra_data.userdata = acl_rule->data.userdata; + + return flow_classify; +} + +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error) +{ + int ret; + int key_found; + + if (!error) + return -EINVAL; + + if (!flow_classify || !table_handle) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + ret = rte_table_acl_ops.f_delete(table_handle, + &flow_classify->key_del, &key_found, + flow_classify->entry); + if ((ret == 0) && key_found) { + free(flow_classify->entry); + free(flow_classify); + } else + ret = -1; + return 
ret; +} + +static int +flow_match(void *table, struct rte_mbuf **pkts_in, const uint16_t nb_pkts, + uint64_t *count, uint32_t userdata) +{ + int ret = -1; + int i; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + struct rte_acl_rule *entries[RTE_PORT_IN_BURST_SIZE_MAX]; + + if (nb_pkts) { + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = rte_table_acl_ops.f_lookup(table, pkts_in, + pkts_mask, &lookup_hit_mask, (void **)entries); + if (!ret) { + /* walk every packet in the burst; 1ULL is needed + * so the shift is valid for the 64 bit hit mask */ + for (i = 0; i < nb_pkts; i++) { + if ((lookup_hit_mask & (1ULL << i)) && + entries[i]->data.userdata == userdata) + (*count)++; /* match found */ + } + if (*count == 0) + ret = -1; + } else + ret = -1; + } + return ret; +} + +static int +action_apply(const struct rte_flow_classify *flow_classify, + struct rte_flow_classify_stats *stats, uint64_t count) +{ + struct rte_flow_classify_5tuple_stats *ntuple_stats; + + switch (flow_classify->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + ntuple_stats = + (struct rte_flow_classify_5tuple_stats *)stats->stats; + ntuple_stats->counter1 = count; + stats->used_space = 1; + break; + default: + return -ENOTSUP; + } + + return 0; +} + +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error) +{ + uint64_t count = 0; + int ret = -EINVAL; + + if (!error) + return ret; + + if (!table_handle || !flow_classify || !pkts || !stats) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + if ((stats->available_space == 0) || (nb_pkts == 0)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + ret = flow_match(table_handle, pkts, nb_pkts, &count, + flow_classify->flow_extra_data.userdata); + if (ret == 0) + ret = action_apply(flow_classify, stats, count); + + return ret; +} diff --git
a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..2b200fb --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,207 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * The application should define the flow and the measurement criteria + * (action) for it. + * + * The library doesn't maintain any flow records itself; instead, flow + * information is returned to the upper layer only for the given packets. + * + * It is the application's responsibility to call rte_flow_classify_query() + * for a group of packets, just after receiving them or before transmitting + * them. The application should provide the flow type it is interested in and + * the measurement to apply to that flow in the rte_flow_classify_create() + * API, and should provide the rte_flow_classify object and storage for the + * results in the rte_flow_classify_query() API. + * + * Usage: + * - the application calls rte_flow_classify_create() to create an + * rte_flow_classify object. + * - the application calls rte_flow_classify_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * convert packet information to flow information with some measurements. + * - rte_flow_classify objects can be destroyed when they are no longer + * needed via rte_flow_classify_destroy(). + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +enum rte_flow_classify_type { + RTE_FLOW_CLASSIFY_TYPE_NONE, /**< no type */ + RTE_FLOW_CLASSIFY_TYPE_5TUPLE, /**< IPv4 5-tuple type */ +}; + +struct rte_flow_classify; + +/** + * Flow stats + * + * For a single action an array of stats can be returned by the API; + * technically each packet can return at most one stat. + * + * Storage for the stats is provided by the application; the library should + * know the available space, and should return the amount of space used. + * + * The stats type depends on what measurement (action) the application + * requested.
+ * + */ +struct rte_flow_classify_stats { + const unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +struct rte_flow_classify_5tuple_stats { + uint64_t counter1; /**< count of packets that match the 5-tuple pattern */ +}; + +/** + * Create a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] entry_size + * Size of ACL rule + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. + */ +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Validate a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Destroy a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Flow rule handle to destroy + * @param[out] error + * Perform verbose error reporting if not NULL.
Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error); + +/** + * Get flow classification stats for given packets. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Pointer to Flow rule object + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] stats + * To store stats defined by action + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..e5a3885 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + 
uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do { \ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++; \ + item = pattern + index; \ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do { \ + act = actions + index; \ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++; \ + act = actions + index; \ + } \ + } while (0) + +/** + * Please be aware there's an
assumption for all the parsers. + rte_flow_item uses big endian byte order, while rte_flow_attr and + rte_flow_action use CPU order. + Because the pattern is used to describe the packets, + normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is an n-tuple rule, + * and fill in the n-tuple filter info as well. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP, TCP or SCTP. + * The next not void item must be END. + * action: + * The first not void action should be COUNT. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * other members in mask and spec should be set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -rte_errno; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -rte_errno; + } + if (!attr) { + rte_flow_error_set(error, EINVAL, + 
RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -rte_errno; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -rte_errno; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only support ingress."); + return -rte_errno; + } + + /* not supported */ + if (attr->egress) { + 
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Not support egress."); + return -rte_errno; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Error priority."); + return -rte_errno; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..e2c9ecf --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,10 @@ +DPDK_17.08 { + global: + + rte_flow_classify_create; + rte_flow_classify_destroy; + rte_flow_classify_query; + rte_flow_classify_validate; + + 
local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index c25fdd9..909ab95 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port @@ -84,7 +85,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile _LDLIBS-y += --whole-archive - _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash _LDLIBS-$(CONFIG_RTE_LIBRTE_VHOST) += -lrte_vhost _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v4 4/5] examples/flow_classify: flow classify sample application 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger ` (3 preceding siblings ...) 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 3/5] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-09-06 10:27 ` Bernard Iremonger 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 5/5] test: flow classify library unit tests Bernard Iremonger 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-06 10:27 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify API's: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 897 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 968 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..651fa8f --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,897 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define PROTO_TCP 6 +#define PROTO_UDP 17 +#define PROTO_SCTP 132 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +static void *table_acl; +uint32_t entry_size; +static int udp_num_classify; +static int tcp_num_classify; +static int sctp_num_classify; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static struct rte_flow_classify *udp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *tcp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *sctp_flow_classify[MAX_NUM_CLASSIFY]; + +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&tcp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +static 
struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. */ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. 
*/ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port, classifying the packets and writing to an output port. + */ +static __attribute__((noreturn)) void +lcore_main(void) +{ + struct rte_flow_error error; + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i; + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) { + printf("\n\n"); + printf("WARNING: port %u is on remote NUMA node\n", + port); + printf("to polling thread.\n"); + printf("Performance will not be optimal.\n"); + } + + printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n", + rte_lcore_id()); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + + /* Get burst of RX packets, from first port of pair. 
*/ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (udp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + udp_flow_classify[i], + bufs, nb_rx, + &udp_classify_stats, &error); + if (ret) + printf( + "udp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "udp rule [%d] counter1=%lu used_space=%d\n\n", + i, udp_ntuple_stats.counter1, + udp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (tcp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + tcp_flow_classify[i], + bufs, nb_rx, + &tcp_classify_stats, &error); + if (ret) + printf( + "tcp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "tcp rule [%d] counter1=%lu used_space=%d\n\n", + i, tcp_ntuple_stats.counter1, + tcp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (sctp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + sctp_flow_classify[i], + bufs, nb_rx, + &sctp_classify_stats, &error); + if (ret) + printf( + "sctp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "sctp rule [%d] counter1=%lu used_space=%d\n\n", + i, sctp_ntuple_stats.counter1, + sctp_classify_stats.used_space); + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. */ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. 
+ * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +get_cb_field(char **in, uint32_t *fd, int base, unsigned long lim, + char dlm) +{ + unsigned long val; + char *end; + + errno = 0; + val = strtoul(*in, &end, base); + if (errno != 0 || end[0] != dlm || val > lim) + return -EINVAL; + *fd = (uint32_t)val; + *in = end + 1; + return 0; +} + +static int +parse_ipv4_net(char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint32_t a, b, c, d, m; + + if (get_cb_field(&in, &a, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &b, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &c, 0, UINT8_MAX, '.')) + return -EINVAL; + if (get_cb_field(&in, &d, 0, UINT8_MAX, '/')) + return -EINVAL; + if (get_cb_field(&in, &m, 0, sizeof(uint32_t) * CHAR_BIT, 0)) + return -EINVAL; + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + uint32_t temp; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read destination address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + if (get_cb_field(&in[CB_FLD_SRC_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + 
ntuple_filter->src_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_SRC_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->src_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_DST_PORT], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port = (uint16_t)temp; + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + if (get_cb_field(&in[CB_FLD_DST_PORT_MASK], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->dst_port_mask = (uint16_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, '/')) + return -EINVAL; + ntuple_filter->proto = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PROTO], &temp, 0, UINT8_MAX, 0)) + return -EINVAL; + ntuple_filter->proto_mask = (uint8_t)temp; + + if (get_cb_field(&in[CB_FLD_PRIORITY], &temp, 0, UINT16_MAX, 0)) + return -EINVAL; + ntuple_filter->priority = (uint16_t)temp; + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct rte_eth_ntuple_filter *ntuple_filter) +{ + int ret = 0; + struct rte_flow_error error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct 
rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify *flow_classify; + uint8_t ipv4_proto; + + /* set up parameters for validate and create */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case PROTO_UDP: + if (udp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: UDP classify rule capacity %d reached\n", + udp_num_classify); + ret = -1; + break; + } + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + 
break; + case PROTO_TCP: + if (tcp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: TCP classify rule capacity %d reached\n", + tcp_num_classify); + ret = -1; + break; + } + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case PROTO_SCTP: + if (sctp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: SCTP classify rule capacity %d reached\n", + sctp_num_classify); + ret = -1; + break; + } + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + break; + } + + if (ret == -1) + return 0; + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + actions[0] = count_action; + 
actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_ipv4_5tuple, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, + "flow classify validate failed ipv4_proto = %u\n", + ipv4_proto); + + flow_classify = rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_ipv4_5tuple, + actions, &error); + if (flow_classify == NULL) + rte_exit(EXIT_FAILURE, + "flow classify create failed ipv4_proto = %u\n", + ipv4_proto); + + switch (ipv4_proto) { + case PROTO_UDP: + udp_flow_classify[udp_num_classify] = flow_classify; + udp_num_classify++; + break; + case PROTO_TCP: + tcp_flow_classify[tcp_num_classify] = flow_classify; + tcp_num_classify++; + break; + case PROTO_SCTP: + sctp_flow_classify[sctp_num_classify] = flow_classify; + sctp_num_classify++; + break; + default: + break; + } + return 0; +} + +static int +add_rules(const char *rule_path) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s usage:\n", prgname); + printf("[EAL options] -- --"OPTION_RULE_IPV4"=FILE: "); + printf("specify the ipv4 rules file.\n"); + printf("Each rule occupies one line in the file.\n"); +} + +/* Parse the 
argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + {NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + +/* + * The main function, which does initialization and calls the per-lcore + * functions. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. 
*/ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) + rte_exit(EXIT_FAILURE, "Failed to create table_acl\n"); + + /* read file of IPv4 5 tuple rules and initialise parameters + * for rte_flow_classify_validate and rte_flow_classify_create + */ + + if (add_rules(parm_config.rule_ipv4_name)) + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + + /* Call lcore_main on the master core only. 
*/ + lcore_main(); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..262763d --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0xffff 33 : 0xffff 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v4 5/5] test: flow classify library unit tests 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger ` (4 preceding siblings ...) 2017-09-06 10:27 ` [dpdk-dev] [PATCH v4 4/5] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-09-06 10:27 ` Bernard Iremonger 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-09-06 10:27 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by librte_flow_classify APIs. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify APIs: add bad parameter tests add bad pattern tests add bad action tests add good parameter tests Initialise IPv4 UDP traffic for use by the test for rte_flow_classify_query. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 493 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 186 ++++++++++++++++ 3 files changed, 680 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index 42d9a49..073e1ed 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -106,6 +106,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..0badf49 --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,493 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +static void *table_acl; +static uint32_t entry_size; + +/* + * test functions by passing invalid or + * non-workable parameters. + */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_classify *classify; + int ret; + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, NULL); + if (classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, &error); + if (classify) { + printf("Line %i: flow_classify_create ", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, &error); + if (!ret) { + printf("Line %i: 
flow_classify_destroy", __LINE__); + printf("with NULL param should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" with NULL param should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item_bad; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + 
if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line 
%i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf("should have failed!\n"); + return -1; + } + return 0; +} + +static int +init_udp_ipv4_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = 
initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_udp_ipv4_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_udp_ipv4_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = 
rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &udp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy", __LINE__); + printf(" should not have failed!\n"); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + int socket_id = 0; + int ret; + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) { + printf("Line %i: f_create has failed!\n", __LINE__); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + + return 0; +} + 
+REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..af04dd3 --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,186 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* first sample UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 17, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item pattern_udp_1[4]; + +/* sample actions: + * "actions 
count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* sample error */ +static struct rte_flow_error error; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v3 1/5] librte_table: fix acl entry add and delete functions 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " Bernard Iremonger 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger @ 2017-08-31 14:54 ` Bernard Iremonger 2017-08-31 15:09 ` Pavan Nikhilesh Bhagavatula 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 2/5] librte_table: fix acl lookup function Bernard Iremonger ` (3 subsequent siblings) 5 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-08-31 14:54 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_entry_add() function was returning data from acl_memory instead of acl_rule_memory. It was also returning data from entry instead of entry_ptr. The rte_table_acl_entry_delete() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index 3c05e4a..e84b437 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -316,8 +316,7 @@ struct rte_table_acl { if (status == 0) { *key_found = 1; *entry_ptr = &acl->memory[i * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); - + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } } @@ -353,8 +352,8 @@ struct rte_table_acl { rte_acl_free(acl->ctx); acl->ctx = ctx; *key_found = 0; - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } @@ -435,7 +434,7 @@ struct rte_table_acl { acl->ctx = ctx; *key_found = 1; if (entry != NULL) - memcpy(entry, 
&acl->memory[pos * acl->entry_size], + memcpy(entry, &acl->acl_rule_memory[pos * acl->entry_size], acl->entry_size); return 0; -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/5] librte_table: fix acl entry add and delete functions 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 1/5] librte_table: fix acl entry add and delete functions Bernard Iremonger @ 2017-08-31 15:09 ` Pavan Nikhilesh Bhagavatula 0 siblings, 0 replies; 145+ messages in thread From: Pavan Nikhilesh Bhagavatula @ 2017-08-31 15:09 UTC (permalink / raw) To: Bernard Iremonger; +Cc: dev On Thu, Aug 31, 2017 at 03:54:43PM +0100, Bernard Iremonger wrote: Hi Bernard, Few suggestions inline. > The rte_table_acl_entry_add() function was returning data from > acl_memory instead of acl_rule_memory. It was also returning data > from entry instead of entry_ptr. > > The rte_table_acl_entry_delete() function was returning data from > acl_memory instead of acl_rule_memory. > > Fixes: 166923eb2f78 ("table: ACL") > Cc: stable@dpdk.org > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- > lib/librte_table/rte_table_acl.c | 9 ++++----- > 1 file changed, 4 insertions(+), 5 deletions(-) > > diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c > index 3c05e4a..e84b437 100644 > --- a/lib/librte_table/rte_table_acl.c > +++ b/lib/librte_table/rte_table_acl.c > @@ -316,8 +316,7 @@ struct rte_table_acl { > if (status == 0) { > *key_found = 1; > *entry_ptr = &acl->memory[i * acl->entry_size]; > - memcpy(*entry_ptr, entry, acl->entry_size); > - > + memcpy(entry, *entry_ptr, acl->entry_size); > return 0; > } > } > @@ -353,8 +352,8 @@ struct rte_table_acl { > rte_acl_free(acl->ctx); > acl->ctx = ctx; > *key_found = 0; > - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; > - memcpy(*entry_ptr, entry, acl->entry_size); > + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; > + memcpy(entry, *entry_ptr, acl->entry_size); > Why not use rte_memcpy instead?. 
> return 0; > } > @@ -435,7 +434,7 @@ struct rte_table_acl { > acl->ctx = ctx; > *key_found = 1; > if (entry != NULL) > - memcpy(entry, &acl->memory[pos * acl->entry_size], > + memcpy(entry, &acl->acl_rule_memory[pos * acl->entry_size], > acl->entry_size); > > return 0; > -- > 1.9.1 > -Pavan ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v3 2/5] librte_table: fix acl lookup function 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " Bernard Iremonger 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 0/5] " Bernard Iremonger 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 1/5] librte_table: fix acl entry add and delete functions Bernard Iremonger @ 2017-08-31 14:54 ` Bernard Iremonger 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 3/5] librte_flow_classify: add librte_flow_classify library Bernard Iremonger ` (2 subsequent siblings) 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-31 14:54 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_lookup() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index e84b437..258916d 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -794,7 +794,7 @@ struct rte_table_acl { if (action_table_pos != 0) { pkts_out_mask |= pkt_mask; entries[pkt_pos] = (void *) - &acl->memory[action_table_pos * + &acl->acl_rule_memory[action_table_pos * acl->entry_size]; rte_prefetch0(entries[pkt_pos]); } -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v3 3/5] librte_flow_classify: add librte_flow_classify library 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " Bernard Iremonger ` (2 preceding siblings ...) 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 2/5] librte_table: fix acl lookup function Bernard Iremonger @ 2017-08-31 14:54 ` Bernard Iremonger 2017-08-31 15:18 ` Pavan Nikhilesh Bhagavatula 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 4/5] examples/flow_classify: flow classify sample application Bernard Iremonger 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 5/5] test: flow classify library unit tests Bernard Iremonger 5 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-08-31 14:54 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following library APIs are implemented: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query The following librte_table ACL APIs are used: f_create to create an ACL table. f_add to add an ACL rule to the table. f_del to delete an ACL rule from the table. f_lookup to match packets with the ACL rules. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 459 +++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 ++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- 12 files changed, 1360 insertions(+), 1 deletion(-) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/config/common_base b/config/common_base index 5e97a08..e378e0a 100644 --- a/config/common_base +++ b/config/common_base @@ -657,6 +657,12 @@ CONFIG_RTE_LIBRTE_GRO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 19e0d4f..a2fa281 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -105,6 +105,7 @@ The public API headers are grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 823554f..4e43a66 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf 
@@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_hash \ lib/librte_ip_frag \ diff --git a/lib/Makefile b/lib/Makefile index 86caba1..21fc3b0 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -82,6 +82,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index ec8dba7..f975bde 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -87,6 +87,7 @@ struct rte_logs { #define RTE_LOGTYPE_CRYPTODEV 17 /**< Log related to cryptodev. */ #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ +#define RTE_LOGTYPE_CLASSIFY 20 /**< Log related to flow classify. */ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..595e08c --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,459 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct ipv4_5tuple_data { + uint16_t priority; /**< flow API uses priority 0 to 8, 0 is highest */ + uint32_t userdata; /**< value returned for match */ + uint8_t tcp_flags; /**< tcp_flags only meaningful TCP protocol */ +}; + +struct rte_flow_classify { + enum rte_flow_classify_type type; /**< classify type */ + struct rte_flow_action action; /**< action when match found */ + struct ipv4_5tuple_data flow_extra_data; /** extra rule data */ + struct rte_table_acl_rule_add_params key_add; /**< add ACL rule key */ + struct rte_table_acl_rule_delete_params + key_del; /**< delete ACL rule key */ + int key_found; /**< ACL rule key found in table */ + void *entry; /**< pointer to buffer to hold ACL rule key */ + void *entry_ptr; /**< handle to the table entry for the ACL rule key */ +}; + +/* number of categories in an ACL context */ +#define FLOW_CLASSIFY_NUM_CATEGORY 1 + +/* number of packets in a burst */ +#define MAX_PKT_BURST 32 + +struct mbuf_search { + struct rte_mbuf *m_ipv4[MAX_PKT_BURST]; + uint32_t res_ipv4[MAX_PKT_BURST]; + int num_ipv4; +}; + +int +rte_flow_classify_validate(void *table_handle, 
+ const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + (void) table_handle; + + if (!error) + return -EINVAL; + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + 
printf("ipv4_key_add: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static struct rte_flow_classify * +allocate_5tuple(void) +{ + struct rte_flow_classify *flow_classify; + + flow_classify = malloc(sizeof(struct rte_flow_classify)); + if (!flow_classify) + return flow_classify; + + memset(flow_classify, 0, sizeof(struct rte_flow_classify)); + + flow_classify->type = 
RTE_FLOW_CLASSIFY_TYPE_5TUPLE; + memcpy(&flow_classify->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + flow_classify->flow_extra_data.priority = ntuple_filter.priority; + flow_classify->flow_extra_data.tcp_flags = ntuple_filter.tcp_flags; + + /* key add values */ + flow_classify->key_add.priority = ntuple_filter.priority; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + + flow_classify->key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + flow_classify->key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + + flow_classify->key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + flow_classify->key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_add(&flow_classify->key_add); +#endif + + /* key delete values */ + memcpy(&flow_classify->key_del.field_value[PROTO_FIELD_IPV4], + &flow_classify->key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_delete(&flow_classify->key_del); +#endif + return flow_classify; +} + +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct 
rte_flow_classify *flow_classify; + struct rte_acl_rule *acl_rule; + int ret; + + if (!error) + return NULL; + + if (!table_handle) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "NULL table_handle."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_validate(table_handle, attr, pattern, + actions, error); + if (ret < 0) + return NULL; + + flow_classify = allocate_5tuple(); + if (!flow_classify) + return NULL; + + flow_classify->entry = malloc(entry_size); + if (!flow_classify->entry) { + free(flow_classify); + flow_classify = NULL; + return NULL; + } + + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify->key_add, + flow_classify->entry, &flow_classify->key_found, + &flow_classify->entry_ptr); + if (ret) { + free(flow_classify->entry); + free(flow_classify); + flow_classify = NULL; + return NULL; + } + acl_rule = flow_classify->entry; + flow_classify->flow_extra_data.userdata = acl_rule->data.userdata; + + return flow_classify; +} + +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error) +{ + int ret; + int key_found; + + if (!error) + return -EINVAL; + + if (!flow_classify || !table_handle) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + ret = rte_table_acl_ops.f_delete(table_handle, + &flow_classify->key_del, &key_found, + flow_classify->entry); + if ((ret == 0) && key_found) { + free(flow_classify->entry); + free(flow_classify); + } else + ret = -1; + return 
ret; +} + +static int +flow_match(void *table, struct rte_mbuf **pkts_in, const uint16_t nb_pkts, + uint64_t *count, uint32_t userdata) +{ + int ret = -1; + int i; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + struct rte_acl_rule *entries[RTE_PORT_IN_BURST_SIZE_MAX]; + + if (nb_pkts) { + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = rte_table_acl_ops.f_lookup(table, pkts_in, + pkts_mask, &lookup_hit_mask, (void **)entries); + if (!ret) { + for (i = 0; i < nb_pkts; i++) { + if ((lookup_hit_mask & (1ULL << i)) && + (entries[i]->data.userdata == userdata)) + (*count)++; /* match found */ + } + if (*count == 0) + ret = -1; + } else + ret = -1; + } + return ret; +} + +static int +action_apply(const struct rte_flow_classify *flow_classify, + struct rte_flow_classify_stats *stats, uint64_t count) +{ + struct rte_flow_classify_5tuple_stats *ntuple_stats; + + switch (flow_classify->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + ntuple_stats = + (struct rte_flow_classify_5tuple_stats *)stats->stats; + ntuple_stats->counter1 = count; + stats->used_space = 1; + break; + default: + return -ENOTSUP; + } + + return 0; +} + +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error) +{ + uint64_t count = 0; + int ret = -EINVAL; + + if (!error) + return ret; + + if (!table_handle || !flow_classify || !pkts || !stats) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + if ((stats->available_space == 0) || (nb_pkts == 0)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + ret = flow_match(table_handle, pkts, nb_pkts, &count, + flow_classify->flow_extra_data.userdata); + if (ret == 0) + ret = action_apply(flow_classify, stats, count); + + return ret; +} diff --git 
a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..2b200fb --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,207 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * The application should define the flow and the measurement criteria (action) + * for it. + * + * The library doesn't maintain any flow records itself; instead, flow + * information is returned to the upper layer only for the given packets. + * + * It is the application's responsibility to call rte_flow_classify_query() + * for a group of packets, just after receiving them or before transmitting + * them. The application should provide the flow type it is interested in and + * the measurement to apply to that flow in the rte_flow_classify_create() API, + * and should provide the rte_flow_classify object and storage for the results + * in the rte_flow_classify_query() API. + * + * Usage: + * - the application calls rte_flow_classify_create() to create a + * rte_flow_classify object. + * - the application calls rte_flow_classify_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * convert packet information to flow information with some measurements. + * - a rte_flow_classify object can be destroyed via + * rte_flow_classify_destroy() when it is no longer needed. + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +enum rte_flow_classify_type { + RTE_FLOW_CLASSIFY_TYPE_NONE, /**< no type */ + RTE_FLOW_CLASSIFY_TYPE_5TUPLE, /**< IPv4 5tuple type */ +}; + +struct rte_flow_classify; + +/** + * Flow stats + * + * For a single action an array of stats can be returned by the API; each + * packet can return at most one stat. + * + * Storage for the stats is provided by the application; the library should + * know the available space and returns the amount of space used. + * + * The stats type is based on what measurement (action) is requested by the + * application. 
+ * + */ +struct rte_flow_classify_stats { + const unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +struct rte_flow_classify_5tuple_stats { + uint64_t counter1; /**< count of packets that match 5tuple pattern */ +}; + +/** + * Create a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] entry_size + * Size of ACL rule + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END pattern item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. + */ +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Validate a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END pattern item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Destroy a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Flow rule handle to destroy + * @param[out] error + * Perform verbose error reporting if not NULL. 
Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error); + +/** + * Get flow classification stats for given packets. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Pointer to Flow rule object + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] stats + * To store stats defined by action + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..e5a3885 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + 
uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. */ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do { \ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++; \ + item = pattern + index; \ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do { \ + act = actions + index; \ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++; \ + act = actions + index; \ + } \ + } while (0) + +/** + * Please be aware there's an 
assumption for all the parsers. + * rte_flow_item is using big endian, rte_flow_attr and + * rte_flow_action are using CPU order. + * Because the pattern is used to describe the packets, + * normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is an n-tuple rule, + * and extract the n-tuple filter info. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP, TCP or SCTP. + * The next not void item must be END. + * action: + * The first not void action should be COUNT. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * Other members in mask and spec should be set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -rte_errno; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -rte_errno; + } + if (!attr) { + rte_flow_error_set(error, EINVAL, + 
RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -rte_errno; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -rte_errno; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only support ingress."); + return -rte_errno; + } + + /* not supported */ + if (attr->egress) { + 
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Not support egress."); + return -rte_errno; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Error priority."); + return -rte_errno; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..e2c9ecf --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,10 @@ +DPDK_17.08 { + global: + + rte_flow_classify_create; + rte_flow_classify_destroy; + rte_flow_classify_query; + rte_flow_classify_validate; + + 
local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index c25fdd9..909ab95 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port @@ -84,7 +85,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile _LDLIBS-y += --whole-archive - _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash _LDLIBS-$(CONFIG_RTE_LIBRTE_VHOST) += -lrte_vhost _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/5] librte_flow_classify: add librte_flow_classify library 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 3/5] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-08-31 15:18 ` Pavan Nikhilesh Bhagavatula 0 siblings, 0 replies; 145+ messages in thread From: Pavan Nikhilesh Bhagavatula @ 2017-08-31 15:18 UTC (permalink / raw) To: Bernard Iremonger; +Cc: dev On Thu, Aug 31, 2017 at 03:54:45PM +0100, Bernard Iremonger wrote: Hi Bernard, > From: Ferruh Yigit <ferruh.yigit@intel.com> > > The following library APIs are implemented: > rte_flow_classify_create > rte_flow_classify_validate > rte_flow_classify_destroy > rte_flow_classify_query > > The following librte_table ACL APIs are used: > f_create to create an ACL table. > f_add to add an ACL rule to the table. > f_del to delete an ACL rule from the table. > f_lookup to match packets with the ACL rules. > > The library supports counting of IPv4 five-tuple packets only, > i.e. IPv4 UDP, TCP and SCTP packets. 
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- > config/common_base | 6 + > doc/api/doxy-api-index.md | 1 + > doc/api/doxy-api.conf | 1 + > lib/Makefile | 3 + > lib/librte_eal/common/include/rte_log.h | 1 + > lib/librte_flow_classify/Makefile | 51 ++ > lib/librte_flow_classify/rte_flow_classify.c | 459 +++++++++++++++++ > lib/librte_flow_classify/rte_flow_classify.h | 207 ++++++++ > lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++++++++ > lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ > .../rte_flow_classify_version.map | 10 + > mk/rte.app.mk | 2 +- > 12 files changed, 1360 insertions(+), 1 deletion(-) > create mode 100644 lib/librte_flow_classify/Makefile > create mode 100644 lib/librte_flow_classify/rte_flow_classify.c > create mode 100644 lib/librte_flow_classify/rte_flow_classify.h > create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c > create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h > create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map > > diff --git a/config/common_base b/config/common_base > index 5e97a08..e378e0a 100644 > --- a/config/common_base > +++ b/config/common_base > @@ -657,6 +657,12 @@ CONFIG_RTE_LIBRTE_GRO=y > CONFIG_RTE_LIBRTE_METER=y > > # > +# Compile librte_classify > +# > +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y > +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n > + > +# > # Compile librte_sched > # > CONFIG_RTE_LIBRTE_SCHED=y > diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md > index 19e0d4f..a2fa281 100644 > --- a/doc/api/doxy-api-index.md > +++ b/doc/api/doxy-api-index.md > @@ -105,6 +105,7 @@ The public API headers are grouped by topics: > [LPM IPv4 route] (@ref rte_lpm.h), > [LPM IPv6 route] (@ref rte_lpm6.h), > [ACL] (@ref rte_acl.h), > + [flow_classify] (@ref rte_flow_classify.h), > [EFD] (@ref rte_efd.h) > > - **QoS**: > diff --git a/doc/api/doxy-api.conf 
b/doc/api/doxy-api.conf > index 823554f..4e43a66 100644 > --- a/doc/api/doxy-api.conf > +++ b/doc/api/doxy-api.conf > @@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ > lib/librte_efd \ > lib/librte_ether \ > lib/librte_eventdev \ > + lib/librte_flow_classify \ > lib/librte_gro \ > lib/librte_hash \ > lib/librte_ip_frag \ > diff --git a/lib/Makefile b/lib/Makefile > index 86caba1..21fc3b0 100644 > --- a/lib/Makefile > +++ b/lib/Makefile > @@ -82,6 +82,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power > DEPDIRS-librte_power := librte_eal > DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter > DEPDIRS-librte_meter := librte_eal > +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify > +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net > +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port > DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched > DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net > DEPDIRS-librte_sched += librte_timer > diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h > index ec8dba7..f975bde 100644 > --- a/lib/librte_eal/common/include/rte_log.h > +++ b/lib/librte_eal/common/include/rte_log.h > @@ -87,6 +87,7 @@ struct rte_logs { > #define RTE_LOGTYPE_CRYPTODEV 17 /**< Log related to cryptodev. */ > #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ > #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ > +#define RTE_LOGTYPE_CLASSIFY 20 /**< Log related to flow classify. */ > > /* these log types can be used in an application */ > #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */ > diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile > new file mode 100644 > index 0000000..7863a0c > --- /dev/null > +++ b/lib/librte_flow_classify/Makefile > @@ -0,0 +1,51 @@ > +# BSD LICENSE > +# > +# Copyright(c) 2017 Intel Corporation. All rights reserved. > +# All rights reserved. 
> +# > +# Redistribution and use in source and binary forms, with or without > +# modification, are permitted provided that the following conditions > +# are met: > +# > +# * Redistributions of source code must retain the above copyright > +# notice, this list of conditions and the following disclaimer. > +# * Redistributions in binary form must reproduce the above copyright > +# notice, this list of conditions and the following disclaimer in > +# the documentation and/or other materials provided with the > +# distribution. > +# * Neither the name of Intel Corporation nor the names of its > +# contributors may be used to endorse or promote products derived > +# from this software without specific prior written permission. > +# > +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT > +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
> + > +include $(RTE_SDK)/mk/rte.vars.mk > + > +# library name > +LIB = librte_flow_classify.a > + > +CFLAGS += -O3 > +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) > + > +EXPORT_MAP := rte_flow_classify_version.map > + > +LIBABIVER := 1 > + > +# all source are stored in SRCS-y > +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c > +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c > + > +# install this header file > +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h > + > +include $(RTE_SDK)/mk/rte.lib.mk > diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c > new file mode 100644 > index 0000000..595e08c > --- /dev/null > +++ b/lib/librte_flow_classify/rte_flow_classify.c > @@ -0,0 +1,459 @@ > +/*- > + * BSD LICENSE > + * > + * Copyright(c) 2017 Intel Corporation. All rights reserved. > + * All rights reserved. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * > + * * Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. > + * * Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in > + * the documentation and/or other materials provided with the > + * distribution. > + * * Neither the name of Intel Corporation nor the names of its > + * contributors may be used to endorse or promote products derived > + * from this software without specific prior written permission. > + * > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > + */ > + > +#include <rte_flow_classify.h> > +#include "rte_flow_classify_parse.h" > +#include <rte_flow_driver.h> > +#include <rte_table_acl.h> > +#include <stdbool.h> > + > +static struct rte_eth_ntuple_filter ntuple_filter; > + > +enum { > + PROTO_FIELD_IPV4, > + SRC_FIELD_IPV4, > + DST_FIELD_IPV4, > + SRCP_FIELD_IPV4, > + DSTP_FIELD_IPV4, > + NUM_FIELDS_IPV4 > +}; > + > +struct ipv4_5tuple_data { > + uint16_t priority; /**< flow API uses priority 0 to 8, 0 is highest */ > + uint32_t userdata; /**< value returned for match */ > + uint8_t tcp_flags; /**< tcp_flags only meaningful TCP protocol */ > +}; > + > +struct rte_flow_classify { > + enum rte_flow_classify_type type; /**< classify type */ > + struct rte_flow_action action; /**< action when match found */ > + struct ipv4_5tuple_data flow_extra_data; /** extra rule data */ > + struct rte_table_acl_rule_add_params key_add; /**< add ACL rule key */ > + struct rte_table_acl_rule_delete_params > + key_del; /**< delete ACL rule key */ > + int key_found; /**< ACL rule key found in table */ > + void *entry; /**< pointer to buffer to hold ACL rule key */ > + void *entry_ptr; /**< handle to the table entry for the ACL rule key */ > +}; > + > +/* number of categories in an ACL context */ > +#define FLOW_CLASSIFY_NUM_CATEGORY 1 > + > +/* number of packets in a burst */ > +#define MAX_PKT_BURST 32 > + > +struct mbuf_search { > + struct rte_mbuf *m_ipv4[MAX_PKT_BURST]; > + 
uint32_t res_ipv4[MAX_PKT_BURST]; > + int num_ipv4; > +}; > + > +int > +rte_flow_classify_validate(void *table_handle, > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error) > +{ > + struct rte_flow_item *items; > + parse_filter_t parse_filter; > + uint32_t item_num = 0; > + uint32_t i = 0; > + int ret; > + > + (void) table_handle; > + > + if (!error) > + return -EINVAL; > + > + if (!pattern) { > + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, > + NULL, "NULL pattern."); > + return -EINVAL; > + } > + > + if (!actions) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > + NULL, "NULL action."); > + return -EINVAL; > + } > + > + if (!attr) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR, > + NULL, "NULL attribute."); > + return -EINVAL; > + } > + > + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); > + > + /* Get the non-void item number of pattern */ > + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { > + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) > + item_num++; > + i++; > + } > + item_num++; > + > + items = malloc(item_num * sizeof(struct rte_flow_item)); Use rte_zmalloc instead (takes care of memset and keeps memory in dpdk scope). 
> + if (!items) { > + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, > + NULL, "No memory for pattern items."); > + return -ENOMEM; > + } > + > + memset(items, 0, item_num * sizeof(struct rte_flow_item)); > + classify_pattern_skip_void_item(items, pattern); > + > + parse_filter = classify_find_parse_filter_func(items); > + if (!parse_filter) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + pattern, "Unsupported pattern"); > + return -EINVAL; > + } > + > + ret = parse_filter(attr, items, actions, &ntuple_filter, error); > + free(items); > + return ret; > +} > + > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > +#define uint32_t_to_char(ip, a, b, c, d) do {\ > + *a = (unsigned char)(ip >> 24 & 0xff);\ > + *b = (unsigned char)(ip >> 16 & 0xff);\ > + *c = (unsigned char)(ip >> 8 & 0xff);\ > + *d = (unsigned char)(ip & 0xff);\ > + } while (0) > + > +static inline void > +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) > +{ > + unsigned char a, b, c, d; > + > + printf("ipv4_key_add: 0x%02hhx/0x%hhx ", > + key->field_value[PROTO_FIELD_IPV4].value.u8, > + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); > + > + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, > + &a, &b, &c, &d); > + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > + key->field_value[SRC_FIELD_IPV4].mask_range.u32); > + > + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, > + &a, &b, &c, &d); > + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > + key->field_value[DST_FIELD_IPV4].mask_range.u32); > + > + printf("%hu : 0x%x %hu : 0x%x", > + key->field_value[SRCP_FIELD_IPV4].value.u16, > + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, > + key->field_value[DSTP_FIELD_IPV4].value.u16, > + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); > + > + printf(" priority: 0x%x\n", key->priority); > +} > + > +static inline void > +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) > +{ > + unsigned char a, b, c, d; > + > + 
printf("ipv4_key_del: 0x%02hhx/0x%hhx ", > + key->field_value[PROTO_FIELD_IPV4].value.u8, > + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); > + > + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, > + &a, &b, &c, &d); > + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > + key->field_value[SRC_FIELD_IPV4].mask_range.u32); > + > + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, > + &a, &b, &c, &d); > + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, > + key->field_value[DST_FIELD_IPV4].mask_range.u32); > + > + printf("%hu : 0x%x %hu : 0x%x\n", > + key->field_value[SRCP_FIELD_IPV4].value.u16, > + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, > + key->field_value[DSTP_FIELD_IPV4].value.u16, > + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); > +} > +#endif > + > +static struct rte_flow_classify * > +allocate_5tuple(void) > +{ > + struct rte_flow_classify *flow_classify; > + > + flow_classify = malloc(sizeof(struct rte_flow_classify)); > + if (!flow_classify) > + return flow_classify; > + > + memset(flow_classify, 0, sizeof(struct rte_flow_classify)); > + > + flow_classify->type = RTE_FLOW_CLASSIFY_TYPE_5TUPLE; > + memcpy(&flow_classify->action, classify_get_flow_action(), > + sizeof(struct rte_flow_action)); > + rte_zmalloc & rte_memcpy would be more efficient. 
> + flow_classify->flow_extra_data.priority = ntuple_filter.priority; > + flow_classify->flow_extra_data.tcp_flags = ntuple_filter.tcp_flags; > + > + /* key add values */ > + flow_classify->key_add.priority = ntuple_filter.priority; > + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = > + ntuple_filter.proto_mask; > + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].value.u8 = > + ntuple_filter.proto; > + > + flow_classify->key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = > + ntuple_filter.src_ip_mask; > + flow_classify->key_add.field_value[SRC_FIELD_IPV4].value.u32 = > + ntuple_filter.src_ip; > + > + flow_classify->key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = > + ntuple_filter.dst_ip_mask; > + flow_classify->key_add.field_value[DST_FIELD_IPV4].value.u32 = > + ntuple_filter.dst_ip; > + > + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = > + ntuple_filter.src_port_mask; > + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].value.u16 = > + ntuple_filter.src_port; > + > + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = > + ntuple_filter.dst_port_mask; > + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].value.u16 = > + ntuple_filter.dst_port; > + > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > + print_ipv4_key_add(&flow_classify->key_add); > +#endif > + > + /* key delete values */ > + memcpy(&flow_classify->key_del.field_value[PROTO_FIELD_IPV4], > + &flow_classify->key_add.field_value[PROTO_FIELD_IPV4], > + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); > + > +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG > + print_ipv4_key_delete(&flow_classify->key_del); > +#endif > + return flow_classify; > +} > + > +struct rte_flow_classify * > +rte_flow_classify_create(void *table_handle, > + uint32_t entry_size, > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error) > +{ > + struct rte_flow_classify *flow_classify; > 
+ struct rte_acl_rule *acl_rule; > + int ret; > + > + if (!error) > + return NULL; > + > + if (!table_handle) { > + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, > + NULL, "NULL table_handle."); > + return NULL; > + } > + > + if (!pattern) { > + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, > + NULL, "NULL pattern."); > + return NULL; > + } > + > + if (!actions) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > + NULL, "NULL action."); > + return NULL; > + } > + > + if (!attr) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR, > + NULL, "NULL attribute."); > + return NULL; > + } > + > + /* parse attr, pattern and actions */ > + ret = rte_flow_classify_validate(table_handle, attr, pattern, > + actions, error); > + if (ret < 0) > + return NULL; > + > + flow_classify = allocate_5tuple(); > + if (!flow_classify) > + return NULL; > + > + flow_classify->entry = malloc(entry_size); > + if (!flow_classify->entry) { > + free(flow_classify); > + flow_classify = NULL; > + return NULL; > + } > + > + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify->key_add, > + flow_classify->entry, &flow_classify->key_found, > + &flow_classify->entry_ptr); > + if (ret) { > + free(flow_classify->entry); > + free(flow_classify); > + flow_classify = NULL; > + return NULL; > + } > + acl_rule = flow_classify->entry; > + flow_classify->flow_extra_data.userdata = acl_rule->data.userdata; > + > + return flow_classify; > +} > + > +int > +rte_flow_classify_destroy(void *table_handle, > + struct rte_flow_classify *flow_classify, > + struct rte_flow_error *error) > +{ > + int ret; > + int key_found; > + > + if (!error) > + return -EINVAL; > + > + if (!flow_classify || !table_handle) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "invalid input"); > + return -EINVAL; > + } > + > + ret = rte_table_acl_ops.f_delete(table_handle, > + &flow_classify->key_del, &key_found, > + 
flow_classify->entry); > + if ((ret == 0) && key_found) { > + free(flow_classify->entry); > + free(flow_classify); > + } else > + ret = -1; > + return ret; > +} > + > +static int > +flow_match(void *table, struct rte_mbuf **pkts_in, const uint16_t nb_pkts, > + uint64_t *count, uint32_t userdata) > +{ > + int ret = -1; > + int i; > + uint64_t pkts_mask; > + uint64_t lookup_hit_mask; > + struct rte_acl_rule *entries[RTE_PORT_IN_BURST_SIZE_MAX]; > + > + if (nb_pkts) { > + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); > + ret = rte_table_acl_ops.f_lookup(table, pkts_in, > + pkts_mask, &lookup_hit_mask, (void **)entries); > + if (!ret) { > + for (i = 0; i < nb_pkts && > + (lookup_hit_mask & (1ULL << i)); i++) { > + if (entries[i]->data.userdata == userdata) > + (*count)++; /* match found */ > + } > + if (*count == 0) > + ret = -1; > + } else > + ret = -1; > + } > + return ret; > +} > + > +static int > +action_apply(const struct rte_flow_classify *flow_classify, > + struct rte_flow_classify_stats *stats, uint64_t count) > +{ > + struct rte_flow_classify_5tuple_stats *ntuple_stats; > + > + switch (flow_classify->action.type) { > + case RTE_FLOW_ACTION_TYPE_COUNT: > + ntuple_stats = > + (struct rte_flow_classify_5tuple_stats *)stats->stats; > + ntuple_stats->counter1 = count; > + stats->used_space = 1; > + break; > + default: > + return -ENOTSUP; > + } > + > + return 0; > +} > + > +int > +rte_flow_classify_query(void *table_handle, > + const struct rte_flow_classify *flow_classify, > + struct rte_mbuf **pkts, > + const uint16_t nb_pkts, > + struct rte_flow_classify_stats *stats, > + struct rte_flow_error *error) > +{ > + uint64_t count = 0; > + int ret = -EINVAL; > + > + if (!error) > + return ret; > + > + if (!table_handle || !flow_classify || !pkts || !stats) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "invalid input"); > + return ret; > + } > + > + if ((stats->available_space == 0) || (nb_pkts == 0)) { > + rte_flow_error_set(error, 
EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + NULL, "invalid input"); > + return ret; > + } > + > + ret = flow_match(table_handle, pkts, nb_pkts, &count, > + flow_classify->flow_extra_data.userdata); > + if (ret == 0) > + ret = action_apply(flow_classify, stats, count); > + > + return ret; > +} > diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h > new file mode 100644 > index 0000000..2b200fb > --- /dev/null > +++ b/lib/librte_flow_classify/rte_flow_classify.h > @@ -0,0 +1,207 @@ > +/*- > + * BSD LICENSE > + * > + * Copyright(c) 2017 Intel Corporation. All rights reserved. > + * All rights reserved. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * > + * * Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. > + * * Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in > + * the documentation and/or other materials provided with the > + * distribution. > + * * Neither the name of Intel Corporation nor the names of its > + * contributors may be used to endorse or promote products derived > + * from this software without specific prior written permission. > + * > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > + */ > + > +#ifndef _RTE_FLOW_CLASSIFY_H_ > +#define _RTE_FLOW_CLASSIFY_H_ > + > +/** > + * @file > + * > + * RTE Flow Classify Library > + * > + * This library provides flow record information with some measured properties. > + * > + * Application should define the flow and measurement criteria (action) for it. > + * > + * Library doesn't maintain any flow records itself, instead flow information is > + * returned to upper layer only for given packets. > + * > + * It is application's responsibility to call rte_flow_classify_query() > + * for group of packets, just after receiving them or before transmitting them. > + * Application should provide the flow type interested in, measurement to apply > + * to that flow in rte_flow_classify_create() API, and should provide > + * rte_flow_classify object and storage to put results in > + * rte_flow_classify_query() API. > + * > + * Usage: > + * - application calls rte_flow_classify_create() to create a rte_flow_classify > + * object. > + * - application calls rte_flow_classify_query() in a polling manner, > + * preferably after rte_eth_rx_burst(). This will cause the library to > + * convert packet information to flow information with some measurements. 
> + * - rte_flow_classify object can be destroyed when they are no more needed > + * via rte_flow_classify_destroy() > + */ > + > +#include <rte_ethdev.h> > +#include <rte_ether.h> > +#include <rte_flow.h> > +#include <rte_acl.h> > + > +#ifdef __cplusplus > +extern "C" { > +#endif > + > +enum rte_flow_classify_type { > + RTE_FLOW_CLASSIFY_TYPE_NONE, /**< no type */ > + RTE_FLOW_CLASSIFY_TYPE_5TUPLE, /**< IPv4 5tuple type */ > +}; > + > +struct rte_flow_classify; > + > +/** > + * Flow stats > + * > + * For single action an array of stats can be returned by API. Technically each > + * packet can return a stat at max. > + * > + * Storage for stats is provided by application, library should know available > + * space, and should return the number of used space. > + * > + * stats type is based on what measurement (action) requested by application. > + * > + */ > +struct rte_flow_classify_stats { > + const unsigned int available_space; > + unsigned int used_space; > + void **stats; > +}; > + > +struct rte_flow_classify_5tuple_stats { > + uint64_t counter1; /**< count of packets that match 5tupple pattern */ > +}; > + > +/** > + * Create a flow classify rule. > + * > + * @param[in] table_handle > + * Pointer to table ACL > + * @param[in] entry_size > + * Size of ACL rule > + * @param[in] attr > + * Flow rule attributes > + * @param[in] pattern > + * Pattern specification (list terminated by the END pattern item). > + * @param[in] actions > + * Associated actions (list terminated by the END pattern item). > + * @param[out] error > + * Perform verbose error reporting if not NULL. Structure > + * initialised in case of error only. > + * @return > + * A valid handle in case of success, NULL otherwise. 
> + */ > +struct rte_flow_classify * > +rte_flow_classify_create(void *table_handle, > + uint32_t entry_size, > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error); > + > +/** > + * Validate a flow classify rule. > + * > + * @param[in] table_handle > + * Pointer to table ACL > + * @param[in] attr > + * Flow rule attributes > + * @param[in] pattern > + * Pattern specification (list terminated by the END pattern item). > + * @param[in] actions > + * Associated actions (list terminated by the END pattern item). > + * @param[out] error > + * Perform verbose error reporting if not NULL. Structure > + * initialised in case of error only. > + * > + * @return > + * 0 on success, a negative errno value otherwise. > + */ > +int > +rte_flow_classify_validate(void *table_handle, > + const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_flow_error *error); > + > +/** > + * Destroy a flow classify rule. > + * > + * @param[in] table_handle > + * Pointer to table ACL > + * @param[in] flow_classify > + * Flow rule handle to destroy > + * @param[out] error > + * Perform verbose error reporting if not NULL. Structure > + * initialised in case of error only. > + * > + * @return > + * 0 on success, a negative errno value otherwise. > + */ > +int > +rte_flow_classify_destroy(void *table_handle, > + struct rte_flow_classify *flow_classify, > + struct rte_flow_error *error); > + > +/** > + * Get flow classification stats for given packets. > + * > + * @param[in] table_handle > + * Pointer to table ACL > + * @param[in] flow_classify > + * Pointer to Flow rule object > + * @param[in] pkts > + * Pointer to packets to process > + * @param[in] nb_pkts > + * Number of packets to process > + * @param[in] stats > + * To store stats defined by action > + * @param[out] error > + * Perform verbose error reporting if not NULL. 
Structure > + * initialised in case of error only. > + * > + * @return > + * 0 on success, a negative errno value otherwise. > + */ > +int > +rte_flow_classify_query(void *table_handle, > + const struct rte_flow_classify *flow_classify, > + struct rte_mbuf **pkts, > + const uint16_t nb_pkts, > + struct rte_flow_classify_stats *stats, > + struct rte_flow_error *error); > + > +#ifdef __cplusplus > +} > +#endif > + > +#endif /* _RTE_FLOW_CLASSIFY_H_ */ > diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c > new file mode 100644 > index 0000000..e5a3885 > --- /dev/null > +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c > @@ -0,0 +1,546 @@ > +/*- > + * BSD LICENSE > + * > + * Copyright(c) 2017 Intel Corporation. All rights reserved. > + * All rights reserved. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * > + * * Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. > + * * Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in > + * the documentation and/or other materials provided with the > + * distribution. > + * * Neither the name of Intel Corporation nor the names of its > + * contributors may be used to endorse or promote products derived > + * from this software without specific prior written permission. > + * > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > + */ > + > +#include <rte_flow_classify.h> > +#include "rte_flow_classify_parse.h" > +#include <rte_flow_driver.h> > + > +struct classify_valid_pattern { > + enum rte_flow_item_type *items; > + parse_filter_t parse_filter; > +}; > + > +static struct rte_flow_action action; > + > +/* Pattern matched ntuple filter */ > +static enum rte_flow_item_type pattern_ntuple_1[] = { > + RTE_FLOW_ITEM_TYPE_ETH, > + RTE_FLOW_ITEM_TYPE_IPV4, > + RTE_FLOW_ITEM_TYPE_UDP, > + RTE_FLOW_ITEM_TYPE_END, > +}; > + > +/* Pattern matched ntuple filter */ > +static enum rte_flow_item_type pattern_ntuple_2[] = { > + RTE_FLOW_ITEM_TYPE_ETH, > + RTE_FLOW_ITEM_TYPE_IPV4, > + RTE_FLOW_ITEM_TYPE_TCP, > + RTE_FLOW_ITEM_TYPE_END, > +}; > + > +/* Pattern matched ntuple filter */ > +static enum rte_flow_item_type pattern_ntuple_3[] = { > + RTE_FLOW_ITEM_TYPE_ETH, > + RTE_FLOW_ITEM_TYPE_IPV4, > + RTE_FLOW_ITEM_TYPE_SCTP, > + RTE_FLOW_ITEM_TYPE_END, > +}; > + > +static int > +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_eth_ntuple_filter *filter, > + struct rte_flow_error *error); > + > +static struct classify_valid_pattern classify_supported_patterns[] = { > + /* ntuple */ > + { pattern_ntuple_1, classify_parse_ntuple_filter }, > + { pattern_ntuple_2, classify_parse_ntuple_filter }, > + { pattern_ntuple_3, classify_parse_ntuple_filter }, 
> +}; > + > +struct rte_flow_action * > +classify_get_flow_action(void) > +{ > + return &action; > +} > + > +/* Find the first VOID or non-VOID item pointer */ > +const struct rte_flow_item * > +classify_find_first_item(const struct rte_flow_item *item, bool is_void) > +{ > + bool is_find; > + > + while (item->type != RTE_FLOW_ITEM_TYPE_END) { > + if (is_void) > + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; > + else > + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; > + if (is_find) > + break; > + item++; > + } > + return item; > +} > + > +/* Skip all VOID items of the pattern */ > +void > +classify_pattern_skip_void_item(struct rte_flow_item *items, > + const struct rte_flow_item *pattern) > +{ > + uint32_t cpy_count = 0; > + const struct rte_flow_item *pb = pattern, *pe = pattern; > + > + for (;;) { > + /* Find a non-void item first */ > + pb = classify_find_first_item(pb, false); > + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { > + pe = pb; > + break; > + } > + > + /* Find a void item */ > + pe = classify_find_first_item(pb + 1, true); > + > + cpy_count = pe - pb; > + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); > + > + items += cpy_count; > + > + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { > + pb = pe; > + break; > + } > + > + pb = pe + 1; > + } > + /* Copy the END item. 
*/ > + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); > +} > + > +/* Check if the pattern matches a supported item type array */ > +static bool > +classify_match_pattern(enum rte_flow_item_type *item_array, > + struct rte_flow_item *pattern) > +{ > + struct rte_flow_item *item = pattern; > + > + while ((*item_array == item->type) && > + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { > + item_array++; > + item++; > + } > + > + return (*item_array == RTE_FLOW_ITEM_TYPE_END && > + item->type == RTE_FLOW_ITEM_TYPE_END); > +} > + > +/* Find if there's parse filter function matched */ > +parse_filter_t > +classify_find_parse_filter_func(struct rte_flow_item *pattern) > +{ > + parse_filter_t parse_filter = NULL; > + uint8_t i = 0; > + > + for (; i < RTE_DIM(classify_supported_patterns); i++) { > + if (classify_match_pattern(classify_supported_patterns[i].items, > + pattern)) { > + parse_filter = > + classify_supported_patterns[i].parse_filter; > + break; > + } > + } > + > + return parse_filter; > +} > + > +#define FLOW_RULE_MIN_PRIORITY 8 > +#define FLOW_RULE_MAX_PRIORITY 0 > + > +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ > + do { \ > + item = pattern + index;\ > + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ > + index++; \ > + item = pattern + index; \ > + } \ > + } while (0) > + > +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ > + do { \ > + act = actions + index; \ > + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ > + index++; \ > + act = actions + index; \ > + } \ > + } while (0) > + > +/** > + * Please be aware there is an assumption shared by all the parsers: > + * rte_flow_item uses big endian, rte_flow_attr and > + * rte_flow_action use CPU order. > + * Because the pattern is used to describe the packets, > + * normally the packets should use network order. > + */ > + > +/** > + * Parse the rule to see if it is an n-tuple rule, > + * and fill in the n-tuple filter info as a side effect. > + * pattern: > + * The first not void item can be ETH or IPV4. 
> + * The second not void item must be IPV4 if the first one is ETH. > + * The third not void item must be UDP, TCP or SCTP. > + * The next not void item must be END. > + * action: > + * The first not void action should be COUNT. > + * The next not void action should be END. > + * pattern example: > + * ITEM Spec Mask > + * ETH NULL NULL > + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF > + * dst_addr 192.167.3.50 0xFFFFFFFF > + * next_proto_id 17 0xFF > + * UDP/TCP/ src_port 80 0xFFFF > + * SCTP dst_port 80 0xFFFF > + * END > + * other members in mask and spec should be set to 0x00. > + * item->last should be NULL. > + */ > +static int > +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_eth_ntuple_filter *filter, > + struct rte_flow_error *error) > +{ > + const struct rte_flow_item *item; > + const struct rte_flow_action *act; > + const struct rte_flow_item_ipv4 *ipv4_spec; > + const struct rte_flow_item_ipv4 *ipv4_mask; > + const struct rte_flow_item_tcp *tcp_spec; > + const struct rte_flow_item_tcp *tcp_mask; > + const struct rte_flow_item_udp *udp_spec; > + const struct rte_flow_item_udp *udp_mask; > + const struct rte_flow_item_sctp *sctp_spec; > + const struct rte_flow_item_sctp *sctp_mask; > + uint32_t index; > + > + if (!pattern) { > + rte_flow_error_set(error, > + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, > + NULL, "NULL pattern."); > + return -rte_errno; > + } > + > + if (!actions) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION_NUM, > + NULL, "NULL action."); > + return -rte_errno; > + } > + if (!attr) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR, > + NULL, "NULL attribute."); > + return -rte_errno; > + } > + > + /* parse pattern */ > + index = 0; > + > + /* the first not void item can be MAC or IPv4 */ > + NEXT_ITEM_OF_PATTERN(item, pattern, index); > + > + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && > + 
item->type != RTE_FLOW_ITEM_TYPE_IPV4) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Not supported by ntuple filter"); > + return -rte_errno; > + } > + /* Skip Ethernet */ > + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { > + /*Not supported last point for range*/ > + if (item->last) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + item, > + "Not supported last point for range"); > + return -rte_errno; > + > + } > + /* if the first item is MAC, the content should be NULL */ > + if (item->spec || item->mask) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, > + "Not supported by ntuple filter"); > + return -rte_errno; > + } > + /* check if the next not void item is IPv4 */ > + index++; > + NEXT_ITEM_OF_PATTERN(item, pattern, index); > + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, > + "Not supported by ntuple filter"); > + return -rte_errno; > + } > + } > + > + /* get the IPv4 info */ > + if (!item->spec || !item->mask) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Invalid ntuple mask"); > + return -rte_errno; > + } > + /*Not supported last point for range*/ > + if (item->last) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + item, "Not supported last point for range"); > + return -rte_errno; > + > + } > + > + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; > + /** > + * Only support src & dst addresses, protocol, > + * others should be masked. 
> + */ > + if (ipv4_mask->hdr.version_ihl || > + ipv4_mask->hdr.type_of_service || > + ipv4_mask->hdr.total_length || > + ipv4_mask->hdr.packet_id || > + ipv4_mask->hdr.fragment_offset || > + ipv4_mask->hdr.time_to_live || > + ipv4_mask->hdr.hdr_checksum) { > + rte_flow_error_set(error, > + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Not supported by ntuple filter"); > + return -rte_errno; > + } > + > + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; > + filter->src_ip_mask = ipv4_mask->hdr.src_addr; > + filter->proto_mask = ipv4_mask->hdr.next_proto_id; > + > + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; > + filter->dst_ip = ipv4_spec->hdr.dst_addr; > + filter->src_ip = ipv4_spec->hdr.src_addr; > + filter->proto = ipv4_spec->hdr.next_proto_id; > + > + /* check if the next not void item is TCP or UDP or SCTP */ > + index++; > + NEXT_ITEM_OF_PATTERN(item, pattern, index); > + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && > + item->type != RTE_FLOW_ITEM_TYPE_UDP && > + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Not supported by ntuple filter"); > + return -rte_errno; > + } > + > + /* get the TCP/UDP info */ > + if (!item->spec || !item->mask) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Invalid ntuple mask"); > + return -rte_errno; > + } > + > + /*Not supported last point for range*/ > + if (item->last) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, > + item, "Not supported last point for range"); > + return -rte_errno; > + > + } > + > + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { > + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; > + > + /** > + * Only support src & dst ports, tcp flags, > + * others should be masked. 
> + */ > + if (tcp_mask->hdr.sent_seq || > + tcp_mask->hdr.recv_ack || > + tcp_mask->hdr.data_off || > + tcp_mask->hdr.rx_win || > + tcp_mask->hdr.cksum || > + tcp_mask->hdr.tcp_urp) { > + memset(filter, 0, > + sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Not supported by ntuple filter"); > + return -rte_errno; > + } > + > + filter->dst_port_mask = tcp_mask->hdr.dst_port; > + filter->src_port_mask = tcp_mask->hdr.src_port; > + if (tcp_mask->hdr.tcp_flags == 0xFF) { > + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; > + } else if (!tcp_mask->hdr.tcp_flags) { > + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; > + } else { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Not supported by ntuple filter"); > + return -rte_errno; > + } > + > + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; > + filter->dst_port = tcp_spec->hdr.dst_port; > + filter->src_port = tcp_spec->hdr.src_port; > + filter->tcp_flags = tcp_spec->hdr.tcp_flags; > + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { > + udp_mask = (const struct rte_flow_item_udp *)item->mask; > + > + /** > + * Only support src & dst ports, > + * others should be masked. 
> + */ > + if (udp_mask->hdr.dgram_len || > + udp_mask->hdr.dgram_cksum) { > + memset(filter, 0, > + sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Not supported by ntuple filter"); > + return -rte_errno; > + } > + > + filter->dst_port_mask = udp_mask->hdr.dst_port; > + filter->src_port_mask = udp_mask->hdr.src_port; > + > + udp_spec = (const struct rte_flow_item_udp *)item->spec; > + filter->dst_port = udp_spec->hdr.dst_port; > + filter->src_port = udp_spec->hdr.src_port; > + } else { > + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; > + > + /** > + * Only support src & dst ports, > + * others should be masked. > + */ > + if (sctp_mask->hdr.tag || > + sctp_mask->hdr.cksum) { > + memset(filter, 0, > + sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Not supported by ntuple filter"); > + return -rte_errno; > + } > + > + filter->dst_port_mask = sctp_mask->hdr.dst_port; > + filter->src_port_mask = sctp_mask->hdr.src_port; > + > + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; > + filter->dst_port = sctp_spec->hdr.dst_port; > + filter->src_port = sctp_spec->hdr.src_port; > + } > + > + /* check if the next not void item is END */ > + index++; > + NEXT_ITEM_OF_PATTERN(item, pattern, index); > + if (item->type != RTE_FLOW_ITEM_TYPE_END) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ITEM, > + item, "Not supported by ntuple filter"); > + return -rte_errno; > + } > + > + /* parse action */ > + index = 0; > + > + /** > + * n-tuple only supports count, > + * check if the first not void action is COUNT. 
*/ > + memset(&action, 0, sizeof(action)); > + NEXT_ITEM_OF_ACTION(act, actions, index); > + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, > + act, "Not supported action."); > + return -rte_errno; > + } > + action.type = RTE_FLOW_ACTION_TYPE_COUNT; > + > + /* check if the next not void item is END */ > + index++; > + NEXT_ITEM_OF_ACTION(act, actions, index); > + if (act->type != RTE_FLOW_ACTION_TYPE_END) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ACTION, > + act, "Not supported action."); > + return -rte_errno; > + } > + > + /* parse attr */ > + /* must be input direction */ > + if (!attr->ingress) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, > + attr, "Only support ingress."); > + return -rte_errno; > + } > + > + /* not supported */ > + if (attr->egress) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, > + attr, "Not support egress."); > + return -rte_errno; > + } > + > + if (attr->priority > 0xFFFF) { > + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, > + attr, "Error priority."); > + return -rte_errno; > + } > + filter->priority = (uint16_t)attr->priority; > + if (attr->priority > FLOW_RULE_MIN_PRIORITY) > + filter->priority = FLOW_RULE_MAX_PRIORITY; > + > + return 0; > +} > diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h > new file mode 100644 > index 0000000..1d4708a > --- /dev/null > +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h > @@ -0,0 +1,74 @@ > +/*- > + * BSD LICENSE > + * > + * Copyright(c) 2017 
Intel Corporation. All rights reserved. > + * All rights reserved. > + * > + * Redistribution and use in source and binary forms, with or without > + * modification, are permitted provided that the following conditions > + * are met: > + * > + * * Redistributions of source code must retain the above copyright > + * notice, this list of conditions and the following disclaimer. > + * * Redistributions in binary form must reproduce the above copyright > + * notice, this list of conditions and the following disclaimer in > + * the documentation and/or other materials provided with the > + * distribution. > + * * Neither the name of Intel Corporation nor the names of its > + * contributors may be used to endorse or promote products derived > + * from this software without specific prior written permission. > + * > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT > + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, > + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT > + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, > + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY > + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT > + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE > + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
> + */ > + > +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ > +#define _RTE_FLOW_CLASSIFY_PARSE_H_ > + > +#include <rte_ethdev.h> > +#include <rte_ether.h> > +#include <rte_flow.h> > +#include <stdbool.h> > + > +#ifdef __cplusplus > +extern "C" { > +#endif > + > +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + struct rte_eth_ntuple_filter *filter, > + struct rte_flow_error *error); > + > +/* Skip all VOID items of the pattern */ > +void > +classify_pattern_skip_void_item(struct rte_flow_item *items, > + const struct rte_flow_item *pattern); > + > +/* Find the first VOID or non-VOID item pointer */ > +const struct rte_flow_item * > +classify_find_first_item(const struct rte_flow_item *item, bool is_void); > + > + > +/* Find if there's parse filter function matched */ > +parse_filter_t > +classify_find_parse_filter_func(struct rte_flow_item *pattern); > + > +/* get action data */ > +struct rte_flow_action * > +classify_get_flow_action(void); > + > +#ifdef __cplusplus > +} > +#endif > + > +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ > diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map > new file mode 100644 > index 0000000..e2c9ecf > --- /dev/null > +++ b/lib/librte_flow_classify/rte_flow_classify_version.map > @@ -0,0 +1,10 @@ > +DPDK_17.08 { > + global: > + > + rte_flow_classify_create; > + rte_flow_classify_destroy; > + rte_flow_classify_query; > + rte_flow_classify_validate; > + > + local: *; > +}; > diff --git a/mk/rte.app.mk b/mk/rte.app.mk > index c25fdd9..909ab95 100644 > --- a/mk/rte.app.mk > +++ b/mk/rte.app.mk > @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib > # > # Order is important: from higher level to lower level > # > +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify > _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline > _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += 
-lrte_table > _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port > @@ -84,7 +85,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd > _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile > > _LDLIBS-y += --whole-archive > - > _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash > _LDLIBS-$(CONFIG_RTE_LIBRTE_VHOST) += -lrte_vhost > _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs > -- > 1.9.1 > ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v3 4/5] examples/flow_classify: flow classify sample application 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " Bernard Iremonger ` (3 preceding siblings ...) 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 3/5] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-08-31 14:54 ` Bernard Iremonger 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 5/5] test: flow classify library unit tests Bernard Iremonger 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-31 14:54 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify API's: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query It sets up the IPv4 ACL field definitions. It creates table_acl and adds and deletes rules using the librte_table API. It uses a file of IPv4 five tuple rules for input. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 ++ examples/flow_classify/flow_classify.c | 879 +++++++++++++++++++++++++++++ examples/flow_classify/ipv4_rules_file.txt | 14 + 3 files changed, 950 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c create mode 100644 examples/flow_classify/ipv4_rules_file.txt diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..99b3e6e --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,879 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <getopt.h> + +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 +#define MAX_NUM_CLASSIFY 30 +#define FLOW_CLASSIFY_MAX_RULE_NUM 91 +#define FLOW_CLASSIFY_MAX_PRIORITY 8 +#define PROTO_TCP 6 +#define PROTO_UDP 17 +#define PROTO_SCTP 132 + +#define COMMENT_LEAD_CHAR ('#') +#define OPTION_RULE_IPV4 "rule_ipv4" +#define RTE_LOGTYPE_FLOW_CLASSIFY RTE_LOGTYPE_USER3 +#define flow_classify_log(format, ...) 
\ + RTE_LOG(ERR, FLOW_CLASSIFY, format, ##__VA_ARGS__) + +#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +#define GET_CB_FIELD(in, fd, base, lim, dlm) do { \ + unsigned long val; \ + char *end; \ + errno = 0; \ + val = strtoul((in), &end, (base)); \ + if (errno != 0 || end[0] != (dlm) || val > (lim)) \ + return -EINVAL; \ + (fd) = (typeof(fd))val; \ + (in) = end + 1; \ +} while (0) + +enum { + CB_FLD_SRC_ADDR, + CB_FLD_DST_ADDR, + CB_FLD_SRC_PORT, + CB_FLD_SRC_PORT_DLM, + CB_FLD_SRC_PORT_MASK, + CB_FLD_DST_PORT, + CB_FLD_DST_PORT_DLM, + CB_FLD_DST_PORT_MASK, + CB_FLD_PROTO, + CB_FLD_PRIORITY, + CB_FLD_NUM, +}; + +static struct{ + const char *rule_ipv4_name; +} parm_config; +const char cb_port_delim[] = ":"; + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +static void *table_acl; +uint32_t entry_size; +static int udp_num_classify; +static int tcp_num_classify; +static int sctp_num_classify; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static struct rte_flow_classify *udp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *tcp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *sctp_flow_classify[MAX_NUM_CLASSIFY]; + +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = 
(void **)&tcp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: * Based on DPDK skeleton forwarding example. */ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. 
*/ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. */ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port, classifying the packets and writing to an output port. + */ +static __attribute__((noreturn)) void +lcore_main(void) +{ + struct rte_flow_error error; + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i; + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) + printf("\n\nWARNING: port %u is on remote NUMA node " + "to polling thread.\n" + "\tPerformance will not be optimal.\n", + port); + + printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n", + rte_lcore_id()); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + + /* Get burst of RX packets, from first port of pair. 
*/ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (udp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + udp_flow_classify[i], + bufs, nb_rx, + &udp_classify_stats, &error); + if (ret) + printf( + "udp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "udp rule [%d] counter1=%lu used_space=%d\n\n", + i, udp_ntuple_stats.counter1, + udp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (tcp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + tcp_flow_classify[i], + bufs, nb_rx, + &tcp_classify_stats, &error); + if (ret) + printf( + "tcp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "tcp rule [%d] counter1=%lu used_space=%d\n\n", + i, tcp_ntuple_stats.counter1, + tcp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (sctp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + sctp_flow_classify[i], + bufs, nb_rx, + &sctp_classify_stats, &error); + if (ret) + printf( + "sctp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "sctp rule [%d] counter1=%lu used_space=%d\n\n", + i, sctp_ntuple_stats.counter1, + sctp_classify_stats.used_space); + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. */ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * Parse IPv4 5 tuple rules file, ipv4_rules_file.txt. 
+ * Expected format: + * <src_ipv4_addr>'/'<masklen> <space> \ + * <dst_ipv4_addr>'/'<masklen> <space> \ + * <src_port> <space> ":" <src_port_mask> <space> \ + * <dst_port> <space> ":" <dst_port_mask> <space> \ + * <proto>'/'<proto_mask> <space> \ + * <priority> + */ + +static int +parse_ipv4_net(const char *in, uint32_t *addr, uint32_t *mask_len) +{ + uint8_t a, b, c, d, m; + + GET_CB_FIELD(in, a, 0, UINT8_MAX, '.'); + GET_CB_FIELD(in, b, 0, UINT8_MAX, '.'); + GET_CB_FIELD(in, c, 0, UINT8_MAX, '.'); + GET_CB_FIELD(in, d, 0, UINT8_MAX, '/'); + GET_CB_FIELD(in, m, 0, sizeof(uint32_t) * CHAR_BIT, 0); + + addr[0] = IPv4(a, b, c, d); + mask_len[0] = m; + + return 0; +} + +static int +parse_ipv4_5tuple_rule(char *str, struct rte_eth_ntuple_filter *ntuple_filter) +{ + int i, ret; + char *s, *sp, *in[CB_FLD_NUM]; + static const char *dlm = " \t\n"; + int dim = CB_FLD_NUM; + + s = str; + for (i = 0; i != dim; i++, s = NULL) { + in[i] = strtok_r(s, dlm, &sp); + if (in[i] == NULL) + return -EINVAL; + } + + ret = parse_ipv4_net(in[CB_FLD_SRC_ADDR], + &ntuple_filter->src_ip, + &ntuple_filter->src_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read source address/mask: %s\n", + in[CB_FLD_SRC_ADDR]); + return ret; + } + + ret = parse_ipv4_net(in[CB_FLD_DST_ADDR], + &ntuple_filter->dst_ip, + &ntuple_filter->dst_ip_mask); + if (ret != 0) { + flow_classify_log("failed to read destination address/mask: %s\n", + in[CB_FLD_DST_ADDR]); + return ret; + } + + GET_CB_FIELD(in[CB_FLD_SRC_PORT], + ntuple_filter->src_port, 0, UINT16_MAX, 0); + + if (strncmp(in[CB_FLD_SRC_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + GET_CB_FIELD(in[CB_FLD_SRC_PORT_MASK], + ntuple_filter->src_port_mask, 0, UINT16_MAX, 0); + + GET_CB_FIELD(in[CB_FLD_DST_PORT], + ntuple_filter->dst_port, 0, UINT16_MAX, 0); + + if (strncmp(in[CB_FLD_DST_PORT_DLM], cb_port_delim, + sizeof(cb_port_delim)) != 0) + return -EINVAL; + + GET_CB_FIELD(in[CB_FLD_DST_PORT_MASK], + 
ntuple_filter->dst_port_mask, 0, UINT16_MAX, 0); + + GET_CB_FIELD(in[CB_FLD_PROTO], ntuple_filter->proto, + 0, UINT8_MAX, '/'); + GET_CB_FIELD(in[CB_FLD_PROTO], ntuple_filter->proto_mask, + 0, UINT8_MAX, 0); + + GET_CB_FIELD(in[CB_FLD_PRIORITY], ntuple_filter->priority, 0, + UINT16_MAX, 0); + if (ntuple_filter->priority > FLOW_CLASSIFY_MAX_PRIORITY) + ret = -EINVAL; + + return ret; +} + +/* Bypass comment and empty lines */ +static inline int +is_bypass_line(char *buff) +{ + int i = 0; + + /* comment line */ + if (buff[0] == COMMENT_LEAD_CHAR) + return 1; + /* empty line */ + while (buff[i] != '\0') { + if (!isspace(buff[i])) + return 0; + i++; + } + return 1; +} + +static uint32_t +convert_depth_to_bitmask(uint32_t depth_val) +{ + uint32_t bitmask = 0; + int i, j; + + for (i = depth_val, j = 0; i > 0; i--, j++) + bitmask |= (1 << (31 - j)); + return bitmask; +} + +static int +add_classify_rule(struct rte_eth_ntuple_filter *ntuple_filter) +{ + int ret = 0; + struct rte_flow_error error; + struct rte_flow_item_ipv4 ipv4_spec; + struct rte_flow_item_ipv4 ipv4_mask; + struct rte_flow_item ipv4_udp_item; + struct rte_flow_item ipv4_tcp_item; + struct rte_flow_item ipv4_sctp_item; + struct rte_flow_item_udp udp_spec; + struct rte_flow_item_udp udp_mask; + struct rte_flow_item udp_item; + struct rte_flow_item_tcp tcp_spec; + struct rte_flow_item_tcp tcp_mask; + struct rte_flow_item tcp_item; + struct rte_flow_item_sctp sctp_spec; + struct rte_flow_item_sctp sctp_mask; + struct rte_flow_item sctp_item; + struct rte_flow_item pattern_ipv4_5tuple[4]; + struct rte_flow_classify *flow_classify; + uint8_t ipv4_proto; + + /* set up parameters for validate and create */ + memset(&ipv4_spec, 0, sizeof(ipv4_spec)); + ipv4_spec.hdr.next_proto_id = ntuple_filter->proto; + ipv4_spec.hdr.src_addr = ntuple_filter->src_ip; + ipv4_spec.hdr.dst_addr = ntuple_filter->dst_ip; + ipv4_proto = ipv4_spec.hdr.next_proto_id; + + memset(&ipv4_mask, 0, sizeof(ipv4_mask)); + 
ipv4_mask.hdr.next_proto_id = ntuple_filter->proto_mask; + ipv4_mask.hdr.src_addr = ntuple_filter->src_ip_mask; + ipv4_mask.hdr.src_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.src_addr); + ipv4_mask.hdr.dst_addr = ntuple_filter->dst_ip_mask; + ipv4_mask.hdr.dst_addr = + convert_depth_to_bitmask(ipv4_mask.hdr.dst_addr); + + switch (ipv4_proto) { + case PROTO_UDP: + if (udp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: UDP classify rule capacity %d reached\n", + udp_num_classify); + ret = -1; + break; + } + ipv4_udp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_udp_item.spec = &ipv4_spec; + ipv4_udp_item.mask = &ipv4_mask; + ipv4_udp_item.last = NULL; + + udp_spec.hdr.src_port = ntuple_filter->src_port; + udp_spec.hdr.dst_port = ntuple_filter->dst_port; + udp_spec.hdr.dgram_len = 0; + udp_spec.hdr.dgram_cksum = 0; + + udp_mask.hdr.src_port = ntuple_filter->src_port_mask; + udp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + udp_mask.hdr.dgram_len = 0; + udp_mask.hdr.dgram_cksum = 0; + + udp_item.type = RTE_FLOW_ITEM_TYPE_UDP; + udp_item.spec = &udp_spec; + udp_item.mask = &udp_mask; + udp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_udp_item; + pattern_ipv4_5tuple[2] = udp_item; + break; + case PROTO_TCP: + if (tcp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: TCP classify rule capacity %d reached\n", + tcp_num_classify); + ret = -1; + break; + } + ipv4_tcp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_tcp_item.spec = &ipv4_spec; + ipv4_tcp_item.mask = &ipv4_mask; + ipv4_tcp_item.last = NULL; + + memset(&tcp_spec, 0, sizeof(tcp_spec)); + tcp_spec.hdr.src_port = ntuple_filter->src_port; + tcp_spec.hdr.dst_port = ntuple_filter->dst_port; + + memset(&tcp_mask, 0, sizeof(tcp_mask)); + tcp_mask.hdr.src_port = ntuple_filter->src_port_mask; + tcp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + + tcp_item.type = RTE_FLOW_ITEM_TYPE_TCP; + tcp_item.spec = &tcp_spec; + tcp_item.mask = &tcp_mask; + 
tcp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_tcp_item; + pattern_ipv4_5tuple[2] = tcp_item; + break; + case PROTO_SCTP: + if (sctp_num_classify >= MAX_NUM_CLASSIFY) { + printf( + "\nINFO: SCTP classify rule capacity %d reached\n", + sctp_num_classify); + ret = -1; + break; + } + ipv4_sctp_item.type = RTE_FLOW_ITEM_TYPE_IPV4; + ipv4_sctp_item.spec = &ipv4_spec; + ipv4_sctp_item.mask = &ipv4_mask; + ipv4_sctp_item.last = NULL; + + sctp_spec.hdr.src_port = ntuple_filter->src_port; + sctp_spec.hdr.dst_port = ntuple_filter->dst_port; + sctp_spec.hdr.cksum = 0; + sctp_spec.hdr.tag = 0; + + sctp_mask.hdr.src_port = ntuple_filter->src_port_mask; + sctp_mask.hdr.dst_port = ntuple_filter->dst_port_mask; + sctp_mask.hdr.cksum = 0; + sctp_mask.hdr.tag = 0; + + sctp_item.type = RTE_FLOW_ITEM_TYPE_SCTP; + sctp_item.spec = &sctp_spec; + sctp_item.mask = &sctp_mask; + sctp_item.last = NULL; + + attr.priority = ntuple_filter->priority; + pattern_ipv4_5tuple[1] = ipv4_sctp_item; + pattern_ipv4_5tuple[2] = sctp_item; + break; + default: + break; + } + + if (ret == -1) + return 0; + + attr.ingress = 1; + pattern_ipv4_5tuple[0] = eth_item; + pattern_ipv4_5tuple[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_ipv4_5tuple, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, + "flow classify validate failed ipv4_proto = %u\n", + ipv4_proto); + + flow_classify = rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_ipv4_5tuple, + actions, &error); + if (flow_classify == NULL) + rte_exit(EXIT_FAILURE, + "flow classify create failed ipv4_proto = %u\n", + ipv4_proto); + + switch (ipv4_proto) { + case PROTO_UDP: + udp_flow_classify[udp_num_classify] = flow_classify; + udp_num_classify++; + break; + case PROTO_TCP: + tcp_flow_classify[tcp_num_classify] = flow_classify; + tcp_num_classify++; + break; + case PROTO_SCTP: + 
sctp_flow_classify[sctp_num_classify] = flow_classify; + sctp_num_classify++; + break; + default: + break; + } + return 0; +} + +static int +add_rules(const char *rule_path) +{ + FILE *fh; + char buff[LINE_MAX]; + unsigned int i = 0; + unsigned int total_num = 0; + struct rte_eth_ntuple_filter ntuple_filter; + + fh = fopen(rule_path, "rb"); + if (fh == NULL) + rte_exit(EXIT_FAILURE, "%s: Open %s failed\n", __func__, + rule_path); + + fseek(fh, 0, SEEK_SET); + + i = 0; + while (fgets(buff, LINE_MAX, fh) != NULL) { + i++; + + if (is_bypass_line(buff)) + continue; + + if (total_num >= FLOW_CLASSIFY_MAX_RULE_NUM - 1) { + printf("\nINFO: classify rule capacity %d reached\n", + total_num); + break; + } + + if (parse_ipv4_5tuple_rule(buff, &ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, + "%s Line %u: parse rules error\n", + rule_path, i); + + if (add_classify_rule(&ntuple_filter) != 0) + rte_exit(EXIT_FAILURE, "add rule error\n"); + + total_num++; + } + + fclose(fh); + return 0; +} + +/* display usage */ +static void +print_usage(const char *prgname) +{ + printf("%s [EAL options] -- --"OPTION_RULE_IPV4"=FILE: " + "specify the ipv4 rules entries file.\n" + "Each rule occupies one line in the file.\n", + prgname); +} + +/* Parse the argument given in the command line of the application */ +static int +parse_args(int argc, char **argv) +{ + int opt, ret; + char **argvopt; + int option_index; + char *prgname = argv[0]; + static struct option lgopts[] = { + {OPTION_RULE_IPV4, 1, 0, 0}, + {NULL, 0, 0, 0} + }; + + argvopt = argv; + + while ((opt = getopt_long(argc, argvopt, "", + lgopts, &option_index)) != EOF) { + + switch (opt) { + /* long options */ + case 0: + if (!strncmp(lgopts[option_index].name, + OPTION_RULE_IPV4, + sizeof(OPTION_RULE_IPV4))) + parm_config.rule_ipv4_name = optarg; + break; + default: + print_usage(prgname); + return -1; + } + } + + if (optind >= 0) + argv[optind-1] = prgname; + + ret = optind-1; + optind = 1; /* reset getopt lib */ + return ret; +} + 
+/* + * The main function, which does initialization and calls the per-lcore + * functions. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + uint8_t nb_ports; + uint8_t portid; + int ret; + int socket_id; + struct rte_table_acl_params table_acl_params; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* parse application arguments (after the EAL ones) */ + ret = parse_args(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Invalid flow_classify parameters\n"); + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. */ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. 
Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) + rte_exit(EXIT_FAILURE, "Failed to create table_acl\n"); + + /* read file of IPv4 5 tuple rules and initialise parameters + * for rte_flow_classify_validate and rte_flow_classify_create + */ + + if (add_rules(parm_config.rule_ipv4_name)) + rte_exit(EXIT_FAILURE, "Failed to add rules\n"); + + /* Call lcore_main on the master core only. */ + lcore_main(); + + return 0; +} diff --git a/examples/flow_classify/ipv4_rules_file.txt b/examples/flow_classify/ipv4_rules_file.txt new file mode 100644 index 0000000..262763d --- /dev/null +++ b/examples/flow_classify/ipv4_rules_file.txt @@ -0,0 +1,14 @@ +#file format: +#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority +# +2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1 +9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2 +9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3 +6.7.8.9/24 2.3.4.5/24 32 : 0xffff 33 : 0xffff 132/0xff 4 +6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5 +6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6 +6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7 +6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8 +#error rules +#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9 \ No newline at end of file -- 1.9.1
* [dpdk-dev] [PATCH v3 5/5] test: flow classify library unit tests 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " classify " library Bernard Iremonger ` (4 preceding siblings ...) 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 4/5] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-08-31 14:54 ` Bernard Iremonger 5 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-31 14:54 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by librte_flow_classify APIs. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify APIs: add bad parameter tests add bad pattern tests add bad action tests add good parameter tests Initialise IPv4 UDP traffic for use by the test for rte_flow_classify_query. add entry_size param to classify_create change acl field offsets Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 494 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 186 ++++++++++++++++ 3 files changed, 681 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index 42d9a49..073e1ed 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -106,6 +106,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..5a45c6b --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,494 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. 
All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +static void *table_acl; +static uint32_t entry_size; + +/* + * test functions by passing invalid or + * non-workable parameters. + */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *classify; + int ret; + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_validate with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, NULL); + if (classify) { + printf("Line %i: flow_classify_create with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_destroy with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_query with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_validate with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, &error); + if (classify) { + printf("Line %i: flow_classify_create with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy with NULL param " + 
"should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_query with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate " + "should not have failed!\n", __LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy " + "should not have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item_bad; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", 
__LINE__); + return -1; + } + + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + flow_classify = 
rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +init_udp_ipv4_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, 
dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_udp_ipv4_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_udp_ipv4_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate " + "should not have failed!\n", 
__LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &udp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy " + "should not have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + int socket_id = 0; + int ret; + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) { + printf("Line %i: f_create has failed!\n", __LINE__); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..af04dd3 --- /dev/null +++ 
b/test/test/test_flow_classify.h @@ -0,0 +1,186 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* first sample UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 17, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item pattern_udp_1[4]; + +/* sample actions: + * "actions 
count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* sample error */ +static struct rte_flow_error error; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v2 1/6] librte_table: fix acl entry add and delete functions 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " Bernard Iremonger @ 2017-08-25 16:10 ` Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 2/6] librte_table: fix acl lookup function Bernard Iremonger ` (4 subsequent siblings) 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-25 16:10 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_entry_add() function was returning data from acl_memory instead of acl_rule_memory. It was also returning data from entry instead of entry_ptr. The rte_table_acl_entry_delete() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index 3c05e4a..e84b437 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -316,8 +316,7 @@ struct rte_table_acl { if (status == 0) { *key_found = 1; *entry_ptr = &acl->memory[i * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); - + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } } @@ -353,8 +352,8 @@ struct rte_table_acl { rte_acl_free(acl->ctx); acl->ctx = ctx; *key_found = 0; - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } @@ -435,7 +434,7 @@ struct rte_table_acl { acl->ctx = ctx; *key_found = 1; if (entry != NULL) - memcpy(entry, &acl->memory[pos * 
acl->entry_size], + memcpy(entry, &acl->acl_rule_memory[pos * acl->entry_size], acl->entry_size); return 0; -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v2 2/6] librte_table: fix acl lookup function 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 0/6] flow " Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 1/6] librte_table: fix acl entry add and delete functions Bernard Iremonger @ 2017-08-25 16:10 ` Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow Bernard Iremonger ` (3 subsequent siblings) 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-25 16:10 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_lookup() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index e84b437..258916d 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -794,7 +794,7 @@ struct rte_table_acl { if (action_table_pos != 0) { pkts_out_mask |= pkt_mask; entries[pkt_pos] = (void *) - &acl->memory[action_table_pos * + &acl->acl_rule_memory[action_table_pos * acl->entry_size]; rte_prefetch0(entries[pkt_pos]); } -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger ` (2 preceding siblings ...) 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 2/6] librte_table: fix acl lookup function Bernard Iremonger @ 2017-08-25 16:10 ` Bernard Iremonger 2017-08-30 12:39 ` Adrien Mazarguil 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 4/6] librte_flow_classify: add librte_flow_classify library Bernard Iremonger ` (2 subsequent siblings) 6 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-08-25 16:10 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Initialise the next_proto_id mask in the default mask for rte_flow_item_type_ipv4. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_ether/rte_flow.h | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h index bba6169..59c42fa 100644 --- a/lib/librte_ether/rte_flow.h +++ b/lib/librte_ether/rte_flow.h @@ -489,6 +489,7 @@ struct rte_flow_item_ipv4 { #ifndef __cplusplus static const struct rte_flow_item_ipv4 rte_flow_item_ipv4_mask = { .hdr = { + .next_proto_id = 0xff, .src_addr = RTE_BE32(0xffffffff), .dst_addr = RTE_BE32(0xffffffff), }, -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow Bernard Iremonger @ 2017-08-30 12:39 ` Adrien Mazarguil 2017-08-30 13:28 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Adrien Mazarguil @ 2017-08-30 12:39 UTC (permalink / raw) To: Bernard Iremonger Cc: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu Hi Bernard, On Fri, Aug 25, 2017 at 05:10:35PM +0100, Bernard Iremonger wrote: > Initialise the next_proto_id mask in the default mask for > rte_flow_item_type_ipv4. > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- > lib/librte_ether/rte_flow.h | 1 + > 1 file changed, 1 insertion(+) > > diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h > index bba6169..59c42fa 100644 > --- a/lib/librte_ether/rte_flow.h > +++ b/lib/librte_ether/rte_flow.h > @@ -489,6 +489,7 @@ struct rte_flow_item_ipv4 { > #ifndef __cplusplus > static const struct rte_flow_item_ipv4 rte_flow_item_ipv4_mask = { > .hdr = { > + .next_proto_id = 0xff, Please don't change the default mask to cover this field as it means all rte_flow-based applications that do not provide a specific mask (.mask == NULL) have to always set this field to some valid value. This is not a convenient default behavior. > .src_addr = RTE_BE32(0xffffffff), > .dst_addr = RTE_BE32(0xffffffff), > }, > -- > 1.9.1 > I'll have to NACK this change. The example application should define its own mask if next_proto_id must be always set. -- Adrien Mazarguil 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow 2017-08-30 12:39 ` Adrien Mazarguil @ 2017-08-30 13:28 ` Iremonger, Bernard 2017-08-30 14:39 ` Adrien Mazarguil 0 siblings, 1 reply; 145+ messages in thread From: Iremonger, Bernard @ 2017-08-30 13:28 UTC (permalink / raw) To: Adrien Mazarguil Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian Hi Adrien, > -----Original Message----- > From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com] > Sent: Wednesday, August 30, 2017 1:39 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com> > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com> > Subject: Re: [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for > rte_flow > > Hi Bernard, > > On Fri, Aug 25, 2017 at 05:10:35PM +0100, Bernard Iremonger wrote: > > Initialise the next_proto_id mask in the default mask for > > rte_flow_item_type_ipv4. > > > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > > --- > > lib/librte_ether/rte_flow.h | 1 + > > 1 file changed, 1 insertion(+) > > > > diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h > > index bba6169..59c42fa 100644 > > --- a/lib/librte_ether/rte_flow.h > > +++ b/lib/librte_ether/rte_flow.h > > @@ -489,6 +489,7 @@ struct rte_flow_item_ipv4 { #ifndef __cplusplus > > static const struct rte_flow_item_ipv4 rte_flow_item_ipv4_mask = { > > .hdr = { > > + .next_proto_id = 0xff, > > Please don't change the default mask to cover this field as it means > all rte_flow-based applications that do not provide a specific mask > (.mask == NULL) have to always set this field to some valid value. > This is not a convenient default behavior. > > > .src_addr = RTE_BE32(0xffffffff), > > .dst_addr = RTE_BE32(0xffffffff), > > }, > > -- > > 1.9.1 > > > > I'll have to NACK this change. 
The example application should define its own > mask if next_proto_id must be always set. Surely for IPv4 the next_proto_id will always be set to TCP (6), UDP (17) or SCTP (132). If the mask is 0 for next_proto_id then it is not possible to match on the protocol. I can define an ipv4_mask in the application if you insist. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow 2017-08-30 13:28 ` Iremonger, Bernard @ 2017-08-30 14:39 ` Adrien Mazarguil 2017-08-30 15:12 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Adrien Mazarguil @ 2017-08-30 14:39 UTC (permalink / raw) To: Iremonger, Bernard Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian On Wed, Aug 30, 2017 at 01:28:04PM +0000, Iremonger, Bernard wrote: > Hi Adrien, > > > -----Original Message----- > > From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com] > > Sent: Wednesday, August 30, 2017 1:39 PM > > To: Iremonger, Bernard <bernard.iremonger@intel.com> > > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > > <cristian.dumitrescu@intel.com> > > Subject: Re: [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for > > rte_flow > > > > Hi Bernard, > > > > On Fri, Aug 25, 2017 at 05:10:35PM +0100, Bernard Iremonger wrote: > > > Initialise the next_proto_id mask in the default mask for > > > rte_flow_item_type_ipv4. > > > > > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > > > --- > > > lib/librte_ether/rte_flow.h | 1 + > > > 1 file changed, 1 insertion(+) > > > > > > diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h > > > index bba6169..59c42fa 100644 > > > --- a/lib/librte_ether/rte_flow.h > > > +++ b/lib/librte_ether/rte_flow.h > > > @@ -489,6 +489,7 @@ struct rte_flow_item_ipv4 { #ifndef __cplusplus > > > static const struct rte_flow_item_ipv4 rte_flow_item_ipv4_mask = { > > > .hdr = { > > > + .next_proto_id = 0xff, > > > > Please don't change the default mask to cover this field as it means > > all rte_flow-based applications that do not provide a specific mask > > (.mask == NULL) have to always set this field to some valid value. > > This is not a convenient default behavior. 
> > > > > .src_addr = RTE_BE32(0xffffffff), > > > .dst_addr = RTE_BE32(0xffffffff), > > > }, > > > -- > > > 1.9.1 > > > > > > > I'll have to NACK this change. The example application should define its own > > mask if next_proto_id must be always set. > > Surely for IPv4 the next_proto_id will always be set to TCP(6) , UDP(17) or SCTP (132). > If the mask is 0 for next_proto_id then it is not possible to match on the protocol. Applications normally match the next protocol implicitly by providing it as the subsequent item (e.g. in testpmd by writing "eth / ip / tcp" instead of "eth / ip next_proto_id spec 6"). This change forces users to write "eth / ip next_proto_id spec 6 / tcp" or face an error due to an uninitialized next_proto_id (which might be garbage due to uninitialized memory, not just 0). > I can define an ipv4_mask in the application if you insist. Yes please, a better suggestion would be to rely on the subsequent item type and not on the value of this field. These default masks must only cover basic fields like source/destination addresses and ports for most protocols. -- Adrien Mazarguil 6WIND ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow 2017-08-30 14:39 ` Adrien Mazarguil @ 2017-08-30 15:12 ` Iremonger, Bernard 0 siblings, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-08-30 15:12 UTC (permalink / raw) To: Adrien Mazarguil Cc: dev, Yigit, Ferruh, Ananyev, Konstantin, Dumitrescu, Cristian Hi Adrien, > -----Original Message----- > From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com] > Sent: Wednesday, August 30, 2017 3:39 PM > To: Iremonger, Bernard <bernard.iremonger@intel.com> > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com> > Subject: Re: [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for > rte_flow > > On Wed, Aug 30, 2017 at 01:28:04PM +0000, Iremonger, Bernard wrote: > > Hi Adrien, > > > > > -----Original Message----- > > > From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com] > > > Sent: Wednesday, August 30, 2017 1:39 PM > > > To: Iremonger, Bernard <bernard.iremonger@intel.com> > > > Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > > > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > > > <cristian.dumitrescu@intel.com> > > > Subject: Re: [PATCH v2 3/6] librte_ether: initialise IPv4 protocol > > > mask for rte_flow > > > > > > Hi Bernard, > > > > > > On Fri, Aug 25, 2017 at 05:10:35PM +0100, Bernard Iremonger wrote: > > > > Initialise the next_proto_id mask in the default mask for > > > > rte_flow_item_type_ipv4. 
> > > > > > > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > > > > --- > > > > lib/librte_ether/rte_flow.h | 1 + > > > > 1 file changed, 1 insertion(+) > > > > > > > > diff --git a/lib/librte_ether/rte_flow.h > > > > b/lib/librte_ether/rte_flow.h index bba6169..59c42fa 100644 > > > > --- a/lib/librte_ether/rte_flow.h > > > > +++ b/lib/librte_ether/rte_flow.h > > > > @@ -489,6 +489,7 @@ struct rte_flow_item_ipv4 { #ifndef > > > > __cplusplus static const struct rte_flow_item_ipv4 > rte_flow_item_ipv4_mask = { > > > > .hdr = { > > > > + .next_proto_id = 0xff, > > > > > > Please don't change the default mask to cover this field as it means > > > all rte_flow-based applications that do not provide a specific mask > > > (.mask == NULL) have to always set this field to some valid value. > > > This is not a convenient default behavior. > > > > > > > .src_addr = RTE_BE32(0xffffffff), > > > > .dst_addr = RTE_BE32(0xffffffff), > > > > }, > > > > -- > > > > 1.9.1 > > > > > > > > > > I'll have to NACK this change. The example application should define > > > its own mask if next_proto_id must be always set. > > > > Surely for IPv4 the next_proto_id will always be set to TCP(6) , UDP(17) or > SCTP (132). > > If the mask is 0 for next_proto_id then it is not possible to match on the > protocol. > > Applications normally match the next protocol implicitly by providing it as the > subsequent item (e.g. in testpmd by writing "eth / ip / tcp" instead of "eth / > ip next_proto_id spec 6"). > > This change forces users to write "eth / ip next_proto_id spec 6 / tcp" or face > an error due to an uninitialized next_proto_id (which might be garbage due > to uninitialized memory, not just 0). > > > I can define an ipv4_mask in the application if you insist. > > Yes please, a better suggestion would be to rely on the subsequent item > type and not on the value of this field. 
> > These default masks must only cover basic fields like source/destination > addresses and ports for most protocols. > > -- > Adrien Mazarguil > 6WIND I will drop this patch and send a v3 patch set. I will define an ipv4_mask in the application. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v2 4/6] librte_flow_classify: add librte_flow_classify library 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger ` (3 preceding siblings ...) 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow Bernard Iremonger @ 2017-08-25 16:10 ` Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 5/6] examples/flow_classify: flow classify sample application Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 6/6] test: flow classify library unit tests Bernard Iremonger 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-25 16:10 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following library APIs are implemented: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query The following librte_table ACL APIs are used: f_create to create a table ACL. f_add to add an ACL rule to the table. f_del to delete an ACL rule from the table. f_lookup to match packets with the ACL rules. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 465 ++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 207 ++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- 12 files changed, 1366 insertions(+), 1 deletion(-) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/config/common_base b/config/common_base index 5e97a08..e378e0a 100644 --- a/config/common_base +++ b/config/common_base @@ -657,6 +657,12 @@ CONFIG_RTE_LIBRTE_GRO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 19e0d4f..a2fa281 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -105,6 +105,7 @@ The public API headers are grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 823554f..4e43a66 100644 --- a/doc/api/doxy-api.conf +++ b/doc/api/doxy-api.conf 
@@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_hash \ lib/librte_ip_frag \ diff --git a/lib/Makefile b/lib/Makefile index 86caba1..21fc3b0 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -82,6 +82,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index ec8dba7..f975bde 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -87,6 +87,7 @@ struct rte_logs { #define RTE_LOGTYPE_CRYPTODEV 17 /**< Log related to cryptodev. */ #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ +#define RTE_LOGTYPE_CLASSIFY 20 /**< Log related to flow classify. */ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..de1c8fa --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,465 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; +static uint32_t unique_id = 1; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct ipv4_5tuple_data { + uint16_t priority; /**< flow API uses priority 0 to 8, 0 is highest */ + uint32_t userdata; /**< value returned for match */ + uint8_t tcp_flags; /**< tcp_flags only meaningful TCP protocol */ +}; + +struct rte_flow_classify { + uint32_t id; /**< unique ID of classify object */ + enum rte_flow_classify_type type; /**< classify type */ + struct rte_flow_action action; /**< action when match found */ + struct ipv4_5tuple_data flow_extra_data; /** extra rule data */ + struct rte_table_acl_rule_add_params key_add; /**< add ACL rule key */ + struct rte_table_acl_rule_delete_params + key_del; /**< delete ACL rule key */ + int key_found; /**< ACL rule key found in table */ + void *entry; /**< pointer to buffer to hold ACL rule key */ + void *entry_ptr; /**< handle to the table entry for the ACL rule key */ +}; + +/* number of categories in an ACL context */ +#define FLOW_CLASSIFY_NUM_CATEGORY 1 + +/* macros for mbuf processing */ +#define MAX_PKT_BURST 32 +#define OFF_ETHHEAD (sizeof(struct ether_hdr)) +#define OFF_IPV42PROTO (offsetof(struct 
ipv4_hdr, next_proto_id)) +#define MBUF_IPV4_2PROTO(m) \ + rte_pktmbuf_mtod_offset((m), uint8_t *, OFF_ETHHEAD + OFF_IPV42PROTO) + +struct mbuf_search { + const uint8_t *data_ipv4[MAX_PKT_BURST]; + struct rte_mbuf *m_ipv4[MAX_PKT_BURST]; + uint32_t res_ipv4[MAX_PKT_BURST]; + int num_ipv4; +}; + +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + (void) table_handle; + + if (!error) + return -EINVAL; + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG 
+#define uint32_t_to_char(ip, a, b, c, d) do {\ + *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_add: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + 
key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static struct rte_flow_classify * +allocate_5tuple(void) +{ + struct rte_flow_classify *flow_classify; + + flow_classify = malloc(sizeof(struct rte_flow_classify)); + if (!flow_classify) + return flow_classify; + + memset(flow_classify, 0, sizeof(struct rte_flow_classify)); + flow_classify->id = unique_id++; + flow_classify->type = RTE_FLOW_CLASSIFY_TYPE_5TUPLE; + memcpy(&flow_classify->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + flow_classify->flow_extra_data.priority = ntuple_filter.priority; + flow_classify->flow_extra_data.tcp_flags = ntuple_filter.tcp_flags; + + /* key add values */ + flow_classify->key_add.priority = ntuple_filter.priority; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + + flow_classify->key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + flow_classify->key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + + flow_classify->key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + flow_classify->key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_add(&flow_classify->key_add); +#endif + + /* key delete values */ + memcpy(&flow_classify->key_del.field_value[PROTO_FIELD_IPV4], + &flow_classify->key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct 
rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_delete(&flow_classify->key_del); +#endif + return flow_classify; +} + +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_classify *flow_classify; + struct rte_acl_rule *acl_rule; + int ret; + + if (!error) + return NULL; + + if (!table_handle) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "NULL table_handle."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_validate(table_handle, attr, pattern, + actions, error); + if (ret < 0) + return NULL; + + flow_classify = allocate_5tuple(); + if (!flow_classify) + return NULL; + + flow_classify->entry = malloc(entry_size); + if (!flow_classify->entry) { + free(flow_classify); + return NULL; + } + + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify->key_add, + flow_classify->entry, &flow_classify->key_found, + &flow_classify->entry_ptr); + if (ret) { + free(flow_classify->entry); + free(flow_classify); + return NULL; + } + acl_rule = flow_classify->entry; + flow_classify->flow_extra_data.userdata = acl_rule->data.userdata; + + return flow_classify; +} + +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error) +{ + int ret; + int key_found; + + if (!error) + return -EINVAL; + + if 
(!flow_classify || !table_handle) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + ret = rte_table_acl_ops.f_delete(table_handle, + &flow_classify->key_del, &key_found, + flow_classify->entry); + if ((ret == 0) && key_found) { + free(flow_classify->entry); + free(flow_classify); + } else + ret = -1; + return ret; +} + +static int +flow_match(void *table, struct rte_mbuf **pkts_in, const uint16_t nb_pkts, + uint64_t *count, uint32_t userdata) +{ + int ret = -1; + int i; + uint64_t pkts_mask; + uint64_t lookup_hit_mask; + struct rte_acl_rule *entries[RTE_PORT_IN_BURST_SIZE_MAX]; + + if (nb_pkts) { + pkts_mask = RTE_LEN2MASK(nb_pkts, uint64_t); + ret = rte_table_acl_ops.f_lookup(table, pkts_in, + pkts_mask, &lookup_hit_mask, (void **)entries); + if (!ret) { + for (i = 0; i < nb_pkts; i++) { + if ((lookup_hit_mask & (1ULL << i)) && + entries[i]->data.userdata == userdata) + (*count)++; /* match found */ + } + if (*count == 0) + ret = -1; + } else + ret = -1; + } + return ret; +} + +static int +action_apply(const struct rte_flow_classify *flow_classify, + struct rte_flow_classify_stats *stats, uint64_t count) +{ + struct rte_flow_classify_5tuple_stats *ntuple_stats; + + switch (flow_classify->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + ntuple_stats = + (struct rte_flow_classify_5tuple_stats *)stats->stats; + ntuple_stats->counter1 = count; + stats->used_space = 1; + break; + default: + return -ENOTSUP; + } + + return 0; +} + +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error) +{ + uint64_t count = 0; + int ret = -EINVAL; + + if (!error) + return ret; + + if (!table_handle || !flow_classify || !pkts || !stats) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + 
} + + if ((stats->available_space == 0) || (nb_pkts == 0)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return ret; + } + + ret = flow_match(table_handle, pkts, nb_pkts, &count, + flow_classify->flow_extra_data.userdata); + if (ret == 0) + ret = action_apply(flow_classify, stats, count); + + return ret; +} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..2b200fb --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,207 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * Application should define the flow and measurement criteria (action) for it. + * + * Library doesn't maintain any flow records itself, instead flow information is + * returned to upper layer only for given packets. + * + * It is application's responsibility to call rte_flow_classify_query() + * for group of packets, just after receiving them or before transmitting them. + * Application should provide the flow type interested in, measurement to apply + * to that flow in rte_flow_classify_create() API, and should provide + * rte_flow_classify object and storage to put results in + * rte_flow_classify_query() API. + * + * Usage: + * - application calls rte_flow_classify_create() to create a rte_flow_classify + * object. + * - application calls rte_flow_classify_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * convert packet information to flow information with some measurements. 
+ * - rte_flow_classify objects can be destroyed when no longer needed + * via rte_flow_classify_destroy() + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +enum rte_flow_classify_type { + RTE_FLOW_CLASSIFY_TYPE_NONE, /**< no type */ + RTE_FLOW_CLASSIFY_TYPE_5TUPLE, /**< IPv4 5tuple type */ +}; + +struct rte_flow_classify; + +/** + * Flow stats + * + * For a single action an array of stats can be returned by the API; each + * packet can return at most one stat. + * + * Storage for the stats is provided by the application; the library needs to + * know the available space, and returns the amount of space used. + * + * The stats type depends on the measurement (action) requested by the + * application. + * + */ +struct rte_flow_classify_stats { + const unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +struct rte_flow_classify_5tuple_stats { + uint64_t counter1; /**< count of packets that match 5-tuple pattern */ +}; + +/** + * Create a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] entry_size + * Size of ACL rule + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END pattern item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. + */ +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + uint32_t entry_size, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Validate a flow classify rule. 
+ * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END pattern item). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Destroy a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Flow rule handle to destroy + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error); + +/** + * Get flow classification stats for given packets. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Pointer to Flow rule object + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] stats + * To store stats defined by action + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. 
+ */ +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..e5a3885 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first 
VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. 
*/ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do { \ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++; \ + item = pattern + index; \ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do { \ + act = actions + index; \ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++; \ + act = actions + index; \ + } \ + } while (0) + +/** + * Please be aware there is an assumption for all the parsers. + * rte_flow_item is using big endian, rte_flow_attr and + * rte_flow_action are using CPU order. + * Because the pattern is used to describe the packets, + * normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is an n-tuple rule. + * If so, also fill in the n-tuple filter info. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP, TCP or SCTP. + * The next not void item must be END. 
+ * action: + * The first not void action should be QUEUE. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * other members in mask and spec should set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -rte_errno; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -rte_errno; + } + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -rte_errno; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /*Not supported last point for range*/ + if 
(item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -rte_errno; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + item, "Not supported action."); + return -rte_errno; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only support ingress."); + return -rte_errno; + } + + /* not supported */ + if (attr->egress) { + 
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Not support egress."); + return -rte_errno; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Error priority."); + return -rte_errno; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..e2c9ecf --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,10 @@ +DPDK_17.08 { + global: + + rte_flow_classify_create; + rte_flow_classify_destroy; + rte_flow_classify_query; + rte_flow_classify_validate; + + 
local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index c25fdd9..909ab95 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port @@ -84,7 +85,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile _LDLIBS-y += --whole-archive - _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash _LDLIBS-$(CONFIG_RTE_LIBRTE_VHOST) += -lrte_vhost _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v2 5/6] examples/flow_classify: flow classify sample application 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger ` (4 preceding siblings ...) 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 4/6] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-08-25 16:10 ` Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 6/6] test: flow classify library unit tests Bernard Iremonger 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-25 16:10 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify API's: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query It sets up the IPv4 ACL field definitions. It creates table_acl using the librte_table API. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 +++ examples/flow_classify/flow_classify.c | 634 +++++++++++++++++++++++++++++++++ 2 files changed, 691 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all source are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_main.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..cc64e3d --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,634 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 +#define MAX_NUM_CLASSIFY 5 +#define FLOW_CLASSIFY_MAX_RULE_NUM 10 + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +static void *table_acl; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* flow classify data */ +static struct rte_flow_classify *udp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *tcp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *sctp_flow_classify[MAX_NUM_CLASSIFY]; + +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = 
(void **)&tcp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* first sample UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 17, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item pattern_udp_1[4]; + +/* second sample UDP pattern: + * "eth / ipv4 src is 9.9.9.3 dst is 9.9.9.7 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_2 = { + { 0, 0, 0, 0, 0, 0, 17, 0, IPv4(9, 9, 9, 3), IPv4(9, 9, 9, 7)} +}; +static struct rte_flow_item_udp udp_spec_2 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item ipv4_udp_item_2 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_2, 0, &rte_flow_item_ipv4_mask}; +static struct rte_flow_item udp_item_2 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_2, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item pattern_udp_2[4]; + +/* first sample TCP pattern: + * "eth / ipv4 src spec 9.9.9.3 src mask 255.255.255.0 dst spec 9.9.9.7 dst + * 
mask 255.255.255.0/ tcp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 6, 0, IPv4(9, 9, 9, 3), IPv4(9, 9, 9, 7)} +}; +static struct rte_flow_item_tcp tcp_spec_1 = { + { 32, 33, 0, 0, 0, 0, 0, 0, 0 } +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +static struct rte_flow_item pattern_tcp_1[4]; + +/* second sample TCP pattern: + * "eth / ipv4 src is 9.9.8.3 dst is 9.9.8.7 / tcp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_2 = { + { 0, 0, 0, 0, 0, 0, 6, 0, IPv4(9, 9, 8, 3), IPv4(9, 9, 8, 7)} +}; +static struct rte_flow_item_tcp tcp_spec_2 = { + { 32, 33, 0, 0, 0, 0, 0, 0, 0 } +}; + +static struct rte_flow_item ipv4_tcp_item_2 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_2, 0, &rte_flow_item_ipv4_mask}; +static struct rte_flow_item tcp_item_2 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_2, 0, &rte_flow_item_tcp_mask}; + +static struct rte_flow_item pattern_tcp_2[4]; + +/* first sample SCTP pattern: + * "eth / ipv4 src is 6.7.8.9 dst is 2.3.4.5 / sctp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 132, 0, IPv4(6, 7, 8, 9), IPv4(2, 3, 4, 5)} +}; +static struct rte_flow_item_sctp sctp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &rte_flow_item_ipv4_mask}; +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + +static struct rte_flow_item pattern_sctp_1[4]; + + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct 
rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: + * Based on DPDK skeleton forwarding example. + */ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. */ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port classifying the packets and writing to an output port. 
+ */ +static __attribute__((noreturn)) void +lcore_main(void) +{ + struct rte_flow_error error; + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i; + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) + printf("\n\n"); + printf("WARNING: port %u is on remote NUMA node\n", + port); + printf("to polling thread.\n"); + printf("Performance will not be optimal.\n"); + + printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n", + rte_lcore_id()); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + + /* Get burst of RX packets, from first port of pair. */ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (udp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + udp_flow_classify[i], + bufs, nb_rx, + &udp_classify_stats, &error); + if (ret) + printf( + "udp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "udp rule [%d] counter1=%lu used_space=%d\n\n", + i, udp_ntuple_stats.counter1, + udp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (tcp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + tcp_flow_classify[i], + bufs, nb_rx, + &tcp_classify_stats, &error); + if (ret) + printf( + "tcp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "tcp rule [%d] counter1=%lu used_space=%d\n\n", + i, tcp_ntuple_stats.counter1, + tcp_classify_stats.used_space); + } + } + + for (i = 
0; i < MAX_NUM_CLASSIFY; i++) { + if (sctp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + sctp_flow_classify[i], + bufs, nb_rx, + &sctp_classify_stats, &error); + if (ret) + printf( + "sctp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "sctp rule [%d] counter1=%lu used_space=%d\n\n", + i, sctp_ntuple_stats.counter1, + sctp_classify_stats.used_space); + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. */ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * The main function, which does initialization and calls the per-lcore + * functions. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + struct rte_flow_error error; + uint8_t nb_ports; + uint8_t portid; + int ret; + int udp_num_classify = 0; + int tcp_num_classify = 0; + int sctp_num_classify = 0; + int socket_id; + struct rte_table_acl_params table_acl_params; + uint32_t entry_size; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. 
*/ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) + return -1; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, "udp_1 flow classify validate failed\n"); + + udp_flow_classify[udp_num_classify] = rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_udp_1, actions, + &error); + if (udp_flow_classify[udp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "udp_1 flow classify create failed\n"); + udp_num_classify++; + + attr.ingress = 1; + attr.priority = 2; + pattern_udp_2[0] = eth_item; + pattern_udp_2[1] = ipv4_udp_item_2; + pattern_udp_2[2] = udp_item_2; + pattern_udp_2[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_udp_2, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, "udp_2 flow classify validate failed\n"); + + udp_flow_classify[udp_num_classify] = rte_flow_classify_create( + table_acl, 
entry_size, &attr, pattern_udp_2, actions, + &error); + if (udp_flow_classify[udp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "udp_2 flow classify create failed\n"); + udp_num_classify++; + + attr.ingress = 1; + attr.priority = 3; + pattern_tcp_1[0] = eth_item; + pattern_tcp_1[1] = ipv4_tcp_item_1; + pattern_tcp_1[2] = tcp_item_1; + pattern_tcp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_tcp_1, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, "tcp_1 flow classify validate failed\n"); + + tcp_flow_classify[tcp_num_classify] = rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_tcp_1, actions, + &error); + if (tcp_flow_classify[tcp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "tcp_1 flow classify create failed\n"); + tcp_num_classify++; + + attr.ingress = 1; + attr.priority = 4; + pattern_tcp_2[0] = eth_item; + pattern_tcp_2[1] = ipv4_tcp_item_2; + pattern_tcp_2[2] = tcp_item_2; + pattern_tcp_2[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_tcp_2, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, "tcp_2 flow classify validate failed\n"); + + tcp_flow_classify[tcp_num_classify] = rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_tcp_2, actions, + &error); + if (tcp_flow_classify[tcp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "tcp_2 flow classify create failed\n"); + tcp_num_classify++; + + attr.ingress = 1; + attr.priority = 5; + pattern_sctp_1[0] = eth_item; + pattern_sctp_1[1] = ipv4_sctp_item_1; + pattern_sctp_1[2] = sctp_item_1; + pattern_sctp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_sctp_1, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, + "sctp_1 flow classify validate failed\n"); + + sctp_flow_classify[sctp_num_classify] = 
rte_flow_classify_create( + table_acl, entry_size, &attr, pattern_sctp_1, actions, + &error); + if (sctp_flow_classify[sctp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "sctp_1 flow classify create failed\n"); + sctp_num_classify++; + + ret = rte_flow_classify_destroy(table_acl, sctp_flow_classify[0], + &error); + if (ret) + rte_exit(EXIT_FAILURE, + "sctp_1 flow classify destroy failed\n"); + else { + sctp_num_classify--; + sctp_flow_classify[0] = NULL; + } + /* Call lcore_main on the master core only. */ + lcore_main(); + + return 0; +} -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v2 6/6] test: flow classify library unit tests 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger ` (5 preceding siblings ...) 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 5/6] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-08-25 16:10 ` Bernard Iremonger 6 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-25 16:10 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by librte_flow_classify API's. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify API's: add bad parameter tests add bad pattern tests add bad action tests add good parameter tests Initialise ipv4 udp traffic for use by test for rte_flow_classif_query. add entry_size param to classify_create change acl field offsets Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 494 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 186 ++++++++++++++++ 3 files changed, 681 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index 42d9a49..073e1ed 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -106,6 +106,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..5a45c6b --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,494 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel 
Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +static void *table_acl; +static uint32_t entry_size; + +/* + * test functions by passing invalid or + * non-workable parameters. + */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *classify; + int ret; + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_validate with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, NULL); + if (classify) { + printf("Line %i: flow_classify_create with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_destroy with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_query with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_validate with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + classify = rte_flow_classify_create(NULL, 0, NULL, NULL, NULL, &error); + if (classify) { + printf("Line %i: flow_classify_create with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy with NULL param " + 
"should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_query with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate " + "should not have failed!\n", __LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy " + "should not have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item_bad; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", 
__LINE__); + return -1; + } + + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item_bad; + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + flow_classify = 
rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +init_udp_ipv4_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, 
dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, &pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + int ret = 0; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + printf( + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + ret = -1; + break; + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid]) { + printf("Allocated mbuf pool on socket %d\n", + socketid); + } else { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + ret = -ENOMEM; + break; + } + } + } + return ret; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_udp_ipv4_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_udp_ipv4_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate " + "should not have failed!\n", 
__LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, entry_size, &attr, + pattern_udp_1, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &udp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy " + "should not have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + int socket_id = 0; + int ret; + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + entry_size = RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + entry_size); + if (table_acl == NULL) { + printf("Line %i: f_create has failed!\n", __LINE__); + return -1; + } + printf("Created table_acl for IPv4 five tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..af04dd3 --- --- /dev/null +++ 
b/test/test/test_flow_classify.h @@ -0,0 +1,186 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, src_addr), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + offsetof(struct ipv4_hdr, dst_addr), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, src_port), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr) + + offsetof(struct tcp_hdr, dst_port), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* first sample UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 17, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item pattern_udp_1[4]; + +/* sample actions: + * "actions 
count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* sample error */ +static struct rte_flow_error error; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to header file 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] flow_classify: add librte_flow_classify library Ferruh Yigit 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 0/6] Flow classification library Bernard Iremonger @ 2017-08-23 13:51 ` Bernard Iremonger 2017-08-23 14:13 ` Dumitrescu, Cristian 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 2/6] librte_table: fix acl entry add and delete functions Bernard Iremonger ` (4 subsequent siblings) 7 siblings, 1 reply; 145+ messages in thread From: Bernard Iremonger @ 2017-08-23 13:51 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Move struct rte_table_acl from the rte_table_acl.c file to the rte_table_acl.h file. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 24 ------------------------ lib/librte_table/rte_table_acl.h | 24 ++++++++++++++++++++++++ 2 files changed, 24 insertions(+), 24 deletions(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index 3c05e4a..900f658 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -57,30 +57,6 @@ #endif -struct rte_table_acl { - struct rte_table_stats stats; - - /* Low-level ACL table */ - char name[2][RTE_ACL_NAMESIZE]; - struct rte_acl_param acl_params; /* for creating low level acl table */ - struct rte_acl_config cfg; /* Holds the field definitions (metadata) */ - struct rte_acl_ctx *ctx; - uint32_t name_id; - - /* Input parameters */ - uint32_t n_rules; - uint32_t entry_size; - - /* Internal tables */ - uint8_t *action_table; - struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */ - uint8_t *acl_rule_memory; /* Memory to store the rules */ - - /* Memory to store the action table and stack of free entries */ - uint8_t memory[0] __rte_cache_aligned; -}; - - static void * 
rte_table_acl_create( void *params, diff --git a/lib/librte_table/rte_table_acl.h b/lib/librte_table/rte_table_acl.h index a9cc032..1370b12 100644 --- a/lib/librte_table/rte_table_acl.h +++ b/lib/librte_table/rte_table_acl.h @@ -55,6 +55,30 @@ #include "rte_table.h" + +struct rte_table_acl { + struct rte_table_stats stats; + + /* Low-level ACL table */ + char name[2][RTE_ACL_NAMESIZE]; + struct rte_acl_param acl_params; /* for creating low level acl table */ + struct rte_acl_config cfg; /* Holds the field definitions (metadata) */ + struct rte_acl_ctx *ctx; + uint32_t name_id; + + /* Input parameters */ + uint32_t n_rules; + uint32_t entry_size; + + /* Internal tables */ + uint8_t *action_table; + struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */ + uint8_t *acl_rule_memory; /* Memory to store the rules */ + + /* Memory to store the action table and stack of free entries */ + uint8_t memory[0] __rte_cache_aligned; +}; + /** ACL table parameters */ struct rte_table_acl_params { /** Name */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to header file 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to header file Bernard Iremonger @ 2017-08-23 14:13 ` Dumitrescu, Cristian 2017-08-23 14:32 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Dumitrescu, Cristian @ 2017-08-23 14:13 UTC (permalink / raw) To: Iremonger, Bernard, dev, Yigit, Ferruh, Ananyev, Konstantin, adrien.mazarguil > -----Original Message----- > From: Iremonger, Bernard > Sent: Wednesday, August 23, 2017 2:51 PM > To: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, > Konstantin <konstantin.ananyev@intel.com>; Dumitrescu, Cristian > <cristian.dumitrescu@intel.com>; adrien.mazarguil@6wind.com > Cc: Iremonger, Bernard <bernard.iremonger@intel.com> > Subject: [PATCH v1 1/6] librte_table: move structure to header file > > Move struct librte_table from the rte_table_acl.c to > the rte_table_acl.h file. > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > --- > lib/librte_table/rte_table_acl.c | 24 ------------------------ > lib/librte_table/rte_table_acl.h | 24 ++++++++++++++++++++++++ > 2 files changed, 24 insertions(+), 24 deletions(-) > > diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c > index 3c05e4a..900f658 100644 > --- a/lib/librte_table/rte_table_acl.c > +++ b/lib/librte_table/rte_table_acl.c > @@ -57,30 +57,6 @@ > > #endif > > -struct rte_table_acl { > - struct rte_table_stats stats; > - > - /* Low-level ACL table */ > - char name[2][RTE_ACL_NAMESIZE]; > - struct rte_acl_param acl_params; /* for creating low level acl table */ > - struct rte_acl_config cfg; /* Holds the field definitions (metadata) */ > - struct rte_acl_ctx *ctx; > - uint32_t name_id; > - > - /* Input parameters */ > - uint32_t n_rules; > - uint32_t entry_size; > - > - /* Internal tables */ > - uint8_t *action_table; > - struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */ > - 
uint8_t *acl_rule_memory; /* Memory to store the rules */ > - > - /* Memory to store the action table and stack of free entries */ > - uint8_t memory[0] __rte_cache_aligned; > -}; > - > - > static void * > rte_table_acl_create( > void *params, > diff --git a/lib/librte_table/rte_table_acl.h b/lib/librte_table/rte_table_acl.h > index a9cc032..1370b12 100644 > --- a/lib/librte_table/rte_table_acl.h > +++ b/lib/librte_table/rte_table_acl.h > @@ -55,6 +55,30 @@ > > #include "rte_table.h" > > + > +struct rte_table_acl { > + struct rte_table_stats stats; > + > + /* Low-level ACL table */ > + char name[2][RTE_ACL_NAMESIZE]; > + struct rte_acl_param acl_params; /* for creating low level acl table */ > + struct rte_acl_config cfg; /* Holds the field definitions (metadata) */ > + struct rte_acl_ctx *ctx; > + uint32_t name_id; > + > + /* Input parameters */ > + uint32_t n_rules; > + uint32_t entry_size; > + > + /* Internal tables */ > + uint8_t *action_table; > + struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */ > + uint8_t *acl_rule_memory; /* Memory to store the rules */ > + > + /* Memory to store the action table and stack of free entries */ > + uint8_t memory[0] __rte_cache_aligned; > +}; > + > /** ACL table parameters */ > struct rte_table_acl_params { > /** Name */ > -- > 1.9.1 Hi Bernard, Strong objection here: - This data structure contains the internal data needed to run the ACL table. It is implementation dependent, it is not part of the API. Therefore, it must not be exposed as part of the API, so it has to stay in the .c file as opposed to the .h file. - Users should handle the ACL table through the handle returned by the create function as opposed to accessing this structure directly. Regards, Cristian ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to header file 2017-08-23 14:13 ` Dumitrescu, Cristian @ 2017-08-23 14:32 ` Iremonger, Bernard 2017-08-28 8:48 ` Iremonger, Bernard 0 siblings, 1 reply; 145+ messages in thread From: Iremonger, Bernard @ 2017-08-23 14:32 UTC (permalink / raw) To: Dumitrescu, Cristian, dev, Yigit, Ferruh, Ananyev, Konstantin, adrien.mazarguil Hi Cristian, <snip> > > Subject: [PATCH v1 1/6] librte_table: move structure to header file > > > > Move struct librte_table from the rte_table_acl.c to the > > rte_table_acl.h file. > > > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > > --- > > lib/librte_table/rte_table_acl.c | 24 ------------------------ > > lib/librte_table/rte_table_acl.h | 24 ++++++++++++++++++++++++ > > 2 files changed, 24 insertions(+), 24 deletions(-) > > > > diff --git a/lib/librte_table/rte_table_acl.c > > b/lib/librte_table/rte_table_acl.c > > index 3c05e4a..900f658 100644 > > --- a/lib/librte_table/rte_table_acl.c > > +++ b/lib/librte_table/rte_table_acl.c > > @@ -57,30 +57,6 @@ > > > > #endif > > > > -struct rte_table_acl { > > - struct rte_table_stats stats; > > - > > - /* Low-level ACL table */ > > - char name[2][RTE_ACL_NAMESIZE]; > > - struct rte_acl_param acl_params; /* for creating low level acl table */ > > - struct rte_acl_config cfg; /* Holds the field definitions (metadata) */ > > - struct rte_acl_ctx *ctx; > > - uint32_t name_id; > > - > > - /* Input parameters */ > > - uint32_t n_rules; > > - uint32_t entry_size; > > - > > - /* Internal tables */ > > - uint8_t *action_table; > > - struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */ > > - uint8_t *acl_rule_memory; /* Memory to store the rules */ > > - > > - /* Memory to store the action table and stack of free entries */ > > - uint8_t memory[0] __rte_cache_aligned; > > -}; > > - > > - > > static void * > > rte_table_acl_create( > > void *params, > > diff --git a/lib/librte_table/rte_table_acl.h > > 
b/lib/librte_table/rte_table_acl.h > > index a9cc032..1370b12 100644 > > --- a/lib/librte_table/rte_table_acl.h > > +++ b/lib/librte_table/rte_table_acl.h > > @@ -55,6 +55,30 @@ > > > > #include "rte_table.h" > > > > + > > +struct rte_table_acl { > > + struct rte_table_stats stats; > > + > > + /* Low-level ACL table */ > > + char name[2][RTE_ACL_NAMESIZE]; > > + struct rte_acl_param acl_params; /* for creating low level acl table */ > > + struct rte_acl_config cfg; /* Holds the field definitions (metadata) */ > > + struct rte_acl_ctx *ctx; > > + uint32_t name_id; > > + > > + /* Input parameters */ > > + uint32_t n_rules; > > + uint32_t entry_size; > > + > > + /* Internal tables */ > > + uint8_t *action_table; > > + struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */ > > + uint8_t *acl_rule_memory; /* Memory to store the rules */ > > + > > + /* Memory to store the action table and stack of free entries */ > > + uint8_t memory[0] __rte_cache_aligned; }; > > + > > /** ACL table parameters */ > > struct rte_table_acl_params { > > /** Name */ > > -- > > 1.9.1 > > > Hi Bernard, > > Strong objection here: > - This data structure contains the internal data needed to run the ACL table. It > is implementation dependent, it is not part of the API. Therefore, it must not > be exposed as part of the API, so it has to stay in the .c file as opposed to the > .h file. > - Users should handle the ACL table through the handle returned by the > create function as opposed to accessing this structure directly. > > Regards, > Cristian I will revisit this to see if there is another way. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* Re: [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to header file 2017-08-23 14:32 ` Iremonger, Bernard @ 2017-08-28 8:48 ` Iremonger, Bernard 0 siblings, 0 replies; 145+ messages in thread From: Iremonger, Bernard @ 2017-08-28 8:48 UTC (permalink / raw) To: Dumitrescu, Cristian, dev, Yigit, Ferruh, Ananyev, Konstantin, adrien.mazarguil Cc: Iremonger, Bernard Hi Cristian, > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Iremonger, Bernard > Sent: Wednesday, August 23, 2017 3:32 PM > To: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; dev@dpdk.org; > Yigit, Ferruh <ferruh.yigit@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; adrien.mazarguil@6wind.com > Subject: Re: [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to > header file > > Hi Cristian, > > <snip> > > > > Subject: [PATCH v1 1/6] librte_table: move structure to header file > > > > > > Move struct librte_table from the rte_table_acl.c to the > > > rte_table_acl.h file. 
> > > > > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> > > > --- > > > lib/librte_table/rte_table_acl.c | 24 ------------------------ > > > lib/librte_table/rte_table_acl.h | 24 ++++++++++++++++++++++++ > > > 2 files changed, 24 insertions(+), 24 deletions(-) > > > > > > diff --git a/lib/librte_table/rte_table_acl.c > > > b/lib/librte_table/rte_table_acl.c > > > index 3c05e4a..900f658 100644 > > > --- a/lib/librte_table/rte_table_acl.c > > > +++ b/lib/librte_table/rte_table_acl.c > > > @@ -57,30 +57,6 @@ > > > > > > #endif > > > > > > -struct rte_table_acl { > > > - struct rte_table_stats stats; > > > - > > > - /* Low-level ACL table */ > > > - char name[2][RTE_ACL_NAMESIZE]; > > > - struct rte_acl_param acl_params; /* for creating low level acl table */ > > > - struct rte_acl_config cfg; /* Holds the field definitions (metadata) */ > > > - struct rte_acl_ctx *ctx; > > > - uint32_t name_id; > > > - > > > - /* Input parameters */ > > > - uint32_t n_rules; > > > - uint32_t entry_size; > > > - > > > - /* Internal tables */ > > > - uint8_t *action_table; > > > - struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */ > > > - uint8_t *acl_rule_memory; /* Memory to store the rules */ > > > - > > > - /* Memory to store the action table and stack of free entries */ > > > - uint8_t memory[0] __rte_cache_aligned; > > > -}; > > > - > > > - > > > static void * > > > rte_table_acl_create( > > > void *params, > > > diff --git a/lib/librte_table/rte_table_acl.h > > > b/lib/librte_table/rte_table_acl.h > > > index a9cc032..1370b12 100644 > > > --- a/lib/librte_table/rte_table_acl.h > > > +++ b/lib/librte_table/rte_table_acl.h > > > @@ -55,6 +55,30 @@ > > > > > > #include "rte_table.h" > > > > > > + > > > +struct rte_table_acl { > > > + struct rte_table_stats stats; > > > + > > > + /* Low-level ACL table */ > > > + char name[2][RTE_ACL_NAMESIZE]; > > > + struct rte_acl_param acl_params; /* for creating low level acl table */ > > > + struct 
rte_acl_config cfg; /* Holds the field definitions (metadata) */ > > > + struct rte_acl_ctx *ctx; > > > + uint32_t name_id; > > > + > > > + /* Input parameters */ > > > + uint32_t n_rules; > > > + uint32_t entry_size; > > > + > > > + /* Internal tables */ > > > + uint8_t *action_table; > > > + struct rte_acl_rule **acl_rule_list; /* Array of pointers to rules */ > > > + uint8_t *acl_rule_memory; /* Memory to store the rules */ > > > + > > > + /* Memory to store the action table and stack of free entries */ > > > + uint8_t memory[0] __rte_cache_aligned; }; > > > + > > > /** ACL table parameters */ > > > struct rte_table_acl_params { > > > /** Name */ > > > -- > > > 1.9.1 > > > > > > Hi Bernard, > > > > Strong objection here: > > - This data structure contains the internal data needed to run the ACL > > table. It is implementation dependent, it is not part of the API. > > Therefore, it must not be exposed as part of the API, so it has to > > stay in the .c file as opposed to the .h file. > > - Users should handle the ACL table through the handle returned by the > > create function as opposed to accessing this structure directly. > > > > Regards, > > Cristian > > I will revisit this to see if there is another way. > > Regards, > > Bernard. > This patch has been dropped from the v2 patch set. The functionality needed has been implemented in a different way. Regards, Bernard. ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v1 2/6] librte_table: fix acl entry add and delete functions 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit ` (2 preceding siblings ...) 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to header file Bernard Iremonger @ 2017-08-23 13:51 ` Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow Bernard Iremonger ` (3 subsequent siblings) 7 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-23 13:51 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger, stable The rte_table_acl_entry_add() function was returning data from acl_memory instead of acl_rule_memory. It was also returning data from entry instead of entry_ptr. The rte_table_acl_entry_delete() function was returning data from acl_memory instead of acl_rule_memory. Fixes: 166923eb2f78 ("table: ACL") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_table/rte_table_acl.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/lib/librte_table/rte_table_acl.c b/lib/librte_table/rte_table_acl.c index 900f658..9865b05 100644 --- a/lib/librte_table/rte_table_acl.c +++ b/lib/librte_table/rte_table_acl.c @@ -292,8 +292,7 @@ if (status == 0) { *key_found = 1; *entry_ptr = &acl->memory[i * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); - + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } } @@ -329,8 +328,8 @@ rte_acl_free(acl->ctx); acl->ctx = ctx; *key_found = 0; - *entry_ptr = &acl->memory[free_pos * acl->entry_size]; - memcpy(*entry_ptr, entry, acl->entry_size); + *entry_ptr = &acl->acl_rule_memory[free_pos * acl->entry_size]; + memcpy(entry, *entry_ptr, acl->entry_size); return 0; } @@ -411,7 +410,7 @@ acl->ctx = ctx; *key_found = 1; if (entry != NULL) - memcpy(entry, &acl->memory[pos * acl->entry_size], + 
memcpy(entry, &acl->acl_rule_memory[pos * acl->entry_size], acl->entry_size); return 0; -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v1 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit ` (3 preceding siblings ...) 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 2/6] librte_table: fix acl entry add and delete functions Bernard Iremonger @ 2017-08-23 13:51 ` Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 4/6] librte_flow_classify: add librte_flow_classify library Bernard Iremonger ` (2 subsequent siblings) 7 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-23 13:51 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Initialise the next_proto_id mask in the default mask for rte_flow_item_type_ipv4. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- lib/librte_ether/rte_flow.h | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h index bba6169..59c42fa 100644 --- a/lib/librte_ether/rte_flow.h +++ b/lib/librte_ether/rte_flow.h @@ -489,6 +489,7 @@ struct rte_flow_item_ipv4 { #ifndef __cplusplus static const struct rte_flow_item_ipv4 rte_flow_item_ipv4_mask = { .hdr = { + .next_proto_id = 0xff, .src_addr = RTE_BE32(0xffffffff), .dst_addr = RTE_BE32(0xffffffff), }, -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v1 4/6] librte_flow_classify: add librte_flow_classify library 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit ` (4 preceding siblings ...) 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow Bernard Iremonger @ 2017-08-23 13:51 ` Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 5/6] examples/flow_classify: flow classify sample application Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 6/6] test: flow classify library unit tests Bernard Iremonger 7 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-23 13:51 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger From: Ferruh Yigit <ferruh.yigit@intel.com> The following library APIs are implemented: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query The librte_table ACL API is used for matching packets. The library supports counting of IPv4 five tuple packets only, i.e. IPv4 UDP, TCP and SCTP packets. 
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- config/common_base | 6 + doc/api/doxy-api-index.md | 1 + doc/api/doxy-api.conf | 1 + lib/Makefile | 3 + lib/librte_eal/common/include/rte_log.h | 1 + lib/librte_flow_classify/Makefile | 51 ++ lib/librte_flow_classify/rte_flow_classify.c | 559 +++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify.h | 204 ++++++++ lib/librte_flow_classify/rte_flow_classify_parse.c | 546 ++++++++++++++++++++ lib/librte_flow_classify/rte_flow_classify_parse.h | 74 +++ .../rte_flow_classify_version.map | 10 + mk/rte.app.mk | 2 +- 12 files changed, 1457 insertions(+), 1 deletion(-) create mode 100644 lib/librte_flow_classify/Makefile create mode 100644 lib/librte_flow_classify/rte_flow_classify.c create mode 100644 lib/librte_flow_classify/rte_flow_classify.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.c create mode 100644 lib/librte_flow_classify/rte_flow_classify_parse.h create mode 100644 lib/librte_flow_classify/rte_flow_classify_version.map diff --git a/config/common_base b/config/common_base index 5e97a08..e378e0a 100644 --- a/config/common_base +++ b/config/common_base @@ -657,6 +657,12 @@ CONFIG_RTE_LIBRTE_GRO=y CONFIG_RTE_LIBRTE_METER=y # +# Compile librte_classify +# +CONFIG_RTE_LIBRTE_FLOW_CLASSIFY=y +CONFIG_RTE_LIBRTE_CLASSIFY_DEBUG=n + +# # Compile librte_sched # CONFIG_RTE_LIBRTE_SCHED=y diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 19e0d4f..a2fa281 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -105,6 +105,7 @@ The public API headers are grouped by topics: [LPM IPv4 route] (@ref rte_lpm.h), [LPM IPv6 route] (@ref rte_lpm6.h), [ACL] (@ref rte_acl.h), + [flow_classify] (@ref rte_flow_classify.h), [EFD] (@ref rte_efd.h) - **QoS**: diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf index 823554f..4e43a66 100644 --- a/doc/api/doxy-api.conf +++ 
b/doc/api/doxy-api.conf @@ -46,6 +46,7 @@ INPUT = doc/api/doxy-api-index.md \ lib/librte_efd \ lib/librte_ether \ lib/librte_eventdev \ + lib/librte_flow_classify \ lib/librte_gro \ lib/librte_hash \ lib/librte_ip_frag \ diff --git a/lib/Makefile b/lib/Makefile index 86caba1..21fc3b0 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -82,6 +82,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_POWER) += librte_power DEPDIRS-librte_power := librte_eal DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter DEPDIRS-librte_meter := librte_eal +DIRS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += librte_flow_classify +DEPDIRS-librte_flow_classify := librte_eal librte_ether librte_net +DEPDIRS-librte_flow_classify += librte_table librte_acl librte_port DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched DEPDIRS-librte_sched := librte_eal librte_mempool librte_mbuf librte_net DEPDIRS-librte_sched += librte_timer diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h index ec8dba7..f975bde 100644 --- a/lib/librte_eal/common/include/rte_log.h +++ b/lib/librte_eal/common/include/rte_log.h @@ -87,6 +87,7 @@ struct rte_logs { #define RTE_LOGTYPE_CRYPTODEV 17 /**< Log related to cryptodev. */ #define RTE_LOGTYPE_EFD 18 /**< Log related to EFD. */ #define RTE_LOGTYPE_EVENTDEV 19 /**< Log related to eventdev. */ +#define RTE_LOGTYPE_CLASSIFY 20 /**< Log related to flow classify. */ /* these log types can be used in an application */ #define RTE_LOGTYPE_USER1 24 /**< User-defined log type 1. */ diff --git a/lib/librte_flow_classify/Makefile b/lib/librte_flow_classify/Makefile new file mode 100644 index 0000000..7863a0c --- /dev/null +++ b/lib/librte_flow_classify/Makefile @@ -0,0 +1,51 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_flow_classify.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_flow_classify_version.map + +LIBABIVER := 1 + +# all source are stored in SRCS-y +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify.c +SRCS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += rte_flow_classify_parse.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY)-include := rte_flow_classify.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_flow_classify/rte_flow_classify.c b/lib/librte_flow_classify/rte_flow_classify.c new file mode 100644 index 0000000..23f0a94 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.c @@ -0,0 +1,559 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> +#include <rte_table_acl.h> +#include <stdbool.h> + +static struct rte_eth_ntuple_filter ntuple_filter; +static uint32_t unique_id = 1; + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +struct ipv4_5tuple_data { + uint16_t priority; /**< flow API uses priority 0 to 8, 0 is highest */ + uint32_t userdata; /**< value returned for match */ + uint8_t tcp_flags; /**< tcp_flags only meaningful TCP protocol */ +}; + +struct rte_flow_classify { + uint32_t id; /**< unique ID of classify object */ + enum rte_flow_classify_type type; /**< classify type */ + struct rte_flow_action action; /**< action when match found */ + struct ipv4_5tuple_data flow_extra_data; /** extra rule data */ + struct rte_table_acl_rule_add_params key_add; /**< add ACL rule key */ + struct rte_table_acl_rule_delete_params + key_del; /**< delete ACL rule key */ + int key_found; /**< ACL rule key found in table */ + void *entry; /**< pointer to buffer to hold ACL rule key */ + void *entry_ptr; /**< handle to the table entry for the ACL rule key */ +}; + +/* number of categories in an ACL context */ +#define FLOW_CLASSIFY_NUM_CATEGORY 1 + +/* macros for mbuf processing */ +#define MAX_PKT_BURST 32 +#define OFF_ETHHEAD (sizeof(struct ether_hdr)) +#define OFF_IPV42PROTO (offsetof(struct 
ipv4_hdr, next_proto_id)) +#define MBUF_IPV4_2PROTO(m) \ + rte_pktmbuf_mtod_offset((m), uint8_t *, OFF_ETHHEAD + OFF_IPV42PROTO) + +struct mbuf_search { + const uint8_t *data_ipv4[MAX_PKT_BURST]; + uint32_t res_ipv4[MAX_PKT_BURST]; + int num_ipv4; +}; + +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_item *items; + parse_filter_t parse_filter; + uint32_t item_num = 0; + uint32_t i = 0; + int ret; + + (void) table_handle; + + if (!error) + return -EINVAL; + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -EINVAL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -EINVAL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -EINVAL; + } + + memset(&ntuple_filter, 0, sizeof(ntuple_filter)); + + /* Get the non-void item number of pattern */ + while ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_END) { + if ((pattern + i)->type != RTE_FLOW_ITEM_TYPE_VOID) + item_num++; + i++; + } + item_num++; + + items = malloc(item_num * sizeof(struct rte_flow_item)); + if (!items) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "No memory for pattern items."); + return -ENOMEM; + } + + memset(items, 0, item_num * sizeof(struct rte_flow_item)); + classify_pattern_skip_void_item(items, pattern); + + parse_filter = classify_find_parse_filter_func(items); + if (!parse_filter) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + pattern, "Unsupported pattern"); + free(items); + return -EINVAL; + } + + ret = parse_filter(attr, items, actions, &ntuple_filter, error); + free(items); + return ret; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +#define uint32_t_to_char(ip, a, b, c, d) do {\ 
+ *a = (unsigned char)(ip >> 24 & 0xff);\ + *b = (unsigned char)(ip >> 16 & 0xff);\ + *c = (unsigned char)(ip >> 8 & 0xff);\ + *d = (unsigned char)(ip & 0xff);\ + } while (0) + +static inline void +print_ipv4_key_add(struct rte_table_acl_rule_add_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_add: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); + + printf(" priority: 0x%x\n", key->priority); +} + +static inline void +print_ipv4_key_delete(struct rte_table_acl_rule_delete_params *key) +{ + unsigned char a, b, c, d; + + printf("ipv4_key_del: 0x%02hhx/0x%hhx ", + key->field_value[PROTO_FIELD_IPV4].value.u8, + key->field_value[PROTO_FIELD_IPV4].mask_range.u8); + + uint32_t_to_char(key->field_value[SRC_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf(" %hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[SRC_FIELD_IPV4].mask_range.u32); + + uint32_t_to_char(key->field_value[DST_FIELD_IPV4].value.u32, + &a, &b, &c, &d); + printf("%hhu.%hhu.%hhu.%hhu/0x%x ", a, b, c, d, + key->field_value[DST_FIELD_IPV4].mask_range.u32); + + printf("%hu : 0x%x %hu : 0x%x\n", + key->field_value[SRCP_FIELD_IPV4].value.u16, + key->field_value[SRCP_FIELD_IPV4].mask_range.u16, + key->field_value[DSTP_FIELD_IPV4].value.u16, + key->field_value[DSTP_FIELD_IPV4].mask_range.u16); +} +#endif + +static struct 
rte_flow_classify * +allocate_5tuple(void) +{ + struct rte_flow_classify *flow_classify; + + flow_classify = malloc(sizeof(struct rte_flow_classify)); + if (!flow_classify) + return flow_classify; + + memset(flow_classify, 0, sizeof(struct rte_flow_classify)); + flow_classify->id = unique_id++; + flow_classify->type = RTE_FLOW_CLASSIFY_TYPE_5TUPLE; + memcpy(&flow_classify->action, classify_get_flow_action(), + sizeof(struct rte_flow_action)); + + flow_classify->flow_extra_data.priority = ntuple_filter.priority; + flow_classify->flow_extra_data.tcp_flags = ntuple_filter.tcp_flags; + + /* key add values */ + flow_classify->key_add.priority = ntuple_filter.priority; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].mask_range.u8 = + ntuple_filter.proto_mask; + flow_classify->key_add.field_value[PROTO_FIELD_IPV4].value.u8 = + ntuple_filter.proto; + + flow_classify->key_add.field_value[SRC_FIELD_IPV4].mask_range.u32 = + ntuple_filter.src_ip_mask; + flow_classify->key_add.field_value[SRC_FIELD_IPV4].value.u32 = + ntuple_filter.src_ip; + + flow_classify->key_add.field_value[DST_FIELD_IPV4].mask_range.u32 = + ntuple_filter.dst_ip_mask; + flow_classify->key_add.field_value[DST_FIELD_IPV4].value.u32 = + ntuple_filter.dst_ip; + + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.src_port_mask; + flow_classify->key_add.field_value[SRCP_FIELD_IPV4].value.u16 = + ntuple_filter.src_port; + + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].mask_range.u16 = + ntuple_filter.dst_port_mask; + flow_classify->key_add.field_value[DSTP_FIELD_IPV4].value.u16 = + ntuple_filter.dst_port; + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_key_add(&flow_classify->key_add); +#endif + + /* key delete values */ + memcpy(&flow_classify->key_del.field_value[PROTO_FIELD_IPV4], + &flow_classify->key_add.field_value[PROTO_FIELD_IPV4], + NUM_FIELDS_IPV4 * sizeof(struct rte_acl_field)); + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + 
print_ipv4_key_delete(&flow_classify->key_del); +#endif + return flow_classify; +} + +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct rte_flow_classify *flow_classify; + struct rte_table_acl *table_acl = table_handle; + struct rte_acl_rule *acl_rule; + int ret; + + if (!error) + return NULL; + + if (!table_handle) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "NULL table_handle."); + return NULL; + } + + if (!pattern) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return NULL; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return NULL; + } + + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return NULL; + } + + /* parse attr, pattern and actions */ + ret = rte_flow_classify_validate(table_handle, attr, pattern, + actions, error); + if (ret < 0) + return NULL; + + flow_classify = allocate_5tuple(); + if (!flow_classify) + return NULL; + + flow_classify->entry = malloc(table_acl->entry_size); + if (!flow_classify->entry) { + free(flow_classify); + flow_classify = NULL; + return NULL; + } + + ret = rte_table_acl_ops.f_add(table_handle, &flow_classify->key_add, + flow_classify->entry, &flow_classify->key_found, + &flow_classify->entry_ptr); + if (ret) { + free(flow_classify->entry); + free(flow_classify); + return NULL; + } + acl_rule = flow_classify->entry; + flow_classify->flow_extra_data.userdata = acl_rule->data.userdata; + + return flow_classify; +} + +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error) +{ + int ret; + int key_found; + + if (!error) + return -EINVAL; + + if (!flow_classify || 
!table_handle) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + ret = rte_table_acl_ops.f_delete(table_handle, + &flow_classify->key_del, &key_found, + flow_classify->entry); + if ((ret == 0) && key_found) { + free(flow_classify->entry); + free(flow_classify); + } else + ret = -1; + return ret; +} + +static int +flow_match(struct rte_acl_ctx *acl_ctx, struct mbuf_search *search, + uint64_t *count, uint32_t userdata) +{ + int ret = -1; + int i; + int num_ipv4 = search->num_ipv4; + + if (num_ipv4) { + ret = rte_acl_classify(acl_ctx, + search->data_ipv4, + search->res_ipv4, + num_ipv4, + FLOW_CLASSIFY_NUM_CATEGORY); + if (!ret) { + for (i = 0; i < num_ipv4; i++) { + if (search->res_ipv4[i] == userdata) + (*count)++; /* match found */ + } + if (*count == 0) + ret = -1; + } else + ret = -1; + } + return ret; +} + +static int +action_apply(const struct rte_flow_classify *flow_classify, + struct rte_flow_classify_stats *stats, uint64_t count) +{ + struct rte_flow_classify_5tuple_stats *ntuple_stats; + + switch (flow_classify->action.type) { + case RTE_FLOW_ACTION_TYPE_COUNT: + ntuple_stats = + (struct rte_flow_classify_5tuple_stats *)stats->stats; + ntuple_stats->counter1 = count; + stats->used_space = 1; + break; + default: + return -ENOTSUP; + } + + return 0; +} + +static inline int +is_valid_ipv4_pkt(struct ipv4_hdr *pkt, uint32_t link_len) +{ + /* From http://www.rfc-editor.org/rfc/rfc1812.txt section 5.2.2 */ + /* + * 1. The packet length reported by the Link Layer must be large + * enough to hold the minimum length legal IP datagram (20 bytes). + */ + if (link_len < sizeof(struct ipv4_hdr)) + return -1; + + /* 2. The IP checksum must be correct. */ + /* this is checked in H/W */ + + /* + * 3. The IP version number must be 4. If the version number is not 4 + * then the packet may be another version of IP, such as IPng or + * ST-II. 
+ */ + if (((pkt->version_ihl) >> 4) != 4) + return -3; + /* + * 4. The IP header length field must be large enough to hold the + * minimum length legal IP datagram (20 bytes = 5 words). + */ + if ((pkt->version_ihl & 0xf) < 5) + return -4; + + /* + * 5. The IP total length field must be large enough to hold the IP + * datagram header, whose length is specified in the IP header length + * field. + */ + if (rte_cpu_to_be_16(pkt->total_length) < sizeof(struct ipv4_hdr)) + return -5; + + return 0; +} + +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG +static inline void +print_ipv4_input(struct mbuf_search *search) { + const uint8_t *ipv4_info; + unsigned int i; + + ipv4_info = search->data_ipv4[search->num_ipv4]; + printf("ipv4_data:"); + for (i = 0; i < sizeof(struct ipv4_hdr); i++) + printf(" 0x%02x", *ipv4_info++); + printf("\n"); +} +#endif + +static inline void +prepare_one_packet(struct rte_mbuf **pkts_in, struct mbuf_search *search, + int index) +{ + struct ipv4_hdr *ipv4_hdr; + struct rte_mbuf *pkt = pkts_in[index]; + + if (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) { + ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct ipv4_hdr *, + sizeof(struct ether_hdr)); + + /* Check to make sure the packet is valid (RFC 1812) */ + if (is_valid_ipv4_pkt(ipv4_hdr, pkt->pkt_len) >= 0) { + /* Fill search structure */ + search->data_ipv4[search->num_ipv4] = + MBUF_IPV4_2PROTO(pkt); +#ifdef RTE_LIBRTE_CLASSIFY_DEBUG + print_ipv4_input(search); +#endif + search->num_ipv4++; + } + } +} + +static inline void +prepare_parameter(struct rte_mbuf **pkts_in, struct mbuf_search *search, + int nb_rx) +{ + int i; + + search->num_ipv4 = 0; + for (i = 0; i < nb_rx; i++) { + rte_prefetch0(rte_pktmbuf_mtod(pkts_in[i], void *)); + prepare_one_packet(pkts_in, search, i); + } +} + +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error) +{ + struct 
mbuf_search search; + struct rte_table_acl *table_acl = table_handle; + uint64_t count = 0; + int ret = -1; + + if (!error) + return -EINVAL; + + if (!table_handle || !flow_classify || !pkts || !stats || + !table_acl->ctx) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + if (stats->available_space == 0) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "invalid input"); + return -EINVAL; + } + + if (nb_pkts > 0) { + prepare_parameter(pkts, &search, nb_pkts); + + ret = flow_match(table_acl->ctx, &search, &count, + flow_classify->flow_extra_data.userdata); + if (ret == 0) + ret = action_apply(flow_classify, stats, count); + } + return ret; +} diff --git a/lib/librte_flow_classify/rte_flow_classify.h b/lib/librte_flow_classify/rte_flow_classify.h new file mode 100644 index 0000000..a7dbd97 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify.h @@ -0,0 +1,204 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_H_ +#define _RTE_FLOW_CLASSIFY_H_ + +/** + * @file + * + * RTE Flow Classify Library + * + * This library provides flow record information with some measured properties. + * + * The application should define the flow and the measurement criteria + * (action) for it. + * + * The library doesn't maintain any flow records itself; instead, flow + * information is returned to the upper layer only for the given packets. + * + * It is the application's responsibility to call rte_flow_classify_query() + * for a group of packets, just after receiving them or before transmitting + * them. The application should provide the flow type it is interested in and + * the measurement to apply to that flow in the rte_flow_classify_create() + * API, and should provide the rte_flow_classify object and storage for the + * results in the rte_flow_classify_query() API. + * + * Usage: + * - the application calls rte_flow_classify_create() to create an + * rte_flow_classify object. + * - the application calls rte_flow_classify_query() in a polling manner, + * preferably after rte_eth_rx_burst(). This will cause the library to + * convert packet information to flow information with some measurements. 
+ * - rte_flow_classify objects can be destroyed when they are no longer + * needed via rte_flow_classify_destroy() + */ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <rte_acl.h> + +#ifdef __cplusplus +extern "C" { +#endif + +enum rte_flow_classify_type { + RTE_FLOW_CLASSIFY_TYPE_NONE, /**< no type */ + RTE_FLOW_CLASSIFY_TYPE_5TUPLE, /**< IPv4 5tuple type */ +}; + +struct rte_flow_classify; + +/** + * Flow stats + * + * For a single action, an array of stats can be returned by the API; at most + * one stat can be returned per packet. + * + * Storage for the stats is provided by the application; the library needs to + * know the available space and returns the amount of space used. + * + * The stats type depends on the measurement (action) requested by the + * application. + * + */ +struct rte_flow_classify_stats { + const unsigned int available_space; + unsigned int used_space; + void **stats; +}; + +struct rte_flow_classify_5tuple_stats { + uint64_t counter1; /**< count of packets that match 5tuple pattern */ +}; + +/** + * Create a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * @return + * A valid handle in case of success, NULL otherwise. + */ +struct rte_flow_classify * +rte_flow_classify_create(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Validate a flow classify rule. 
+ * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] attr + * Flow rule attributes + * @param[in] pattern + * Pattern specification (list terminated by the END pattern item). + * @param[in] actions + * Associated actions (list terminated by the END action). + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_validate(void *table_handle, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error); + +/** + * Destroy a flow classify rule. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Flow rule handle to destroy + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +int +rte_flow_classify_destroy(void *table_handle, + struct rte_flow_classify *flow_classify, + struct rte_flow_error *error); + +/** + * Get flow classification stats for given packets. + * + * @param[in] table_handle + * Pointer to table ACL + * @param[in] flow_classify + * Pointer to Flow rule object + * @param[in] pkts + * Pointer to packets to process + * @param[in] nb_pkts + * Number of packets to process + * @param[in] stats + * To store stats defined by action + * @param[out] error + * Perform verbose error reporting if not NULL. Structure + * initialised in case of error only. + * + * @return + * 0 on success, a negative errno value otherwise. 
+ */ +int +rte_flow_classify_query(void *table_handle, + const struct rte_flow_classify *flow_classify, + struct rte_mbuf **pkts, + const uint16_t nb_pkts, + struct rte_flow_classify_stats *stats, + struct rte_flow_error *error); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.c b/lib/librte_flow_classify/rte_flow_classify_parse.c new file mode 100644 index 0000000..e5a3885 --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.c @@ -0,0 +1,546 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <rte_flow_classify.h> +#include "rte_flow_classify_parse.h" +#include <rte_flow_driver.h> + +struct classify_valid_pattern { + enum rte_flow_item_type *items; + parse_filter_t parse_filter; +}; + +static struct rte_flow_action action; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_1[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_2[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +/* Pattern matched ntuple filter */ +static enum rte_flow_item_type pattern_ntuple_3[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_SCTP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +static struct classify_valid_pattern classify_supported_patterns[] = { + /* ntuple */ + { pattern_ntuple_1, classify_parse_ntuple_filter }, + { pattern_ntuple_2, classify_parse_ntuple_filter }, + { pattern_ntuple_3, classify_parse_ntuple_filter }, +}; + +struct rte_flow_action * +classify_get_flow_action(void) +{ + return &action; +} + +/* Find the first 
VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void) +{ + bool is_find; + + while (item->type != RTE_FLOW_ITEM_TYPE_END) { + if (is_void) + is_find = item->type == RTE_FLOW_ITEM_TYPE_VOID; + else + is_find = item->type != RTE_FLOW_ITEM_TYPE_VOID; + if (is_find) + break; + item++; + } + return item; +} + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern) +{ + uint32_t cpy_count = 0; + const struct rte_flow_item *pb = pattern, *pe = pattern; + + for (;;) { + /* Find a non-void item first */ + pb = classify_find_first_item(pb, false); + if (pb->type == RTE_FLOW_ITEM_TYPE_END) { + pe = pb; + break; + } + + /* Find a void item */ + pe = classify_find_first_item(pb + 1, true); + + cpy_count = pe - pb; + rte_memcpy(items, pb, sizeof(struct rte_flow_item) * cpy_count); + + items += cpy_count; + + if (pe->type == RTE_FLOW_ITEM_TYPE_END) { + pb = pe; + break; + } + + pb = pe + 1; + } + /* Copy the END item. 
*/ + rte_memcpy(items, pe, sizeof(struct rte_flow_item)); +} + +/* Check if the pattern matches a supported item type array */ +static bool +classify_match_pattern(enum rte_flow_item_type *item_array, + struct rte_flow_item *pattern) +{ + struct rte_flow_item *item = pattern; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +/* Find the parse filter function matching the pattern, if any */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern) +{ + parse_filter_t parse_filter = NULL; + uint8_t i = 0; + + for (; i < RTE_DIM(classify_supported_patterns); i++) { + if (classify_match_pattern(classify_supported_patterns[i].items, + pattern)) { + parse_filter = + classify_supported_patterns[i].parse_filter; + break; + } + } + + return parse_filter; +} + +#define FLOW_RULE_MIN_PRIORITY 8 +#define FLOW_RULE_MAX_PRIORITY 0 + +#define NEXT_ITEM_OF_PATTERN(item, pattern, index)\ + do { \ + item = pattern + index;\ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) {\ + index++; \ + item = pattern + index; \ + } \ + } while (0) + +#define NEXT_ITEM_OF_ACTION(act, actions, index)\ + do { \ + act = actions + index; \ + while (act->type == RTE_FLOW_ACTION_TYPE_VOID) {\ + index++; \ + act = actions + index; \ + } \ + } while (0) + +/** + * Please be aware there is an assumption for all the parsers: + * rte_flow_item uses big endian, rte_flow_attr and + * rte_flow_action use CPU order. + * Because the pattern is used to describe the packets, + * normally the packets should use network order. + */ + +/** + * Parse the rule to see if it is an n-tuple rule, + * and extract the n-tuple filter info at the same time. + * pattern: + * The first not void item can be ETH or IPV4. + * The second not void item must be IPV4 if the first one is ETH. + * The third not void item must be UDP, TCP or SCTP. + * The next not void item must be END. 
+ * action: + * The first not void action should be COUNT. + * The next not void action should be END. + * pattern example: + * ITEM Spec Mask + * ETH NULL NULL + * IPV4 src_addr 192.168.1.20 0xFFFFFFFF + * dst_addr 192.167.3.50 0xFFFFFFFF + * next_proto_id 17 0xFF + * UDP/TCP/ src_port 80 0xFFFF + * SCTP dst_port 80 0xFFFF + * END + * other members in mask and spec should be set to 0x00. + * item->last should be NULL. + */ +static int +classify_parse_ntuple_filter(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item; + const struct rte_flow_action *act; + const struct rte_flow_item_ipv4 *ipv4_spec; + const struct rte_flow_item_ipv4 *ipv4_mask; + const struct rte_flow_item_tcp *tcp_spec; + const struct rte_flow_item_tcp *tcp_mask; + const struct rte_flow_item_udp *udp_spec; + const struct rte_flow_item_udp *udp_mask; + const struct rte_flow_item_sctp *sctp_spec; + const struct rte_flow_item_sctp *sctp_mask; + uint32_t index; + + if (!pattern) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_NUM, + NULL, "NULL pattern."); + return -rte_errno; + } + + if (!actions) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, + NULL, "NULL action."); + return -rte_errno; + } + if (!attr) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR, + NULL, "NULL attribute."); + return -rte_errno; + } + + /* parse pattern */ + index = 0; + + /* the first not void item can be MAC or IPv4 */ + NEXT_ITEM_OF_PATTERN(item, pattern, index); + + if (item->type != RTE_FLOW_ITEM_TYPE_ETH && + item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + /* Skip Ethernet */ + if (item->type == RTE_FLOW_ITEM_TYPE_ETH) { + /* Not supported last point for range */ + if 
(item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, + "Not supported last point for range"); + return -rte_errno; + + } + /* if the first item is MAC, the content should be NULL */ + if (item->spec || item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + /* check if the next not void item is IPv4 */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_IPV4) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Not supported by ntuple filter"); + return -rte_errno; + } + } + + /* get the IPv4 info */ + if (!item->spec || !item->mask) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + /*Not supported last point for range*/ + if (item->last) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + ipv4_mask = (const struct rte_flow_item_ipv4 *)item->mask; + /** + * Only support src & dst addresses, protocol, + * others should be masked. 
+ */ + if (ipv4_mask->hdr.version_ihl || + ipv4_mask->hdr.type_of_service || + ipv4_mask->hdr.total_length || + ipv4_mask->hdr.packet_id || + ipv4_mask->hdr.fragment_offset || + ipv4_mask->hdr.time_to_live || + ipv4_mask->hdr.hdr_checksum) { + rte_flow_error_set(error, + EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_ip_mask = ipv4_mask->hdr.dst_addr; + filter->src_ip_mask = ipv4_mask->hdr.src_addr; + filter->proto_mask = ipv4_mask->hdr.next_proto_id; + + ipv4_spec = (const struct rte_flow_item_ipv4 *)item->spec; + filter->dst_ip = ipv4_spec->hdr.dst_addr; + filter->src_ip = ipv4_spec->hdr.src_addr; + filter->proto = ipv4_spec->hdr.next_proto_id; + + /* check if the next not void item is TCP or UDP or SCTP */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_TCP && + item->type != RTE_FLOW_ITEM_TYPE_UDP && + item->type != RTE_FLOW_ITEM_TYPE_SCTP) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* get the TCP/UDP info */ + if (!item->spec || !item->mask) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ntuple mask"); + return -rte_errno; + } + + /*Not supported last point for range*/ + if (item->last) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + item, "Not supported last point for range"); + return -rte_errno; + + } + + if (item->type == RTE_FLOW_ITEM_TYPE_TCP) { + tcp_mask = (const struct rte_flow_item_tcp *)item->mask; + + /** + * Only support src & dst ports, tcp flags, + * others should be masked. 
+ */ + if (tcp_mask->hdr.sent_seq || + tcp_mask->hdr.recv_ack || + tcp_mask->hdr.data_off || + tcp_mask->hdr.rx_win || + tcp_mask->hdr.cksum || + tcp_mask->hdr.tcp_urp) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = tcp_mask->hdr.dst_port; + filter->src_port_mask = tcp_mask->hdr.src_port; + if (tcp_mask->hdr.tcp_flags == 0xFF) { + filter->flags |= RTE_NTUPLE_FLAGS_TCP_FLAG; + } else if (!tcp_mask->hdr.tcp_flags) { + filter->flags &= ~RTE_NTUPLE_FLAGS_TCP_FLAG; + } else { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + tcp_spec = (const struct rte_flow_item_tcp *)item->spec; + filter->dst_port = tcp_spec->hdr.dst_port; + filter->src_port = tcp_spec->hdr.src_port; + filter->tcp_flags = tcp_spec->hdr.tcp_flags; + } else if (item->type == RTE_FLOW_ITEM_TYPE_UDP) { + udp_mask = (const struct rte_flow_item_udp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. + */ + if (udp_mask->hdr.dgram_len || + udp_mask->hdr.dgram_cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = udp_mask->hdr.dst_port; + filter->src_port_mask = udp_mask->hdr.src_port; + + udp_spec = (const struct rte_flow_item_udp *)item->spec; + filter->dst_port = udp_spec->hdr.dst_port; + filter->src_port = udp_spec->hdr.src_port; + } else { + sctp_mask = (const struct rte_flow_item_sctp *)item->mask; + + /** + * Only support src & dst ports, + * others should be masked. 
+ */ + if (sctp_mask->hdr.tag || + sctp_mask->hdr.cksum) { + memset(filter, 0, + sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + filter->dst_port_mask = sctp_mask->hdr.dst_port; + filter->src_port_mask = sctp_mask->hdr.src_port; + + sctp_spec = (const struct rte_flow_item_sctp *)item->spec; + filter->dst_port = sctp_spec->hdr.dst_port; + filter->src_port = sctp_spec->hdr.src_port; + } + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_PATTERN(item, pattern, index); + if (item->type != RTE_FLOW_ITEM_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, + item, "Not supported by ntuple filter"); + return -rte_errno; + } + + /* parse action */ + index = 0; + + /** + * n-tuple only supports count, + * check if the first not void action is COUNT. + */ + memset(&action, 0, sizeof(action)); + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_COUNT) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + action.type = RTE_FLOW_ACTION_TYPE_COUNT; + + /* check if the next not void item is END */ + index++; + NEXT_ITEM_OF_ACTION(act, actions, index); + if (act->type != RTE_FLOW_ACTION_TYPE_END) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "Not supported action."); + return -rte_errno; + } + + /* parse attr */ + /* must be input direction */ + if (!attr->ingress) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, + attr, "Only support ingress."); + return -rte_errno; + } + + /* not supported */ + if (attr->egress) { + 
memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "Egress is not supported."); + return -rte_errno; + } + + if (attr->priority > 0xFFFF) { + memset(filter, 0, sizeof(struct rte_eth_ntuple_filter)); + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, + attr, "Invalid priority."); + return -rte_errno; + } + filter->priority = (uint16_t)attr->priority; + if (attr->priority > FLOW_RULE_MIN_PRIORITY) + filter->priority = FLOW_RULE_MAX_PRIORITY; + + return 0; +} diff --git a/lib/librte_flow_classify/rte_flow_classify_parse.h b/lib/librte_flow_classify/rte_flow_classify_parse.h new file mode 100644 index 0000000..1d4708a --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_parse.h @@ -0,0 +1,74 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _RTE_FLOW_CLASSIFY_PARSE_H_ +#define _RTE_FLOW_CLASSIFY_PARSE_H_ + +#include <rte_ethdev.h> +#include <rte_ether.h> +#include <rte_flow.h> +#include <stdbool.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef int (*parse_filter_t)(const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_eth_ntuple_filter *filter, + struct rte_flow_error *error); + +/* Skip all VOID items of the pattern */ +void +classify_pattern_skip_void_item(struct rte_flow_item *items, + const struct rte_flow_item *pattern); + +/* Find the first VOID or non-VOID item pointer */ +const struct rte_flow_item * +classify_find_first_item(const struct rte_flow_item *item, bool is_void); + + +/* Find if there's parse filter function matched */ +parse_filter_t +classify_find_parse_filter_func(struct rte_flow_item *pattern); + +/* get action data */ +struct rte_flow_action * +classify_get_flow_action(void); + +#ifdef __cplusplus +} +#endif + +#endif /* _RTE_FLOW_CLASSIFY_PARSE_H_ */ diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map new file mode 100644 index 0000000..e2c9ecf --- /dev/null +++ b/lib/librte_flow_classify/rte_flow_classify_version.map @@ -0,0 +1,10 @@ +DPDK_17.08 { + global: + + rte_flow_classify_create; + rte_flow_classify_destroy; + rte_flow_classify_query; + rte_flow_classify_validate; + + 
local: *; +}; diff --git a/mk/rte.app.mk b/mk/rte.app.mk index c25fdd9..909ab95 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -58,6 +58,7 @@ _LDLIBS-y += -L$(RTE_SDK_BIN)/lib # # Order is important: from higher level to lower level # +_LDLIBS-$(CONFIG_RTE_LIBRTE_FLOW_CLASSIFY) += -lrte_flow_classify _LDLIBS-$(CONFIG_RTE_LIBRTE_PIPELINE) += -lrte_pipeline _LDLIBS-$(CONFIG_RTE_LIBRTE_TABLE) += -lrte_table _LDLIBS-$(CONFIG_RTE_LIBRTE_PORT) += -lrte_port @@ -84,7 +85,6 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_EFD) += -lrte_efd _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE) += -lrte_cfgfile _LDLIBS-y += --whole-archive - _LDLIBS-$(CONFIG_RTE_LIBRTE_HASH) += -lrte_hash _LDLIBS-$(CONFIG_RTE_LIBRTE_VHOST) += -lrte_vhost _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS) += -lrte_kvargs -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v1 5/6] examples/flow_classify: flow classify sample application 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit ` (5 preceding siblings ...) 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 4/6] librte_flow_classify: add librte_flow_classify library Bernard Iremonger @ 2017-08-23 13:51 ` Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 6/6] test: flow classify library unit tests Bernard Iremonger 7 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-23 13:51 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger The flow_classify sample application exercises the following librte_flow_classify API's: rte_flow_classify_create rte_flow_classify_validate rte_flow_classify_destroy rte_flow_classify_query It sets up the IPv4 ACL field definitions. It creates table_acl using the librte_table API. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- examples/flow_classify/Makefile | 57 +++ examples/flow_classify/flow_classify.c | 625 +++++++++++++++++++++++++++++++++ 2 files changed, 682 insertions(+) create mode 100644 examples/flow_classify/Makefile create mode 100644 examples/flow_classify/flow_classify.c diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile new file mode 100644 index 0000000..eecdde1 --- /dev/null +++ b/examples/flow_classify/Makefile @@ -0,0 +1,57 @@ +# BSD LICENSE +# +# Copyright(c) 2017 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +ifeq ($(RTE_SDK),) +$(error "Please define RTE_SDK environment variable") +endif + +# Default target, can be overridden by command line or environment +RTE_TARGET ?= x86_64-native-linuxapp-gcc + +include $(RTE_SDK)/mk/rte.vars.mk + +# binary name +APP = flow_classify + + +# all sources are stored in SRCS-y +SRCS-y := flow_classify.c + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) + +# workaround for a gcc bug with noreturn attribute +# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603 +ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y) +CFLAGS_flow_classify.o += -Wno-return-type +endif + +include $(RTE_SDK)/mk/rte.extapp.mk diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c new file mode 100644 index 0000000..61b0241 --- /dev/null +++ b/examples/flow_classify/flow_classify.c @@ -0,0 +1,625 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdint.h> +#include <inttypes.h> +#include <rte_eal.h> +#include <rte_ethdev.h> +#include <rte_cycles.h> +#include <rte_lcore.h> +#include <rte_mbuf.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> +#include <rte_table_acl.h> + +#define RX_RING_SIZE 128 +#define TX_RING_SIZE 512 + +#define NUM_MBUFS 8191 +#define MBUF_CACHE_SIZE 250 +#define BURST_SIZE 32 +#define MAX_NUM_CLASSIFY 5 +#define FLOW_CLASSIFY_MAX_RULE_NUM 10 + +static const struct rte_eth_conf port_conf_default = { + .rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN } +}; + +static void *table_acl; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = 0, + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. 
*/ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = offsetof(struct ipv4_hdr, src_addr) - + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = offsetof(struct ipv4_hdr, dst_addr) - + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. + */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ipv4_hdr) - + offsetof(struct ipv4_hdr, next_proto_id), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ipv4_hdr) - + offsetof(struct ipv4_hdr, next_proto_id) + + sizeof(uint16_t), + }, +}; + +/* flow classify data */ +static struct rte_flow_classify *udp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *tcp_flow_classify[MAX_NUM_CLASSIFY]; +static struct rte_flow_classify *sctp_flow_classify[MAX_NUM_CLASSIFY]; + +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats; +static struct rte_flow_classify_stats tcp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + 
.stats = (void **)&tcp_ntuple_stats +}; + +static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats; +static struct rte_flow_classify_stats sctp_classify_stats = { + .available_space = BURST_SIZE, + .used_space = 0, + .stats = (void **)&sctp_ntuple_stats +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* first sample UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 17, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item pattern_udp_1[4]; + +/* second sample UDP pattern: + * "eth / ipv4 src is 9.9.9.3 dst is 9.9.9.7 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_2 = { + { 0, 0, 0, 0, 0, 0, 17, 0, IPv4(9, 9, 9, 3), IPv4(9, 9, 9, 7)} +}; +static struct rte_flow_item_udp udp_spec_2 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item ipv4_udp_item_2 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_2, 0, &rte_flow_item_ipv4_mask}; +static struct rte_flow_item udp_item_2 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_2, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item pattern_udp_2[4]; + +/* first sample TCP pattern: + * "eth / ipv4 src spec 9.9.9.3 src mask 255.255.255.0 dst spec 9.9.9.7 
dst + * mask 255.255.255.0/ tcp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 6, 0, IPv4(9, 9, 9, 3), IPv4(9, 9, 9, 7)} +}; +static struct rte_flow_item_tcp tcp_spec_1 = { + { 32, 33, 0, 0, 0, 0, 0, 0, 0 } +}; + +static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_1, 0, &rte_flow_item_tcp_mask}; + +static struct rte_flow_item pattern_tcp_1[4]; + +/* second sample TCP pattern: + * "eth / ipv4 src is 9.9.8.3 dst is 9.9.8.7 / tcp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_tcp_spec_2 = { + { 0, 0, 0, 0, 0, 0, 6, 0, IPv4(9, 9, 8, 3), IPv4(9, 9, 8, 7)} +}; +static struct rte_flow_item_tcp tcp_spec_2 = { + { 32, 33, 0, 0, 0, 0, 0, 0, 0 } +}; + +static struct rte_flow_item ipv4_tcp_item_2 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_tcp_spec_2, 0, &rte_flow_item_ipv4_mask}; +static struct rte_flow_item tcp_item_2 = { RTE_FLOW_ITEM_TYPE_TCP, + &tcp_spec_2, 0, &rte_flow_item_tcp_mask}; + +static struct rte_flow_item pattern_tcp_2[4]; + +/* first sample SCTP pattern: + * "eth / ipv4 src is 6.7.8.9 dst is 2.3.4.5 / sctp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 132, 0, IPv4(6, 7, 8, 9), IPv4(2, 3, 4, 5)} +}; +static struct rte_flow_item_sctp sctp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_sctp_spec_1, 0, &rte_flow_item_ipv4_mask}; +static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP, + &sctp_spec_1, 0, &rte_flow_item_sctp_mask}; + +static struct rte_flow_item pattern_sctp_1[4]; + + +/* sample actions: + * "actions count / end" + */ +static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct 
rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* flow_classify.c: + * Based on DPDK skeleton forwarding example. + */ + +/* + * Initializes a given port using global settings and with the RX buffers + * coming from the mbuf_pool passed as a parameter. + */ +static inline int +port_init(uint8_t port, struct rte_mempool *mbuf_pool) +{ + struct rte_eth_conf port_conf = port_conf_default; + struct ether_addr addr; + const uint16_t rx_rings = 1, tx_rings = 1; + int retval; + uint16_t q; + + if (port >= rte_eth_dev_count()) + return -1; + + /* Configure the Ethernet device. */ + retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf); + if (retval != 0) + return retval; + + /* Allocate and set up 1 RX queue per Ethernet port. */ + for (q = 0; q < rx_rings; q++) { + retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL, mbuf_pool); + if (retval < 0) + return retval; + } + + /* Allocate and set up 1 TX queue per Ethernet port. */ + for (q = 0; q < tx_rings; q++) { + retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE, + rte_eth_dev_socket_id(port), NULL); + if (retval < 0) + return retval; + } + + /* Start the Ethernet port. */ + retval = rte_eth_dev_start(port); + if (retval < 0) + return retval; + + /* Display the port MAC address. */ + rte_eth_macaddr_get(port, &addr); + printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8 + " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n", + port, + addr.addr_bytes[0], addr.addr_bytes[1], + addr.addr_bytes[2], addr.addr_bytes[3], + addr.addr_bytes[4], addr.addr_bytes[5]); + + /* Enable RX in promiscuous mode for the Ethernet device. */ + rte_eth_promiscuous_enable(port); + + return 0; +} + +/* + * The lcore main. This is the main thread that does the work, reading from + * an input port classifying the packets and writing to an output port. 
+ */ +static __attribute__((noreturn)) void +lcore_main(void) +{ + struct rte_flow_error error; + const uint8_t nb_ports = rte_eth_dev_count(); + uint8_t port; + int ret; + int i; + + /* + * Check that the port is on the same NUMA node as the polling thread + * for best performance. + */ + for (port = 0; port < nb_ports; port++) + if (rte_eth_dev_socket_id(port) > 0 && + rte_eth_dev_socket_id(port) != + (int)rte_socket_id()) { + printf("\n\nWARNING: port %u is on remote NUMA node\n", + port); + printf("to polling thread.\n"); + printf("Performance will not be optimal.\n"); + } + + printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n", + rte_lcore_id()); + + /* Run until the application is quit or killed. */ + for (;;) { + /* + * Receive packets on a port, classify them and forward them + * on the paired port. + * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc. + */ + for (port = 0; port < nb_ports; port++) { + + /* Get burst of RX packets, from first port of pair. */ + struct rte_mbuf *bufs[BURST_SIZE]; + const uint16_t nb_rx = rte_eth_rx_burst(port, 0, + bufs, BURST_SIZE); + + if (unlikely(nb_rx == 0)) + continue; + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (udp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + udp_flow_classify[i], + bufs, nb_rx, + &udp_classify_stats, &error); + if (ret) + printf( + "udp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "udp rule [%d] counter1=%lu used_space=%d\n\n", + i, udp_ntuple_stats.counter1, + udp_classify_stats.used_space); + } + } + + for (i = 0; i < MAX_NUM_CLASSIFY; i++) { + if (tcp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + tcp_flow_classify[i], + bufs, nb_rx, + &tcp_classify_stats, &error); + if (ret) + printf( + "tcp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "tcp rule [%d] counter1=%lu used_space=%d\n\n", + i, tcp_ntuple_stats.counter1, + tcp_classify_stats.used_space); + } + } + + for (i = 
0; i < MAX_NUM_CLASSIFY; i++) { + if (sctp_flow_classify[i]) { + ret = rte_flow_classify_query( + table_acl, + sctp_flow_classify[i], + bufs, nb_rx, + &sctp_classify_stats, &error); + if (ret) + printf( + "sctp flow classify[%d] query failed port=%u\n\n", + i, port); + else + printf( + "sctp rule [%d] counter1=%lu used_space=%d\n\n", + i, sctp_ntuple_stats.counter1, + sctp_classify_stats.used_space); + } + } + + /* Send burst of TX packets, to second port of pair. */ + const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0, + bufs, nb_rx); + + /* Free any unsent packets. */ + if (unlikely(nb_tx < nb_rx)) { + uint16_t buf; + + for (buf = nb_tx; buf < nb_rx; buf++) + rte_pktmbuf_free(bufs[buf]); + } + } + } +} + +/* + * The main function, which does initialization and calls the per-lcore + * functions. + */ +int +main(int argc, char *argv[]) +{ + struct rte_mempool *mbuf_pool; + struct rte_flow_error error; + uint8_t nb_ports; + uint8_t portid; + int ret; + int udp_num_classify = 0; + int tcp_num_classify = 0; + int sctp_num_classify = 0; + int socket_id; + struct rte_table_acl_params table_acl_params; + + /* Initialize the Environment Abstraction Layer (EAL). */ + ret = rte_eal_init(argc, argv); + if (ret < 0) + rte_exit(EXIT_FAILURE, "Error with EAL initialization\n"); + + argc -= ret; + argv += ret; + + /* Check that there is an even number of ports to send/receive on. */ + nb_ports = rte_eth_dev_count(); + if (nb_ports < 2 || (nb_ports & 1)) + rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); + + /* Creates a new mempool in memory to hold the mbufs. */ + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()); + + if (mbuf_pool == NULL) + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); + + /* Initialize all ports. 
*/ + for (portid = 0; portid < nb_ports; portid++) + if (port_init(portid, mbuf_pool) != 0) + rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n", + portid); + + if (rte_lcore_count() > 1) + printf("\nWARNING: Too many lcores enabled. Only 1 used.\n"); + + socket_id = rte_eth_dev_socket_id(0); + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs))); + if (table_acl == NULL) + return -1; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, "udp_1 flow classify validate failed\n"); + + udp_flow_classify[udp_num_classify] = rte_flow_classify_create( + table_acl, &attr, pattern_udp_1, actions, &error); + if (udp_flow_classify[udp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "udp_1 flow classify create failed\n"); + udp_num_classify++; + + attr.ingress = 1; + attr.priority = 2; + pattern_udp_2[0] = eth_item; + pattern_udp_2[1] = ipv4_udp_item_2; + pattern_udp_2[2] = udp_item_2; + pattern_udp_2[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_udp_2, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, "udp_2 flow classify validate failed\n"); + + udp_flow_classify[udp_num_classify] = rte_flow_classify_create( + table_acl, &attr, pattern_udp_2, actions, &error); + if 
(udp_flow_classify[udp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "udp_2 flow classify create failed\n"); + udp_num_classify++; + + attr.ingress = 1; + attr.priority = 3; + pattern_tcp_1[0] = eth_item; + pattern_tcp_1[1] = ipv4_tcp_item_1; + pattern_tcp_1[2] = tcp_item_1; + pattern_tcp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_tcp_1, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, "tcp_1 flow classify validate failed\n"); + + tcp_flow_classify[tcp_num_classify] = rte_flow_classify_create( + table_acl, &attr, pattern_tcp_1, actions, &error); + if (tcp_flow_classify[tcp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "tcp_1 flow classify create failed\n"); + tcp_num_classify++; + + attr.ingress = 1; + attr.priority = 4; + pattern_tcp_2[0] = eth_item; + pattern_tcp_2[1] = ipv4_tcp_item_2; + pattern_tcp_2[2] = tcp_item_2; + pattern_tcp_2[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_tcp_2, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, "tcp_2 flow classify validate failed\n"); + + tcp_flow_classify[tcp_num_classify] = rte_flow_classify_create( + table_acl, &attr, pattern_tcp_2, actions, &error); + if (tcp_flow_classify[tcp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "tcp_2 flow classify create failed\n"); + tcp_num_classify++; + + attr.ingress = 1; + attr.priority = 5; + pattern_sctp_1[0] = eth_item; + pattern_sctp_1[1] = ipv4_sctp_item_1; + pattern_sctp_1[2] = sctp_item_1; + pattern_sctp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, pattern_sctp_1, + actions, &error); + if (ret) + rte_exit(EXIT_FAILURE, + "sctp_1 flow classify validate failed\n"); + + sctp_flow_classify[sctp_num_classify] = rte_flow_classify_create( + table_acl, &attr, pattern_sctp_1, actions, &error); + if 
(sctp_flow_classify[sctp_num_classify] == NULL) + rte_exit(EXIT_FAILURE, "sctp_1 flow classify create failed\n"); + sctp_num_classify++; + + ret = rte_flow_classify_destroy(table_acl, sctp_flow_classify[0], + &error); + if (ret) + rte_exit(EXIT_FAILURE, + "sctp_1 flow classify destroy failed\n"); + else { + sctp_num_classify--; + sctp_flow_classify[0] = NULL; + } + /* Call lcore_main on the master core only. */ + lcore_main(); + + return 0; +} -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
* [dpdk-dev] [PATCH v1 6/6] test: flow classify library unit tests 2017-05-25 15:46 ` [dpdk-dev] [RFC v3] " Ferruh Yigit ` (6 preceding siblings ...) 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 5/6] examples/flow_classify: flow classify sample application Bernard Iremonger @ 2017-08-23 13:51 ` Bernard Iremonger 7 siblings, 0 replies; 145+ messages in thread From: Bernard Iremonger @ 2017-08-23 13:51 UTC (permalink / raw) To: dev, ferruh.yigit, konstantin.ananyev, cristian.dumitrescu, adrien.mazarguil Cc: Bernard Iremonger Add flow_classify_autotest program. Set up IPv4 ACL field definitions. Create table_acl for use by the librte_flow_classify APIs. Create an mbuf pool for use by rte_flow_classify_query. For each of the librte_flow_classify APIs: add bad parameter tests add bad pattern tests add bad action tests add good parameter tests Initialise IPv4 UDP traffic for use by the rte_flow_classify_query test. Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> --- test/test/Makefile | 1 + test/test/test_flow_classify.c | 487 +++++++++++++++++++++++++++++++++++++++++ test/test/test_flow_classify.h | 184 ++++++++++++++++ 3 files changed, 672 insertions(+) create mode 100644 test/test/test_flow_classify.c create mode 100644 test/test/test_flow_classify.h diff --git a/test/test/Makefile b/test/test/Makefile index 42d9a49..073e1ed 100644 --- a/test/test/Makefile +++ b/test/test/Makefile @@ -106,6 +106,7 @@ SRCS-y += test_table_tables.c SRCS-y += test_table_ports.c SRCS-y += test_table_combined.c SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_table_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ACL) += test_flow_classify.c endif SRCS-y += test_rwlock.c diff --git a/test/test/test_flow_classify.c b/test/test/test_flow_classify.c new file mode 100644 index 0000000..1921821 --- /dev/null +++ b/test/test/test_flow_classify.c @@ -0,0 +1,487 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include <string.h> +#include <errno.h> + +#include "test.h" + +#include <rte_string_fns.h> +#include <rte_mbuf.h> +#include <rte_byteorder.h> +#include <rte_ip.h> +#include <rte_acl.h> +#include <rte_common.h> +#include <rte_table_acl.h> +#include <rte_flow.h> +#include <rte_flow_classify.h> + +#include "packet_burst_generator.h" +#include "test_flow_classify.h" + + +#define FLOW_CLASSIFY_MAX_RULE_NUM 100 +static void *table_acl; + +/* + * test functions by passing invalid or + * non-workable parameters. + */ +static int +test_invalid_parameters(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *classify; + int ret; + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_validate with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + classify = rte_flow_classify_create(NULL, NULL, NULL, NULL, NULL); + if (classify) { + printf("Line %i: flow_classify_create with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_destroy with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, NULL); + if (!ret) { + printf("Line %i: flow_classify_query with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_validate(NULL, NULL, NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_validate with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + classify = rte_flow_classify_create(NULL, NULL, NULL, NULL, &error); + if (classify) { + printf("Line %i: flow_classify_create with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(NULL, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy with NULL param " + "should have failed!\n", __LINE__); + 
return -1; + } + + ret = rte_flow_classify_query(NULL, NULL, NULL, 0, NULL, &error); + if (!ret) { + printf("Line %i: flow_classify_query with NULL param " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_valid_parameters(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate " + "should not have failed!\n", __LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!flow_classify) { + printf("Line %i: flow_classify_create " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy " + "should not have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_invalid_patterns(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item_bad; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + pattern_udp_1[0] = 
eth_item; + pattern_udp_1[1] = ipv4_udp_item_bad; + flow_classify = rte_flow_classify_create(table_acl, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item_bad; + flow_classify = rte_flow_classify_create(table_acl, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_invalid_actions(void) +{ + struct rte_flow_classify *flow_classify; + int ret; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action_bad; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + 
printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + + actions[0] = count_action; + actions[1] = end_action_bad; + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!ret) { + printf("Line %i: flow_classify_validate " + "should have failed!\n", __LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, &attr, + pattern_udp_1, actions, &error); + if (flow_classify) { + printf("Line %i: flow_classify_create " + "should have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (!ret) { + printf("Line %i: flow_classify_destroy " + "should have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +init_udp_ipv4_traffic(struct rte_mempool *mp, + struct rte_mbuf **pkts_burst, uint32_t burst_size) +{ + struct ether_hdr pkt_eth_hdr; + struct ipv4_hdr pkt_ipv4_hdr; + struct udp_hdr pkt_udp_hdr; + uint32_t src_addr = IPV4_ADDR(2, 2, 2, 3); + uint32_t dst_addr = IPV4_ADDR(2, 2, 2, 7); + uint16_t src_port = 32; + uint16_t dst_port = 33; + uint16_t pktlen; + + static uint8_t src_mac[] = { 0x00, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF }; + static uint8_t dst_mac[] = { 0x00, 0xAA, 0xFF, 0xAA, 0xFF, 0xAA }; + + initialize_eth_header(&pkt_eth_hdr, + (struct ether_addr *)src_mac, + (struct ether_addr *)dst_mac, ETHER_TYPE_IPv4, 0, 0); + pktlen = (uint16_t)(sizeof(struct ether_hdr)); + printf("ETH pktlen %u\n", pktlen); + + pktlen = initialize_ipv4_header(&pkt_ipv4_hdr, src_addr, dst_addr, + pktlen); + printf("ETH + IPv4 pktlen %u\n", pktlen); + + pktlen = initialize_udp_header(&pkt_udp_hdr, src_port, dst_port, + pktlen); + printf("ETH + IPv4 + UDP pktlen %u\n", pktlen); + + return generate_packet_burst(mp, pkts_burst, 
&pkt_eth_hdr, + 0, &pkt_ipv4_hdr, 1, + &pkt_udp_hdr, burst_size, + PACKET_BURST_GEN_PKT_LEN, 1); +} + +static int +init_mbufpool(void) +{ + int socketid; + unsigned int lcore_id; + char s[64]; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + if (rte_lcore_is_enabled(lcore_id) == 0) + continue; + + socketid = rte_lcore_to_socket_id(lcore_id); + if (socketid >= NB_SOCKETS) { + rte_exit(EXIT_FAILURE, + "Socket %d of lcore %u is out of range %d\n", + socketid, lcore_id, NB_SOCKETS); + } + if (mbufpool[socketid] == NULL) { + snprintf(s, sizeof(s), "mbuf_pool_%d", socketid); + mbufpool[socketid] = + rte_pktmbuf_pool_create(s, NB_MBUF, + MEMPOOL_CACHE_SIZE, 0, MBUF_SIZE, + socketid); + if (mbufpool[socketid] == NULL) { + printf("Cannot init mbuf pool on socket %d\n", + socketid); + return -ENOMEM; + } else + printf("Allocated mbuf pool on socket %d\n", + socketid); + } + } + return 0; +} + +static int +test_query_udp(void) +{ + struct rte_flow_error error; + struct rte_flow_classify *flow_classify; + int ret; + int i; + + ret = init_udp_ipv4_traffic(mbufpool[0], bufs, MAX_PKT_BURST); + if (ret != MAX_PKT_BURST) { + printf("Line %i: init_udp_ipv4_traffic has failed!\n", + __LINE__); + return -1; + } + + for (i = 0; i < MAX_PKT_BURST; i++) + bufs[i]->packet_type = RTE_PTYPE_L3_IPV4; + + /* set up parameters for rte_flow_classify_validate and + * rte_flow_classify_create and rte_flow_classify_destroy + */ + + attr.ingress = 1; + attr.priority = 1; + pattern_udp_1[0] = eth_item; + pattern_udp_1[1] = ipv4_udp_item_1; + pattern_udp_1[2] = udp_item_1; + pattern_udp_1[3] = end_item; + actions[0] = count_action; + actions[1] = end_action; + + ret = rte_flow_classify_validate(table_acl, &attr, + pattern_udp_1, actions, &error); + if (ret) { + printf("Line %i: flow_classify_validate " + "should not have failed!\n", __LINE__); + return -1; + } + + flow_classify = rte_flow_classify_create(table_acl, &attr, + pattern_udp_1, actions, &error); + if (!flow_classify) { + 
printf("Line %i: flow_classify_create " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_query(table_acl, flow_classify, bufs, + MAX_PKT_BURST, &udp_classify_stats, &error); + if (ret) { + printf("Line %i: flow_classify_query " + "should not have failed!\n", __LINE__); + return -1; + } + + ret = rte_flow_classify_destroy(table_acl, flow_classify, &error); + if (ret) { + printf("Line %i: flow_classify_destroy " + "should not have failed!\n", __LINE__); + return -1; + } + return 0; +} + +static int +test_flow_classify(void) +{ + struct rte_table_acl_params table_acl_params; + int socket_id = 0; + int ret; + + /* initialise ACL table params */ + table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs); + table_acl_params.name = "table_acl_ipv4_5tuple"; + table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM; + memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs)); + + table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id, + RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs))); + if (table_acl == NULL) { + printf("Line %i: f_create has failed!\n", __LINE__); + return -1; + } + printf("Created table_acl for IPv4 5tuple packets\n"); + + ret = init_mbufpool(); + if (ret) { + printf("Line %i: init_mbufpool has failed!\n", __LINE__); + return -1; + } + + if (test_invalid_parameters() < 0) + return -1; + if (test_valid_parameters() < 0) + return -1; + if (test_invalid_patterns() < 0) + return -1; + if (test_invalid_actions() < 0) + return -1; + if (test_query_udp() < 0) + return -1; + + return 0; +} + +REGISTER_TEST_COMMAND(flow_classify_autotest, test_flow_classify); diff --git a/test/test/test_flow_classify.h b/test/test/test_flow_classify.h new file mode 100644 index 0000000..180d36e --- /dev/null +++ b/test/test/test_flow_classify.h @@ -0,0 +1,184 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2017 Intel Corporation. All rights reserved. + * All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#ifndef TEST_FLOW_CLASSIFY_H_ +#define TEST_FLOW_CLASSIFY_H_ + +#define MAX_PKT_BURST (32) +#define NB_SOCKETS (1) +#define MEMPOOL_CACHE_SIZE (256) +#define MBUF_SIZE (512) +#define NB_MBUF (512) + +/* test UDP packets */ +static struct rte_mempool *mbufpool[NB_SOCKETS]; +static struct rte_mbuf *bufs[MAX_PKT_BURST]; + +/* ACL field definitions for IPv4 5 tuple rule */ + +enum { + PROTO_FIELD_IPV4, + SRC_FIELD_IPV4, + DST_FIELD_IPV4, + SRCP_FIELD_IPV4, + DSTP_FIELD_IPV4, + NUM_FIELDS_IPV4 +}; + +enum { + PROTO_INPUT_IPV4, + SRC_INPUT_IPV4, + DST_INPUT_IPV4, + SRCP_DESTP_INPUT_IPV4 +}; + +static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = { + /* first input field - always one byte long. */ + { + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint8_t), + .field_index = PROTO_FIELD_IPV4, + .input_index = PROTO_INPUT_IPV4, + .offset = 0, + }, + /* next input field (IPv4 source address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = SRC_FIELD_IPV4, + .input_index = SRC_INPUT_IPV4, + .offset = offsetof(struct ipv4_hdr, src_addr) - + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* next input field (IPv4 destination address) - 4 consecutive bytes. */ + { + /* rte_flow uses a bit mask for IPv4 addresses */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint32_t), + .field_index = DST_FIELD_IPV4, + .input_index = DST_INPUT_IPV4, + .offset = offsetof(struct ipv4_hdr, dst_addr) - + offsetof(struct ipv4_hdr, next_proto_id), + }, + /* + * Next 2 fields (src & dst ports) form 4 consecutive bytes. + * They share the same input index. 
+ */ + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = SRCP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ipv4_hdr) - + offsetof(struct ipv4_hdr, next_proto_id), + }, + { + /* rte_flow uses a bit mask for protocol ports */ + .type = RTE_ACL_FIELD_TYPE_BITMASK, + .size = sizeof(uint16_t), + .field_index = DSTP_FIELD_IPV4, + .input_index = SRCP_DESTP_INPUT_IPV4, + .offset = sizeof(struct ipv4_hdr) - + offsetof(struct ipv4_hdr, next_proto_id) + + sizeof(uint16_t), + }, +}; + +/* parameters for rte_flow_classify_validate and rte_flow_classify_create */ + +/* first sample UDP pattern: + * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.00 dst spec 2.2.2.7 + * dst mask 255.255.255.00 / udp src is 32 dst is 33 / end" + */ +static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = { + { 0, 0, 0, 0, 0, 0, 17, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)} +}; +static const struct rte_flow_item_ipv4 ipv4_mask_24 = { + .hdr = { + .next_proto_id = 0xff, + .src_addr = 0xffffff00, + .dst_addr = 0xffffff00, + }, +}; +static struct rte_flow_item_udp udp_spec_1 = { + { 32, 33, 0, 0 } +}; + +static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH, + 0, 0, 0 }; +static struct rte_flow_item eth_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4, + &ipv4_udp_spec_1, 0, &ipv4_mask_24}; +static struct rte_flow_item ipv4_udp_item_bad = { RTE_FLOW_ITEM_TYPE_IPV4, + NULL, 0, NULL}; + +static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP, + &udp_spec_1, 0, &rte_flow_item_udp_mask}; +static struct rte_flow_item udp_item_bad = { RTE_FLOW_ITEM_TYPE_UDP, + NULL, 0, NULL}; + +static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END, + 0, 0, 0 }; +static struct rte_flow_item end_item_bad = { -1, 0, 0, 0 }; + +static struct rte_flow_item pattern_udp_1[4]; + +/* sample actions: + * "actions count / end" + */ 
+static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0}; +static struct rte_flow_action count_action_bad = { -1, 0}; + +static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0}; +static struct rte_flow_action end_action_bad = { -1, 0}; + +static struct rte_flow_action actions[2]; + +/* sample attributes */ +static struct rte_flow_attr attr; + +/* sample error */ +static struct rte_flow_error error; + +/* flow classify data for UDP burst */ +static struct rte_flow_classify_5tuple_stats udp_ntuple_stats; +static struct rte_flow_classify_stats udp_classify_stats = { + .available_space = MAX_PKT_BURST, + .used_space = 0, + .stats = (void **)&udp_ntuple_stats +}; + +#endif /* TEST_FLOW_CLASSIFY_H_ */ -- 1.9.1 ^ permalink raw reply [flat|nested] 145+ messages in thread
2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 3/5] librte_flow_classify: add librte_flow_classify library Bernard Iremonger 2017-08-31 15:18 ` Pavan Nikhilesh Bhagavatula 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 4/5] examples/flow_classify: flow classify sample application Bernard Iremonger 2017-08-31 14:54 ` [dpdk-dev] [PATCH v3 5/5] test: flow classify library unit tests Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 1/6] librte_table: fix acl entry add and delete functions Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 2/6] librte_table: fix acl lookup function Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow Bernard Iremonger 2017-08-30 12:39 ` Adrien Mazarguil 2017-08-30 13:28 ` Iremonger, Bernard 2017-08-30 14:39 ` Adrien Mazarguil 2017-08-30 15:12 ` Iremonger, Bernard 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 4/6] librte_flow_classify: add librte_flow_classify library Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 5/6] examples/flow_classify: flow classify sample application Bernard Iremonger 2017-08-25 16:10 ` [dpdk-dev] [PATCH v2 6/6] test: flow classify library unit tests Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 1/6] librte_table: move structure to header file Bernard Iremonger 2017-08-23 14:13 ` Dumitrescu, Cristian 2017-08-23 14:32 ` Iremonger, Bernard 2017-08-28 8:48 ` Iremonger, Bernard 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 2/6] librte_table: fix acl entry add and delete functions Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 3/6] librte_ether: initialise IPv4 protocol mask for rte_flow Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 4/6] librte_flow_classify: add librte_flow_classify library Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 5/6] examples/flow_classify: flow classify sample application Bernard Iremonger 2017-08-23 13:51 ` [dpdk-dev] [PATCH v1 6/6] test: flow classify library unit tests Bernard 
Iremonger