From: Yongseok Koh <yskoh@mellanox.com>
To: Slava Ovsiienko <viacheslavo@mellanox.com>
Cc: Shahaf Shuler <shahafs@mellanox.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v3 06/13] net/mlx5: add e-switch VXLAN support to validation routine
Date: Thu, 1 Nov 2018 20:49:12 +0000
Message-ID: <20181101204905.GG6118@mtidpdk.mti.labs.mlnx>
In-Reply-To: <1541074741-41368-7-git-send-email-viacheslavo@mellanox.com>
On Thu, Nov 01, 2018 at 05:19:27AM -0700, Slava Ovsiienko wrote:
> This patch adds VXLAN support for flow item/action lists validation.
> The following entities are now supported:
>
> - RTE_FLOW_ITEM_TYPE_VXLAN, contains the tunnel VNI
>
> - RTE_FLOW_ACTION_TYPE_VXLAN_DECAP, if this action is specified,
> the items in the flow item list are treated as outer network
> parameters for tunnel outer header match. The Ethernet layer
> addresses are always treated as inner ones.
>
> - RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, contains the item list used to
> build the encapsulation header. In the current implementation the
> values are subject to some constraints:
> - the outer source MAC address is always unconditionally
> set to one of the MAC addresses of the outer egress interface
> - there is no way to specify the source UDP port
> - the above-mentioned parameters are ignored if specified
> in the rule, and warning messages are emitted to the log
>
> Minimal tunneling support is also added. If the VXLAN decapsulation
> action is specified, an ETH item can follow the VXLAN VNI item;
> the content of this ETH item is treated as the inner MAC addresses
> and type. The outer ETH item for the VXLAN decapsulation action
> is always ignored.
>
> Suggested-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
Overall, it looks good, but please make some cosmetic changes; refer to my
comments below. When you send out v4 with the changes, please add my Acked-by
tag.
> drivers/net/mlx5/mlx5_flow_tcf.c | 741 ++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 739 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
> index 50f3bd1..7e00232 100644
> --- a/drivers/net/mlx5/mlx5_flow_tcf.c
> +++ b/drivers/net/mlx5/mlx5_flow_tcf.c
> @@ -1116,6 +1116,633 @@ struct pedit_parser {
> }
>
> /**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_ETH item for E-Switch.
> + * The routine checks the L2 fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + * Pointer to the item structure.
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_eth(const struct rte_flow_item *item,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item_eth *spec = item->spec;
> + const struct rte_flow_item_eth *mask = item->mask;
> +
> + if (!spec)
> + /*
> + * Specification for L2 addresses can be empty
> + * because they are optional and not required
> + * directly by the tc rule. The kernel tries
> + * to resolve them on its own.
> + */
> + return 0;
Even if it is one line of code, let's use braces {} because it is multiple
lines with a comment. Without braces, it could cause a bug if more lines are
added later, because people would have the wrong impression that the braces
are already there. Please also fix a few more occurrences below.
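i.e., something like this (a compile-checked sketch with a stubbed spec
pointer, not the real rte_flow types):

```c
#include <assert.h>
#include <stddef.h>

/* Stub standing in for the real item->spec NULL check. */
static int
validate_l2_spec_stub(const void *spec)
{
	if (!spec) {
		/*
		 * Specification for L2 addresses can be empty
		 * because they are optional and not required
		 * directly by the tc rule; braces make the
		 * multi-line body explicit.
		 */
		return 0;
	}
	return 1;
}
```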
> + if (!mask)
> + /* If mask is not specified use the default one. */
> + mask = &rte_flow_item_eth_mask;
> + if (memcmp(&mask->dst,
> + &flow_tcf_mask_empty.eth.dst,
> + sizeof(flow_tcf_mask_empty.eth.dst))) {
> + if (memcmp(&mask->dst,
> + &rte_flow_item_eth_mask.dst,
> + sizeof(rte_flow_item_eth_mask.dst)))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"eth.dst\" field");
The following would be better,
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM_MASK,
mask,
"no support for partial mask"
" on \"eth.dst\" field");
But this one is also acceptable (to minimize your correction effort :-)
return rte_flow_error_set
(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
"no support for partial mask on"
" \"eth.dst\" field");
Please make the same changes for the entire patch set.
Thanks,
Yongseok
> + }
> + if (memcmp(&mask->src,
> + &flow_tcf_mask_empty.eth.src,
> + sizeof(flow_tcf_mask_empty.eth.src))) {
> + if (memcmp(&mask->src,
> + &rte_flow_item_eth_mask.src,
> + sizeof(rte_flow_item_eth_mask.src)))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"eth.src\" field");
> + }
> + if (mask->type != RTE_BE16(0x0000)) {
> + if (mask->type != RTE_BE16(0xffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"eth.type\" field");
> + DRV_LOG(WARNING,
> + "outer ethernet type field"
> + " cannot be forced for vxlan"
> + " encapsulation, parameter ignored");
> + }
> + return 0;
> +}
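As a side note, the empty/full/partial mask pattern these checks repeat can
be sketched standalone like this (hypothetical classify_mask helper with a
placeholder 16-byte cap, not part of the patch):

```c
#include <stdint.h>
#include <string.h>

enum mask_kind { MASK_EMPTY, MASK_FULL, MASK_PARTIAL };

/*
 * Classify a mask field the way the checks above do: memcmp()
 * against an all-zeros reference (field ignored) and an all-ones
 * reference (exact match); anything else is an unsupported
 * partial mask. Assumes len <= 16.
 */
static enum mask_kind
classify_mask(const uint8_t *mask, size_t len)
{
	static const uint8_t zeros[16];
	static const uint8_t ones[16] = {
		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
	};

	if (!memcmp(mask, zeros, len))
		return MASK_EMPTY;
	if (!memcmp(mask, ones, len))
		return MASK_FULL;
	return MASK_PARTIAL;
}
```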
> +
> +/**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_IPV4 item for E-Switch.
> + * The routine checks the IPv4 fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + * Pointer to the item structure.
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_ipv4(const struct rte_flow_item *item,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item_ipv4 *spec = item->spec;
> + const struct rte_flow_item_ipv4 *mask = item->mask;
> +
> + if (!spec)
> + /*
> + * Specification for IP addresses cannot be empty
> + * because it is required by tunnel_key parameter.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "NULL outer ipv4 address specification"
> + " for vxlan encapsulation");
> + if (!mask)
> + mask = &rte_flow_item_ipv4_mask;
> + if (mask->hdr.dst_addr != RTE_BE32(0x00000000)) {
> + if (mask->hdr.dst_addr != RTE_BE32(0xffffffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"ipv4.hdr.dst_addr\" field"
> + " for vxlan encapsulation");
> + /* More IPv4 address validations can be put here. */
> + } else {
> + /*
> + * Kernel uses the destination IP address to determine
> + * the routing path and obtain the MAC destination
> + * address, so IP destination address must be
> + * specified in the tc rule.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "outer ipv4 destination address must be"
> + " specified for vxlan encapsulation");
> + }
> + if (mask->hdr.src_addr != RTE_BE32(0x00000000)) {
> + if (mask->hdr.src_addr != RTE_BE32(0xffffffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"ipv4.hdr.src_addr\" field"
> + " for vxlan encapsulation");
> + /* More IPv4 address validations can be put here. */
> + } else {
> + /*
> + * Kernel uses the source IP address to select the
> + * interface for egress encapsulated traffic, so
> + * it must be specified in the tc rule.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "outer ipv4 source address must be"
> + " specified for vxlan encapsulation");
> + }
> + return 0;
> +}
> +
> +/**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_IPV6 item for E-Switch.
> + * The routine checks the IPv6 fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + * Pointer to the item structure.
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_ipv6(const struct rte_flow_item *item,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item_ipv6 *spec = item->spec;
> + const struct rte_flow_item_ipv6 *mask = item->mask;
> +
> + if (!spec)
> + /*
> + * Specification for IP addresses cannot be empty
> + * because it is required by tunnel_key parameter.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "NULL outer ipv6 address specification"
> + " for vxlan encapsulation");
> + if (!mask)
> + mask = &rte_flow_item_ipv6_mask;
> + if (memcmp(&mask->hdr.dst_addr,
> + &flow_tcf_mask_empty.ipv6.hdr.dst_addr,
> + IPV6_ADDR_LEN)) {
> + if (memcmp(&mask->hdr.dst_addr,
> + &rte_flow_item_ipv6_mask.hdr.dst_addr,
> + IPV6_ADDR_LEN))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"ipv6.hdr.dst_addr\" field"
> + " for vxlan encapsulation");
> + /* More IPv6 address validations can be put here. */
> + } else {
> + /*
> + * Kernel uses the destination IP address to determine
> + * the routing path and obtain the MAC destination
> + * address (neighbour or gateway), so IP destination address
> + * must be specified within the tc rule.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "outer ipv6 destination address must be"
> + " specified for vxlan encapsulation");
> + }
> + if (memcmp(&mask->hdr.src_addr,
> + &flow_tcf_mask_empty.ipv6.hdr.src_addr,
> + IPV6_ADDR_LEN)) {
> + if (memcmp(&mask->hdr.src_addr,
> + &rte_flow_item_ipv6_mask.hdr.src_addr,
> + IPV6_ADDR_LEN))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"ipv6.hdr.src_addr\" field"
> + " for vxlan encapsulation");
> + /* More L3 address validation can be put here. */
> + } else {
> + /*
> + * Kernel uses the source IP address to select the
> + * interface for egress encapsulated traffic, so
> + * it must be specified in the tc rule.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "outer L3 source address must be"
> + " specified for vxlan encapsulation");
> + }
> + return 0;
> +}
> +
> +/**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_UDP item for E-Switch.
> + * The routine checks the UDP fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + * Pointer to the item structure.
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_udp(const struct rte_flow_item *item,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item_udp *spec = item->spec;
> + const struct rte_flow_item_udp *mask = item->mask;
> +
> + if (!spec)
> + /*
> + * Specification for UDP ports cannot be empty
> + * because it is required by tunnel_key parameter.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "NULL UDP port specification"
> + " for vxlan encapsulation");
> + if (!mask)
> + mask = &rte_flow_item_udp_mask;
> + if (mask->hdr.dst_port != RTE_BE16(0x0000)) {
> + if (mask->hdr.dst_port != RTE_BE16(0xffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"udp.hdr.dst_port\" field"
> + " for vxlan encapsulation");
> + if (!spec->hdr.dst_port)
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "outer UDP remote port cannot be"
> + " 0 for vxlan encapsulation");
> + } else {
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "outer UDP remote port must be"
> + " specified for vxlan encapsulation");
> + }
> + if (mask->hdr.src_port != RTE_BE16(0x0000)) {
> + if (mask->hdr.src_port != RTE_BE16(0xffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"udp.hdr.src_port\" field"
> + " for vxlan encapsulation");
> + DRV_LOG(WARNING,
> + "outer UDP source port cannot be"
> + " forced for vxlan encapsulation,"
> + " parameter ignored");
> + }
> + return 0;
> +}
> +
> +/**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_VXLAN item for E-Switch.
> + * The routine checks the VNI field to be used in the encapsulation header.
> + *
> + * @param[in] item
> + * Pointer to the item structure.
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_vni(const struct rte_flow_item *item,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item_vxlan *spec = item->spec;
> + const struct rte_flow_item_vxlan *mask = item->mask;
> +
> + if (!spec)
> + /* Outer VNI is required by tunnel_key parameter. */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "NULL VNI specification"
> + " for vxlan encapsulation");
> + if (!mask)
> + mask = &rte_flow_item_vxlan_mask;
> + if (!mask->vni[0] && !mask->vni[1] && !mask->vni[2])
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "outer VNI must be specified "
> + "for vxlan encapsulation");
> + if (mask->vni[0] != 0xff ||
> + mask->vni[1] != 0xff ||
> + mask->vni[2] != 0xff)
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"vxlan.vni\" field");
> +
> + if (!spec->vni[0] && !spec->vni[1] && !spec->vni[2])
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, item,
> + "vxlan vni cannot be 0");
> + return 0;
> +}
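The VNI rules above boil down to three byte tests; a standalone sketch
(hypothetical check_vni helper, return codes are placeholders):

```c
#include <stdint.h>

/*
 * Mirror of the VNI checks above: the 24-bit VNI is stored as
 * three bytes; the mask must cover all of them and the value
 * itself must not be zero. Returns 0 if acceptable, a negative
 * placeholder code otherwise.
 */
static int
check_vni(const uint8_t vni[3], const uint8_t mask[3])
{
	if (!mask[0] && !mask[1] && !mask[2])
		return -1; /* VNI must be specified. */
	if (mask[0] != 0xff || mask[1] != 0xff || mask[2] != 0xff)
		return -2; /* No partial mask support. */
	if (!vni[0] && !vni[1] && !vni[2])
		return -3; /* VNI cannot be 0. */
	return 0;
}
```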
> +
> +/**
> + * Validate VXLAN_ENCAP action item list for E-Switch.
> + * The routine checks items to be used in encapsulation header.
> + *
> + * @param[in] action
> + * Pointer to the VXLAN_ENCAP action structure.
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap(const struct rte_flow_action *action,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item *items;
> + int ret;
> + uint32_t item_flags = 0;
> +
> + if (!action->conf)
> + return rte_flow_error_set
> + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> + action, "Missing vxlan tunnel"
> + " action configuration");
> + items = ((const struct rte_flow_action_vxlan_encap *)
> + action->conf)->definition;
> + if (!items)
> + return rte_flow_error_set
> + (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> + action, "Missing vxlan tunnel"
> + " encapsulation parameters");
> + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> + switch (items->type) {
> + case RTE_FLOW_ITEM_TYPE_VOID:
> + break;
> + case RTE_FLOW_ITEM_TYPE_ETH:
> + ret = mlx5_flow_validate_item_eth(items, item_flags,
> + error);
> + if (ret < 0)
> + return ret;
> + ret = flow_tcf_validate_vxlan_encap_eth(items, error);
> + if (ret < 0)
> + return ret;
> + item_flags |= MLX5_FLOW_LAYER_OUTER_L2;
> + break;
> + case RTE_FLOW_ITEM_TYPE_IPV4:
> + ret = mlx5_flow_validate_item_ipv4(items, item_flags,
> + error);
> + if (ret < 0)
> + return ret;
> + ret = flow_tcf_validate_vxlan_encap_ipv4(items, error);
> + if (ret < 0)
> + return ret;
> + item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> + break;
> + case RTE_FLOW_ITEM_TYPE_IPV6:
> + ret = mlx5_flow_validate_item_ipv6(items, item_flags,
> + error);
> + if (ret < 0)
> + return ret;
> + ret = flow_tcf_validate_vxlan_encap_ipv6(items, error);
> + if (ret < 0)
> + return ret;
> + item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> + break;
> + case RTE_FLOW_ITEM_TYPE_UDP:
> + ret = mlx5_flow_validate_item_udp(items, item_flags,
> + 0xFF, error);
> + if (ret < 0)
> + return ret;
> + ret = flow_tcf_validate_vxlan_encap_udp(items, error);
> + if (ret < 0)
> + return ret;
> + item_flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
> + break;
> + case RTE_FLOW_ITEM_TYPE_VXLAN:
> + ret = mlx5_flow_validate_item_vxlan(items,
> + item_flags, error);
> + if (ret < 0)
> + return ret;
> + ret = flow_tcf_validate_vxlan_encap_vni(items, error);
> + if (ret < 0)
> + return ret;
> + item_flags |= MLX5_FLOW_LAYER_VXLAN;
> + break;
> + default:
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM, items,
> + "vxlan encap item not supported");
> + }
> + }
> + if (!(item_flags & MLX5_FLOW_LAYER_OUTER_L3))
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION, action,
> + "no outer IP layer found"
> + " for vxlan encapsulation");
> + if (!(item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP))
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION, action,
> + "no outer UDP layer found"
> + " for vxlan encapsulation");
> + if (!(item_flags & MLX5_FLOW_LAYER_VXLAN))
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION, action,
> + "no VXLAN VNI found"
> + " for vxlan encapsulation");
> + return 0;
> +}
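The final completeness checks are just bit tests on the accumulated
item_flags; sketched with placeholder flag values (not the real
MLX5_FLOW_LAYER_* constants):

```c
#include <stdint.h>

/* Illustrative layer flags; the values are placeholders. */
#define LAYER_OUTER_L3_IPV4 (1u << 0)
#define LAYER_OUTER_L3_IPV6 (1u << 1)
#define LAYER_OUTER_L4_UDP  (1u << 2)
#define LAYER_VXLAN         (1u << 3)
#define LAYER_OUTER_L3 (LAYER_OUTER_L3_IPV4 | LAYER_OUTER_L3_IPV6)

/*
 * After walking the encap item list and OR-ing one flag per
 * item, the completeness check requires an outer L3 (v4 or v6),
 * an outer UDP, and a VXLAN VNI item.
 */
static int
encap_items_complete(uint32_t item_flags)
{
	return (item_flags & LAYER_OUTER_L3) &&
	       (item_flags & LAYER_OUTER_L4_UDP) &&
	       (item_flags & LAYER_VXLAN);
}
```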
> +
> +/**
> + * Validate RTE_FLOW_ITEM_TYPE_IPV4 item if VXLAN_DECAP action
> + * is present in actions list.
> + *
> + * @param[in] ipv4
> + * Outer IPv4 address item (if any, NULL otherwise).
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_decap_ipv4(const struct rte_flow_item *ipv4,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item_ipv4 *spec = ipv4->spec;
> + const struct rte_flow_item_ipv4 *mask = ipv4->mask;
> +
> + if (!spec)
> + /*
> + * Specification for IP addresses cannot be empty
> + * because it is required as decap parameter.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, ipv4,
> + "NULL outer ipv4 address"
> + " specification for vxlan"
> + " decapsulation");
> + if (!mask)
> + mask = &rte_flow_item_ipv4_mask;
> + if (mask->hdr.dst_addr != RTE_BE32(0x00000000)) {
> + if (mask->hdr.dst_addr != RTE_BE32(0xffffffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"ipv4.hdr.dst_addr\" field");
> + /* More IP address validations can be put here. */
> + } else {
> + /*
> + * Kernel uses the destination IP address
> + * to determine the ingress network interface
> + * for traffic being decapsulated.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, ipv4,
> + "outer ipv4 destination address"
> + " must be specified for"
> + " vxlan decapsulation");
> + }
> + /* Source IP address is optional for decap. */
> + if (mask->hdr.src_addr != RTE_BE32(0x00000000) &&
> + mask->hdr.src_addr != RTE_BE32(0xffffffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"ipv4.hdr.src_addr\" field");
> + return 0;
> +}
> +
> +/**
> + * Validate RTE_FLOW_ITEM_TYPE_IPV6 item if VXLAN_DECAP action
> + * is present in actions list.
> + *
> + * @param[in] ipv6
> + * Outer IPv6 address item (if any, NULL otherwise).
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_decap_ipv6(const struct rte_flow_item *ipv6,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item_ipv6 *spec = ipv6->spec;
> + const struct rte_flow_item_ipv6 *mask = ipv6->mask;
> +
> + if (!spec)
> + /*
> + * Specification for IP addresses cannot be empty
> + * because it is required as decap parameter.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, ipv6,
> + "NULL outer ipv6 address"
> + " specification for vxlan"
> + " decapsulation");
> + if (!mask)
> + mask = &rte_flow_item_ipv6_mask;
> + if (memcmp(&mask->hdr.dst_addr,
> + &flow_tcf_mask_empty.ipv6.hdr.dst_addr,
> + IPV6_ADDR_LEN)) {
> + if (memcmp(&mask->hdr.dst_addr,
> + &rte_flow_item_ipv6_mask.hdr.dst_addr,
> + IPV6_ADDR_LEN))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"ipv6.hdr.dst_addr\" field");
> + /* More IP address validations can be put here. */
> + } else {
> + /*
> + * Kernel uses the destination IP address
> + * to determine the ingress network interface
> + * for traffic being decapsulated.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, ipv6,
> + "outer ipv6 destination address must be "
> + "specified for vxlan decapsulation");
> + }
> + /* Source IP address is optional for decap. */
> + if (memcmp(&mask->hdr.src_addr,
> + &flow_tcf_mask_empty.ipv6.hdr.src_addr,
> + IPV6_ADDR_LEN)) {
> + if (memcmp(&mask->hdr.src_addr,
> + &rte_flow_item_ipv6_mask.hdr.src_addr,
> + IPV6_ADDR_LEN))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"ipv6.hdr.src_addr\" field");
> + }
> + return 0;
> +}
> +
> +/**
> + * Validate RTE_FLOW_ITEM_TYPE_UDP item if VXLAN_DECAP action
> + * is present in actions list.
> + *
> + * @param[in] udp
> + * Outer UDP layer item (if any, NULL otherwise).
> + * @param[out] error
> + * Pointer to the error structure.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_decap_udp(const struct rte_flow_item *udp,
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_item_udp *spec = udp->spec;
> + const struct rte_flow_item_udp *mask = udp->mask;
> +
> + if (!spec)
> + /*
> + * Specification for UDP ports cannot be empty
> + * because it is required as decap parameter.
> + */
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, udp,
> + "NULL UDP port specification"
> + " for VXLAN decapsulation");
> + if (!mask)
> + mask = &rte_flow_item_udp_mask;
> + if (mask->hdr.dst_port != RTE_BE16(0x0000)) {
> + if (mask->hdr.dst_port != RTE_BE16(0xffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"udp.hdr.dst_port\" field");
> + if (!spec->hdr.dst_port)
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, udp,
> + "zero decap local UDP port");
> + } else {
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ITEM, udp,
> + "outer UDP destination port must be "
> + "specified for vxlan decapsulation");
> + }
> + if (mask->hdr.src_port != RTE_BE16(0x0000)) {
> + if (mask->hdr.src_port != RTE_BE16(0xffff))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> + "no support for partial mask on"
> + " \"udp.hdr.src_port\" field");
> + DRV_LOG(WARNING,
> + "outer UDP local port cannot be "
> + "forced for VXLAN decapsulation, "
> + "parameter ignored");
> + }
> + return 0;
> +}
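For what it's worth, the decap UDP port constraints can be summarized in a
tiny standalone helper (hypothetical, with placeholder return codes):

```c
#include <stdint.h>

/*
 * Mirror of the decap UDP checks above: the destination (local)
 * port must be fully masked and nonzero; the source port mask may
 * only be empty or full. Returns 0 if acceptable, a negative
 * placeholder code otherwise.
 */
static int
check_decap_udp_ports(uint16_t dst_port, uint16_t dst_mask,
		      uint16_t src_mask)
{
	if (dst_mask == 0x0000)
		return -1; /* Destination port must be specified. */
	if (dst_mask != 0xffff)
		return -2; /* No partial mask support. */
	if (dst_port == 0)
		return -3; /* Zero local UDP port is not allowed. */
	if (src_mask != 0x0000 && src_mask != 0xffff)
		return -4; /* No partial mask on source port. */
	return 0;
}
```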
> +
> +/**
> * Validate flow for E-Switch.
> *
> * @param[in] priv
> @@ -1147,6 +1774,7 @@ struct pedit_parser {
> const struct rte_flow_item_ipv6 *ipv6;
> const struct rte_flow_item_tcp *tcp;
> const struct rte_flow_item_udp *udp;
> + const struct rte_flow_item_vxlan *vxlan;
> } spec, mask;
> union {
> const struct rte_flow_action_port_id *port_id;
> @@ -1156,6 +1784,7 @@ struct pedit_parser {
> of_set_vlan_vid;
> const struct rte_flow_action_of_set_vlan_pcp *
> of_set_vlan_pcp;
> + const struct rte_flow_action_vxlan_encap *vxlan_encap;
> const struct rte_flow_action_set_ipv4 *set_ipv4;
> const struct rte_flow_action_set_ipv6 *set_ipv6;
> } conf;
> @@ -1242,6 +1871,15 @@ struct pedit_parser {
> " set action must follow push action");
> current_action_flag = MLX5_FLOW_ACTION_OF_SET_VLAN_PCP;
> break;
> + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
> + current_action_flag = MLX5_FLOW_ACTION_VXLAN_DECAP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
> + ret = flow_tcf_validate_vxlan_encap(actions, error);
> + if (ret < 0)
> + return ret;
> + current_action_flag = MLX5_FLOW_ACTION_VXLAN_ENCAP;
> + break;
> case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
> current_action_flag = MLX5_FLOW_ACTION_SET_IPV4_SRC;
> break;
> @@ -1302,11 +1940,32 @@ struct pedit_parser {
> actions,
> "can't have multiple fate"
> " actions");
> + if ((current_action_flag & MLX5_TCF_VXLAN_ACTIONS) &&
> + (action_flags & MLX5_TCF_VXLAN_ACTIONS))
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + actions,
> + "can't have multiple vxlan"
> + " actions");
> + if ((current_action_flag & MLX5_TCF_VXLAN_ACTIONS) &&
> + (action_flags & MLX5_TCF_VLAN_ACTIONS))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + actions,
> + "can't have vxlan and vlan"
> + " actions in the same rule");
> action_flags |= current_action_flag;
> }
> for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> unsigned int i;
>
> + if ((item_flags & MLX5_FLOW_LAYER_TUNNEL) &&
> + items->type != RTE_FLOW_ITEM_TYPE_ETH)
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM,
> + items,
> + "only L2 inner item"
> + " is supported");
> switch (items->type) {
> case RTE_FLOW_ITEM_TYPE_VOID:
> break;
> @@ -1360,7 +2019,9 @@ struct pedit_parser {
> error);
> if (ret < 0)
> return ret;
> - item_flags |= MLX5_FLOW_LAYER_OUTER_L2;
> + item_flags |= (item_flags & MLX5_FLOW_LAYER_TUNNEL) ?
> + MLX5_FLOW_LAYER_INNER_L2 :
> + MLX5_FLOW_LAYER_OUTER_L2;
> /* TODO:
> * Redundant check due to different supported mask.
> * Same for the rest of items.
> @@ -1438,6 +2099,12 @@ struct pedit_parser {
> next_protocol =
> ((const struct rte_flow_item_ipv4 *)
> (items->spec))->hdr.next_proto_id;
> + if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> + ret = flow_tcf_validate_vxlan_decap_ipv4
> + (items, error);
> + if (ret < 0)
> + return ret;
> + }
> break;
> case RTE_FLOW_ITEM_TYPE_IPV6:
> ret = mlx5_flow_validate_item_ipv6(items, item_flags,
> @@ -1465,6 +2132,12 @@ struct pedit_parser {
> next_protocol =
> ((const struct rte_flow_item_ipv6 *)
> (items->spec))->hdr.proto;
> + if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> + ret = flow_tcf_validate_vxlan_decap_ipv6
> + (items, error);
> + if (ret < 0)
> + return ret;
> + }
> break;
> case RTE_FLOW_ITEM_TYPE_UDP:
> ret = mlx5_flow_validate_item_udp(items, item_flags,
> @@ -1480,6 +2153,12 @@ struct pedit_parser {
> error);
> if (!mask.udp)
> return -rte_errno;
> + if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> + ret = flow_tcf_validate_vxlan_decap_udp
> + (items, error);
> + if (ret < 0)
> + return ret;
> + }
> break;
> case RTE_FLOW_ITEM_TYPE_TCP:
> ret = mlx5_flow_validate_item_tcp
> @@ -1499,10 +2178,40 @@ struct pedit_parser {
> if (!mask.tcp)
> return -rte_errno;
> break;
> + case RTE_FLOW_ITEM_TYPE_VXLAN:
> + if (!(action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP))
> + return rte_flow_error_set
> + (error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM,
> + items,
> + "vni pattern should be followed by"
> + " vxlan decapsulation action");
> + ret = mlx5_flow_validate_item_vxlan(items,
> + item_flags, error);
> + if (ret < 0)
> + return ret;
> + item_flags |= MLX5_FLOW_LAYER_VXLAN;
> + mask.vxlan = flow_tcf_item_mask
> + (items, &rte_flow_item_vxlan_mask,
> + &flow_tcf_mask_supported.vxlan,
> + &flow_tcf_mask_empty.vxlan,
> + sizeof(flow_tcf_mask_supported.vxlan), error);
> + if (!mask.vxlan)
> + return -rte_errno;
> + if (mask.vxlan->vni[0] != 0xff ||
> + mask.vxlan->vni[1] != 0xff ||
> + mask.vxlan->vni[2] != 0xff)
> + return rte_flow_error_set
> + (error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ITEM_MASK,
> + mask.vxlan,
> + "no support for partial or "
> + "empty mask on \"vxlan.vni\" field");
> + break;
> default:
> return rte_flow_error_set(error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_ITEM,
> - NULL, "item not supported");
> + items, "item not supported");
> }
> }
> if ((action_flags & MLX5_TCF_PEDIT_ACTIONS) &&
> @@ -1571,6 +2280,12 @@ struct pedit_parser {
> RTE_FLOW_ERROR_TYPE_ACTION, actions,
> "vlan actions are supported"
> " only with port_id action");
> + if ((action_flags & MLX5_TCF_VXLAN_ACTIONS) &&
> + !(action_flags & MLX5_FLOW_ACTION_PORT_ID))
> + return rte_flow_error_set(error, ENOTSUP,
> + RTE_FLOW_ERROR_TYPE_ACTION, NULL,
> + "vxlan actions are supported"
> + " only with port_id action");
> if (!(action_flags & MLX5_TCF_FATE_ACTIONS))
> return rte_flow_error_set(error, EINVAL,
> RTE_FLOW_ERROR_TYPE_ACTION, actions,
> @@ -1594,6 +2309,28 @@ struct pedit_parser {
> "no ethernet found in"
> " pattern");
> }
> + if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> + if (!(item_flags &
> + (MLX5_FLOW_LAYER_OUTER_L3_IPV4 |
> + MLX5_FLOW_LAYER_OUTER_L3_IPV6)))
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + NULL,
> + "no outer IP pattern found"
> + " for vxlan decap action");
> + if (!(item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP))
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + NULL,
> + "no outer UDP pattern found"
> + " for vxlan decap action");
> + if (!(item_flags & MLX5_FLOW_LAYER_VXLAN))
> + return rte_flow_error_set(error, EINVAL,
> + RTE_FLOW_ERROR_TYPE_ACTION,
> + NULL,
> + "no VNI pattern found"
> + " for vxlan decap action");
> + }
> return 0;
> }
>
> --
> 1.8.3.1
>
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 " Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 01/13] net/mlx5: prepare makefile for adding E-Switch VXLAN Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 00/13] net/mlx5: e-switch VXLAN encap/decap hardware offload Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 01/13] net/mlx5: prepare makefile for adding E-Switch VXLAN Slava Ovsiienko
2018-11-12 20:01 ` [dpdk-dev] [PATCH 0/4] net/mlx5: prepare to add E-switch rule flags check Slava Ovsiienko
2018-11-12 20:01 ` [dpdk-dev] [PATCH 1/4] net/mlx5: prepare Netlink communication routine to fix Slava Ovsiienko
2018-11-13 13:21 ` Shahaf Shuler
2018-11-12 20:01 ` [dpdk-dev] [PATCH 2/4] net/mlx5: fix Netlink communication routine Slava Ovsiienko
2018-11-13 13:21 ` Shahaf Shuler
2018-11-14 12:57 ` Slava Ovsiienko
2018-11-12 20:01 ` [dpdk-dev] [PATCH 3/4] net/mlx5: prepare to add E-switch rule flags check Slava Ovsiienko
2018-11-12 20:01 ` [dpdk-dev] [PATCH 4/4] net/mlx5: add E-switch rule hardware offload flag check Slava Ovsiienko
2018-11-13 13:21 ` [dpdk-dev] [PATCH 0/4] net/mlx5: prepare to add E-switch rule flags check Shahaf Shuler
2018-11-14 14:56 ` Shahaf Shuler
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 03/13] net/mlx5: add necessary definitions for E-Switch VXLAN Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 02/13] net/mlx5: prepare meson.build for adding " Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 04/13] net/mlx5: add necessary structures for " Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 05/13] net/mlx5: swap items/actions validations for E-Switch rules Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 06/13] net/mlx5: add E-Switch VXLAN support to validation routine Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 07/13] net/mlx5: add VXLAN support to flow prepare routine Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 08/13] net/mlx5: add VXLAN support to flow translate routine Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 09/13] net/mlx5: update E-Switch VXLAN netlink routines Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 10/13] net/mlx5: fix E-Switch Flow counter deletion Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 11/13] net/mlx5: add E-switch VXLAN tunnel devices management Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 12/13] net/mlx5: add E-Switch VXLAN encapsulation rules Slava Ovsiienko
2018-11-03 6:18 ` [dpdk-dev] [PATCH v5 13/13] net/mlx5: add E-switch VXLAN rule cleanup routines Slava Ovsiienko
2018-11-04 6:48 ` [dpdk-dev] [PATCH v5 00/13] net/mlx5: e-switch VXLAN encap/decap hardware offload Shahaf Shuler
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 02/13] net/mlx5: prepare meson.build for adding E-Switch VXLAN Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 03/13] net/mlx5: add necessary definitions for " Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 04/13] net/mlx5: add necessary structures " Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 05/13] net/mlx5: swap items/actions validations for E-Switch rules Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 07/13] net/mlx5: add VXLAN support to flow prepare routine Slava Ovsiienko
2018-11-02 21:38 ` Yongseok Koh
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 06/13] net/mlx5: add E-Switch VXLAN support to validation routine Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 08/13] net/mlx5: add VXLAN support to flow translate routine Slava Ovsiienko
2018-11-02 21:53 ` Yongseok Koh
2018-11-02 23:29 ` Yongseok Koh
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 09/13] net/mlx5: update E-Switch VXLAN netlink routines Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 10/13] net/mlx5: fix E-Switch Flow counter deletion Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 11/13] net/mlx5: add E-switch VXLAN tunnel devices management Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 12/13] net/mlx5: add E-Switch VXLAN encapsulation rules Slava Ovsiienko
2018-11-02 17:53 ` [dpdk-dev] [PATCH v4 13/13] net/mlx5: add E-switch VXLAN rule cleanup routines Slava Ovsiienko