From: yang_y_yi@163.com
To: dev@dpdk.org
Cc: jiayu.hu@intel.com, konstantin.ananyev@intel.com,
thomas@monjalon.net, yangyi01@inspur.com, yang_y_yi@163.com
Subject: [dpdk-dev] [PATCH v1 6/8] gro: support IPv4 VXLAN UDP/IPv6
Date: Mon, 21 Dec 2020 11:50:50 +0800 [thread overview]
Message-ID: <20201221035052.128292-7-yang_y_yi@163.com> (raw)
In-Reply-To: <20201221035052.128292-1-yang_y_yi@163.com>
From: Yi Yang <yangyi01@inspur.com>
IPv4 VXLAN UDP/IPv6 GRO can improve UDP/IPv6 performance in the IPv4
VXLAN use case.
With this enabled in DPDK, OVS DPDK can leverage it to improve VM-to-VM
UDP/IPv6 performance: small adjacent IPv4 VXLAN UDP/IPv6 fragments are
merged into a single large IPv4 VXLAN UDP/IPv6 packet immediately after
they are received from a physical NIC. This is very helpful in the OVS
DPDK IPv4 VXLAN use case.
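Below is a minimal usage sketch (illustrative only, not part of this
patch) showing how an application could request this GRO type through
the lightweight API; the flow limits are arbitrary and error handling
is omitted:

    #include <rte_gro.h>
    #include <rte_mbuf.h>

    /* Merge a received burst in place; adjacent IPv4 VXLAN UDP/IPv6
     * fragments of the same flow come back as one larger packet.
     */
    static uint16_t
    gro_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        struct rte_gro_param param = {
            .gro_types = RTE_GRO_IPV4_VXLAN_UDP_IPV6,
            .max_flow_num = 64,
            .max_item_per_flow = 32,
        };

        /* Returns the number of packets left after merging. */
        return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
    }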
Signed-off-by: Yi Yang <yangyi01@inspur.com>
---
.../prog_guide/generic_receive_offload_lib.rst | 4 +-
doc/guides/rel_notes/release_21_02.rst | 5 +
lib/librte_gro/gro_vxlan_udp6.c | 607 +++++++++++++++++++++
lib/librte_gro/gro_vxlan_udp6.h | 152 ++++++
lib/librte_gro/meson.build | 2 +-
lib/librte_gro/rte_gro.c | 83 ++-
lib/librte_gro/rte_gro.h | 3 +
7 files changed, 845 insertions(+), 11 deletions(-)
create mode 100644 lib/librte_gro/gro_vxlan_udp6.c
create mode 100644 lib/librte_gro/gro_vxlan_udp6.h
diff --git a/doc/guides/prog_guide/generic_receive_offload_lib.rst b/doc/guides/prog_guide/generic_receive_offload_lib.rst
index 0ea3076..906e85f 100644
--- a/doc/guides/prog_guide/generic_receive_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_receive_offload_lib.rst
@@ -32,8 +32,8 @@ fragmentation is possible (i.e., DF==0). Additionally, it complies RFC
Currently, the GRO library provides GRO supports for TCP/IPv4, UDP/IPv4,
UDP/IPv6, and TCP/IPv6 packets as well as VxLAN packets which contain an
-outer IPv4 header and an inner TCP/IPv4, UDP/IPv4 or TCP/IPv6 packet or
-an outer IPv6 header and an inner TCP/IPv4 or TCP/IPv6 packet.
+outer IPv4 header and an inner TCP/IPv4, UDP/IPv4, TCP/IPv6 or UDP/IPv6
+packet or an outer IPv6 header and an inner TCP/IPv4 or TCP/IPv6 packet.
Two Sets of API
---------------
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index d8974b7..4758cea 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -80,6 +80,11 @@ New Features
Added UDP/IPv6 GRO support for non-VXLAN packets, this enabled UDP/IPv6
GRO support for IPv6 VLAN use case.
+* **Added UDP/IPv6 GRO support for IPv4 VXLAN packets.**
+
+ Added UDP/IPv6 GRO support for IPv4 VXLAN packets, which enables
+ UDP/IPv6 GRO for the IPv4 VXLAN use case.
+
Removed Items
-------------
diff --git a/lib/librte_gro/gro_vxlan_udp6.c b/lib/librte_gro/gro_vxlan_udp6.c
new file mode 100644
index 0000000..8c76b41
--- /dev/null
+++ b/lib/librte_gro/gro_vxlan_udp6.c
@@ -0,0 +1,607 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Inspur Corporation
+ */
+
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_udp.h>
+#include <rte_ip.h>
+
+#include "gro_vxlan_udp6.h"
+
+void *
+gro_vxlan_udp6_tbl_create(uint16_t socket_id,
+ uint16_t max_flow_num,
+ uint16_t max_item_per_flow)
+{
+ struct gro_vxlan_udp6_tbl *tbl;
+ size_t size;
+ uint32_t entries_num, i;
+
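+ /* Size the table for the worst case, capped at the maximum item count. */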
+ entries_num = max_flow_num * max_item_per_flow;
+ entries_num = RTE_MIN(entries_num, GRO_VXLAN_UDP6_TBL_MAX_ITEM_NUM);
+
+ if (entries_num == 0)
+ return NULL;
+
+ tbl = rte_zmalloc_socket(__func__,
+ sizeof(struct gro_vxlan_udp6_tbl),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (tbl == NULL)
+ return NULL;
+
+ size = sizeof(struct gro_vxlan_udp6_item) * entries_num;
+ tbl->items = rte_zmalloc_socket(__func__,
+ size,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (tbl->items == NULL) {
+ rte_free(tbl);
+ return NULL;
+ }
+ tbl->max_item_num = entries_num;
+
+ size = sizeof(struct gro_vxlan_udp6_flow) * entries_num;
+ tbl->flows = rte_zmalloc_socket(__func__,
+ size,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (tbl->flows == NULL) {
+ rte_free(tbl->items);
+ rte_free(tbl);
+ return NULL;
+ }
+
+ for (i = 0; i < entries_num; i++)
+ tbl->flows[i].start_index = INVALID_ARRAY_INDEX;
+ tbl->max_flow_num = entries_num;
+
+ return tbl;
+}
+
+void
+gro_vxlan_udp6_tbl_destroy(void *tbl)
+{
+ struct gro_vxlan_udp6_tbl *vxlan_tbl = tbl;
+
+ if (vxlan_tbl) {
+ rte_free(vxlan_tbl->items);
+ rte_free(vxlan_tbl->flows);
+ }
+ rte_free(vxlan_tbl);
+}
+
+static inline uint32_t
+find_an_empty_item(struct gro_vxlan_udp6_tbl *tbl)
+{
+ uint32_t max_item_num = tbl->max_item_num, i;
+
+ for (i = 0; i < max_item_num; i++)
+ if (tbl->items[i].inner_item.firstseg == NULL)
+ return i;
+ return INVALID_ARRAY_INDEX;
+}
+
+static inline uint32_t
+find_an_empty_flow(struct gro_vxlan_udp6_tbl *tbl)
+{
+ uint32_t max_flow_num = tbl->max_flow_num, i;
+
+ for (i = 0; i < max_flow_num; i++)
+ if (tbl->flows[i].start_index == INVALID_ARRAY_INDEX)
+ return i;
+ return INVALID_ARRAY_INDEX;
+}
+
+static inline uint32_t
+insert_new_item(struct gro_vxlan_udp6_tbl *tbl,
+ struct rte_mbuf *pkt,
+ uint64_t start_time,
+ uint32_t prev_idx,
+ uint16_t frag_offset,
+ uint8_t is_last_frag,
+ uint16_t outer_ip_id,
+ uint8_t outer_is_atomic)
+{
+ uint32_t item_idx;
+
+ item_idx = find_an_empty_item(tbl);
+ if (unlikely(item_idx == INVALID_ARRAY_INDEX))
+ return INVALID_ARRAY_INDEX;
+
+ tbl->items[item_idx].inner_item.firstseg = pkt;
+ tbl->items[item_idx].inner_item.lastseg = rte_pktmbuf_lastseg(pkt);
+ tbl->items[item_idx].inner_item.start_time = start_time;
+ tbl->items[item_idx].inner_item.next_pkt_idx = INVALID_ARRAY_INDEX;
+ tbl->items[item_idx].inner_item.frag_offset = frag_offset;
+ tbl->items[item_idx].inner_item.is_last_frag = is_last_frag;
+ tbl->items[item_idx].inner_item.nb_merged = 1;
+ tbl->items[item_idx].outer_ip_id = outer_ip_id;
+ tbl->items[item_idx].outer_is_atomic = outer_is_atomic;
+ tbl->item_num++;
+
+ /* If the previous packet exists, chain the new one with it. */
+ if (prev_idx != INVALID_ARRAY_INDEX) {
+ tbl->items[item_idx].inner_item.next_pkt_idx =
+ tbl->items[prev_idx].inner_item.next_pkt_idx;
+ tbl->items[prev_idx].inner_item.next_pkt_idx = item_idx;
+ }
+
+ return item_idx;
+}
+
+static inline uint32_t
+delete_item(struct gro_vxlan_udp6_tbl *tbl,
+ uint32_t item_idx,
+ uint32_t prev_item_idx)
+{
+ uint32_t next_idx = tbl->items[item_idx].inner_item.next_pkt_idx;
+
+ /* NULL indicates an empty item. */
+ tbl->items[item_idx].inner_item.firstseg = NULL;
+ tbl->item_num--;
+ if (prev_item_idx != INVALID_ARRAY_INDEX)
+ tbl->items[prev_item_idx].inner_item.next_pkt_idx = next_idx;
+
+ return next_idx;
+}
+
+/* Copy a 16-byte IPv6 address as two 64-bit words */
+static inline void gro_ipv6_addr_copy(const uint8_t *ipv6_from,
+ uint8_t *ipv6_to)
+{
+ const uint64_t *from_words = (const uint64_t *)ipv6_from;
+ uint64_t *to_words = (uint64_t *)ipv6_to;
+
+ to_words[0] = from_words[0];
+ to_words[1] = from_words[1];
+}
+
+static inline uint32_t
+insert_new_flow(struct gro_vxlan_udp6_tbl *tbl,
+ struct vxlan_udp6_flow_key *src,
+ uint32_t item_idx)
+{
+ struct vxlan_udp6_flow_key *dst;
+ uint32_t flow_idx;
+
+ flow_idx = find_an_empty_flow(tbl);
+ if (unlikely(flow_idx == INVALID_ARRAY_INDEX))
+ return INVALID_ARRAY_INDEX;
+
+ dst = &(tbl->flows[flow_idx].key);
+
+ rte_ether_addr_copy(&(src->inner_key.eth_saddr),
+ &(dst->inner_key.eth_saddr));
+ rte_ether_addr_copy(&(src->inner_key.eth_daddr),
+ &(dst->inner_key.eth_daddr));
+ gro_ipv6_addr_copy(src->inner_key.ip_saddr, dst->inner_key.ip_saddr);
+ gro_ipv6_addr_copy(src->inner_key.ip_daddr, dst->inner_key.ip_daddr);
+ dst->inner_key.ip_id = src->inner_key.ip_id;
+
+ dst->vxlan_hdr.vx_flags = src->vxlan_hdr.vx_flags;
+ dst->vxlan_hdr.vx_vni = src->vxlan_hdr.vx_vni;
+ rte_ether_addr_copy(&(src->outer_eth_saddr), &(dst->outer_eth_saddr));
+ rte_ether_addr_copy(&(src->outer_eth_daddr), &(dst->outer_eth_daddr));
+ dst->outer_ip_src_addr = src->outer_ip_src_addr;
+ dst->outer_ip_dst_addr = src->outer_ip_dst_addr;
+ dst->outer_src_port = src->outer_src_port;
+ dst->outer_dst_port = src->outer_dst_port;
+
+ tbl->flows[flow_idx].start_index = item_idx;
+ tbl->flow_num++;
+
+ return flow_idx;
+}
+
+static inline int
+is_same_vxlan_udp6_flow(struct vxlan_udp6_flow_key k1,
+ struct vxlan_udp6_flow_key k2)
+{
+ /* For a VxLAN packet, the outer UDP source port is calculated
+ * from the inner packet's RSS hash, so the first UDP fragment
+ * may carry a different outer UDP source port than the other
+ * fragments of the same flow. Skip the outer UDP source port
+ * comparison here.
+ */
+ return (rte_is_same_ether_addr(&k1.outer_eth_saddr,
+ &k2.outer_eth_saddr) &&
+ rte_is_same_ether_addr(&k1.outer_eth_daddr,
+ &k2.outer_eth_daddr) &&
+ (k1.outer_ip_src_addr == k2.outer_ip_src_addr) &&
+ (k1.outer_ip_dst_addr == k2.outer_ip_dst_addr) &&
+ (k1.outer_dst_port == k2.outer_dst_port) &&
+ (k1.vxlan_hdr.vx_flags == k2.vxlan_hdr.vx_flags) &&
+ (k1.vxlan_hdr.vx_vni == k2.vxlan_hdr.vx_vni) &&
+ is_same_udp6_flow(k1.inner_key, k2.inner_key));
+}
+
+static inline int
+udp6_check_vxlan_neighbor(struct gro_vxlan_udp6_item *item,
+ uint16_t frag_offset,
+ uint16_t ip_dl)
+{
+ struct rte_mbuf *pkt = item->inner_item.firstseg;
+ int cmp;
+ uint16_t l2_offset;
+ int ret = 0;
+
+ l2_offset = pkt->outer_l2_len + pkt->outer_l3_len;
+ cmp = udp6_check_neighbor(&item->inner_item, frag_offset,
+ ip_dl, l2_offset);
+ if (cmp > 0)
+ /* Append the new packet. */
+ ret = 1;
+ else if (cmp < 0)
+ /* Prepend the new packet. */
+ ret = -1;
+
+ return ret;
+}
+
+static inline int
+merge_two_vxlan_udp6_packets(struct gro_vxlan_udp6_item *item,
+ struct rte_mbuf *pkt,
+ int cmp,
+ uint16_t frag_offset,
+ uint8_t is_last_frag)
+{
+ if (merge_two_udp6_packets(&item->inner_item, pkt, cmp, frag_offset,
+ is_last_frag,
+ pkt->outer_l2_len + pkt->outer_l3_len)) {
+ return 1;
+ }
+
+ return 0;
+}
+
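+/*
+ * Extract the fragment offset in bytes: the upper 13 bits of frag_data
+ * hold the offset in units of 8 bytes.
+ */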
+static inline uint16_t
+get_ipv6_frag_offset(struct rte_ipv6_fragment_ext *ipv6_frag_hdr)
+{
+ return ((rte_be_to_cpu_16(ipv6_frag_hdr->frag_data) >> 3) * 8);
+}
+
+static inline uint8_t
+is_last_ipv6_frag(struct rte_ipv6_fragment_ext *ipv6_frag_hdr)
+{
+ /* Bit 0 of frag_data is the M (more fragments) flag, which is
+ * zero only in the last fragment.
+ */
+ return !(rte_be_to_cpu_16(ipv6_frag_hdr->frag_data) & 0x0001);
+}
+
+static inline void
+update_vxlan_header(struct gro_vxlan_udp6_item *item)
+{
+ struct rte_ipv4_hdr *outer_ipv4_hdr;
+ struct rte_ipv6_hdr *ipv6_hdr;
+ struct rte_udp_hdr *udp_hdr;
+ struct rte_ipv6_fragment_ext *ipv6_frag_hdr;
+ size_t fh_len = sizeof(*ipv6_frag_hdr);
+ struct rte_mbuf *pkt = item->inner_item.firstseg;
+ uint16_t len;
+
+ /* Position to inner IPv6 header first */
+ ipv6_hdr = (struct rte_ipv6_hdr *)rte_pktmbuf_mtod_offset(pkt, char *,
+ pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len);
+ len = pkt->pkt_len - pkt->outer_l2_len - pkt->outer_l3_len
+ - pkt->l2_len;
+ ipv6_hdr->payload_len = rte_cpu_to_be_16(len - pkt->l3_len);
+
+ /* Remove fragment extension header or clear MF flag */
+ if (item->inner_item.is_last_frag
+ && (ipv6_hdr->proto == IPPROTO_FRAGMENT)) {
+ uint16_t ip_ofs;
+
+ ipv6_frag_hdr = (struct rte_ipv6_fragment_ext *)(ipv6_hdr + 1);
+ ip_ofs = get_ipv6_frag_offset(ipv6_frag_hdr);
+ if (ip_ofs == 0) {
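+ /* Offset 0 with the last fragment merged in: the packet is
+ * fully reassembled, so the fragment extension header can
+ * be removed entirely.
+ */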
+ ipv6_hdr->proto = ipv6_frag_hdr->next_header;
+ pkt->l3_len -= fh_len;
+
+ /* Remove IPv6 fragment extension header */
+ memmove(rte_pktmbuf_mtod_offset(pkt, char *, fh_len),
+ rte_pktmbuf_mtod(pkt, char*),
+ pkt->outer_l2_len + pkt->outer_l3_len +
+ pkt->l2_len + pkt->l3_len);
+ rte_pktmbuf_adj(pkt, fh_len);
+ } else {
+ /* Keep the extension header but clear the M flag. */
+ ipv6_frag_hdr->frag_data = rte_cpu_to_be_16(
+ (rte_be_to_cpu_16(ipv6_frag_hdr->frag_data) &
+ 0xFFFE));
+ /* payload_len is big-endian; convert before adding back
+ * the length of the retained fragment extension header.
+ */
+ ipv6_hdr->payload_len = rte_cpu_to_be_16(
+ rte_be_to_cpu_16(ipv6_hdr->payload_len) +
+ fh_len);
+ }
+ }
+
+ /*
+ * The outer headers must be adjusted only after the inner IPv6
+ * header is handled. Update the outer IPv4 total length first.
+ */
+ len = pkt->pkt_len - pkt->outer_l2_len;
+ outer_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
+ pkt->outer_l2_len);
+ outer_ipv4_hdr->total_length = rte_cpu_to_be_16(len);
+
+ /* Point to the outer UDP header. */
+ len -= pkt->outer_l3_len;
+ udp_hdr = (struct rte_udp_hdr *)((char *)outer_ipv4_hdr
+ + pkt->outer_l3_len);
+ udp_hdr->dgram_len = rte_cpu_to_be_16(len);
+}
+
+int32_t
+gro_vxlan_udp6_reassemble(struct rte_mbuf *pkt,
+ struct gro_vxlan_udp6_tbl *tbl,
+ uint64_t start_time)
+{
+ struct rte_ether_hdr *outer_eth_hdr, *eth_hdr;
+ struct rte_ipv4_hdr *outer_ipv4_hdr;
+ struct rte_ipv6_hdr *ipv6_hdr;
+ struct rte_ipv6_fragment_ext *ipv6_frag_hdr;
+ uint16_t fh_len = sizeof(*ipv6_frag_hdr);
+ struct rte_udp_hdr *udp_hdr;
+ struct rte_vxlan_hdr *vxlan_hdr;
+ uint16_t frag_offset;
+ uint8_t is_last_frag;
+ int16_t ip_dl;
+ uint32_t ip_id;
+ uint16_t outer_ip_id;
+ uint8_t outer_is_atomic;
+
+ struct vxlan_udp6_flow_key key;
+ uint32_t cur_idx, prev_idx, item_idx;
+ uint32_t i, max_flow_num, remaining_flow_num;
+ int cmp;
+ uint16_t hdr_len;
+ uint8_t find;
+
+ outer_eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
+ outer_ipv4_hdr = (struct rte_ipv4_hdr *)((char *)outer_eth_hdr +
+ pkt->outer_l2_len);
+
+ udp_hdr = (struct rte_udp_hdr *)((char *)outer_ipv4_hdr +
+ pkt->outer_l3_len);
+ vxlan_hdr = (struct rte_vxlan_hdr *)((char *)udp_hdr +
+ sizeof(struct rte_udp_hdr));
+ eth_hdr = (struct rte_ether_hdr *)((char *)vxlan_hdr +
+ sizeof(struct rte_vxlan_hdr));
+ /* l2_len = outer udp hdr len + vxlan hdr len + inner l2 len */
+ ipv6_hdr = (struct rte_ipv6_hdr *)((char *)udp_hdr + pkt->l2_len);
+
+ /*
+ * Only process packets whose inner IPv6 header is followed by a
+ * fragment extension header.
+ */
+ if (ipv6_hdr->proto != IPPROTO_FRAGMENT)
+ return -1;
+
+ /* Note: l3_len includes length of extension headers */
+ hdr_len = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len +
+ pkt->l3_len;
+ /*
+ * Don't process packets that carry no payload beyond the headers.
+ */
+ if (pkt->pkt_len <= hdr_len)
+ return -1;
+
+ /*
+ * Save IPv4 ID for the packet whose DF bit is 0. For the packet
+ * whose DF bit is 1, IPv4 ID is ignored.
+ */
+ frag_offset = rte_be_to_cpu_16(outer_ipv4_hdr->fragment_offset);
+ outer_is_atomic =
+ ((frag_offset & RTE_IPV4_HDR_DF_FLAG) == RTE_IPV4_HDR_DF_FLAG);
+ outer_ip_id = outer_is_atomic ? 0 :
+ rte_be_to_cpu_16(outer_ipv4_hdr->packet_id);
+ ipv6_frag_hdr = (struct rte_ipv6_fragment_ext *)(ipv6_hdr + 1);
+ ip_dl = rte_be_to_cpu_16(ipv6_hdr->payload_len) - fh_len;
+ ip_id = rte_be_to_cpu_32(ipv6_frag_hdr->id);
+ frag_offset = get_ipv6_frag_offset(ipv6_frag_hdr);
+ is_last_frag = is_last_ipv6_frag(ipv6_frag_hdr);
+
+ rte_ether_addr_copy(&(eth_hdr->s_addr), &(key.inner_key.eth_saddr));
+ rte_ether_addr_copy(&(eth_hdr->d_addr), &(key.inner_key.eth_daddr));
+ gro_ipv6_addr_copy(ipv6_hdr->src_addr, key.inner_key.ip_saddr);
+ gro_ipv6_addr_copy(ipv6_hdr->dst_addr, key.inner_key.ip_daddr);
+ key.inner_key.ip_id = ip_id;
+
+ key.vxlan_hdr.vx_flags = vxlan_hdr->vx_flags;
+ key.vxlan_hdr.vx_vni = vxlan_hdr->vx_vni;
+ rte_ether_addr_copy(&(outer_eth_hdr->s_addr), &(key.outer_eth_saddr));
+ rte_ether_addr_copy(&(outer_eth_hdr->d_addr), &(key.outer_eth_daddr));
+ key.outer_ip_src_addr = outer_ipv4_hdr->src_addr;
+ key.outer_ip_dst_addr = outer_ipv4_hdr->dst_addr;
+ key.outer_src_port = udp_hdr->src_port;
+ key.outer_dst_port = udp_hdr->dst_port;
+
+ /* Search for a matched flow. */
+ max_flow_num = tbl->max_flow_num;
+ remaining_flow_num = tbl->flow_num;
+ find = 0;
+ for (i = 0; i < max_flow_num && remaining_flow_num; i++) {
+ if (tbl->flows[i].start_index != INVALID_ARRAY_INDEX) {
+ if (is_same_vxlan_udp6_flow(tbl->flows[i].key, key)) {
+ find = 1;
+ break;
+ }
+ remaining_flow_num--;
+ }
+ }
+
+ /*
+ * Can't find a matched flow. Insert a new flow and store the
+ * packet into the flow.
+ */
+ if (find == 0) {
+ item_idx = insert_new_item(tbl, pkt, start_time,
+ INVALID_ARRAY_INDEX, frag_offset,
+ is_last_frag, outer_ip_id,
+ outer_is_atomic);
+ if (item_idx == INVALID_ARRAY_INDEX)
+ return -1;
+ if (insert_new_flow(tbl, &key, item_idx) ==
+ INVALID_ARRAY_INDEX) {
+ /*
+ * Fail to insert a new flow, so
+ * delete the inserted packet.
+ */
+ delete_item(tbl, item_idx, INVALID_ARRAY_INDEX);
+ return -1;
+ }
+ return 0;
+ }
+
+ /* Check all packets in the flow and try to find a neighbor. */
+ cur_idx = tbl->flows[i].start_index;
+ prev_idx = cur_idx;
+ do {
+ cmp = udp6_check_vxlan_neighbor(&(tbl->items[cur_idx]),
+ frag_offset, ip_dl);
+ if (cmp) {
+ if (merge_two_vxlan_udp6_packets(
+ &(tbl->items[cur_idx]),
+ pkt, cmp, frag_offset,
+ is_last_frag)) {
+ return 1;
+ }
+ /*
+ * Can't merge two packets, as the packet
+ * length will be greater than the max value.
+ * Insert the packet into the flow.
+ */
+ if (insert_new_item(tbl, pkt, start_time, prev_idx,
+ frag_offset, is_last_frag,
+ outer_ip_id,
+ outer_is_atomic)
+ == INVALID_ARRAY_INDEX)
+ return -1;
+ return 0;
+ }
+
+ /* Ensure inserted items are ordered by frag_offset */
+ if (frag_offset
+ < tbl->items[cur_idx].inner_item.frag_offset) {
+ break;
+ }
+
+ prev_idx = cur_idx;
+ cur_idx = tbl->items[cur_idx].inner_item.next_pkt_idx;
+ } while (cur_idx != INVALID_ARRAY_INDEX);
+
+ /* Can't find neighbor. Insert the packet into the flow. */
+ if (cur_idx == tbl->flows[i].start_index) {
+ /* Insert it before the first packet of the flow */
+ item_idx = insert_new_item(tbl, pkt, start_time,
+ INVALID_ARRAY_INDEX, frag_offset,
+ is_last_frag, outer_ip_id,
+ outer_is_atomic);
+ if (item_idx == INVALID_ARRAY_INDEX)
+ return -1;
+ tbl->items[item_idx].inner_item.next_pkt_idx = cur_idx;
+ tbl->flows[i].start_index = item_idx;
+ } else {
+ if (insert_new_item(tbl, pkt, start_time, prev_idx,
+ frag_offset, is_last_frag, outer_ip_id,
+ outer_is_atomic) == INVALID_ARRAY_INDEX)
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+gro_vxlan_udp6_merge_items(struct gro_vxlan_udp6_tbl *tbl,
+ uint32_t start_idx)
+{
+ uint16_t frag_offset;
+ uint8_t is_last_frag;
+ int16_t ip_dl;
+ struct rte_mbuf *pkt;
+ int cmp;
+ uint32_t item_idx;
+ uint16_t hdr_len;
+
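+ /* Walk the fragment chain and merge every adjacent neighbor into
+ * the packet at start_idx before it is flushed.
+ */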
+ item_idx = tbl->items[start_idx].inner_item.next_pkt_idx;
+ while (item_idx != INVALID_ARRAY_INDEX) {
+ pkt = tbl->items[item_idx].inner_item.firstseg;
+ hdr_len = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len +
+ pkt->l3_len;
+ ip_dl = pkt->pkt_len - hdr_len;
+ frag_offset = tbl->items[item_idx].inner_item.frag_offset;
+ is_last_frag = tbl->items[item_idx].inner_item.is_last_frag;
+ cmp = udp6_check_vxlan_neighbor(&(tbl->items[start_idx]),
+ frag_offset, ip_dl);
+ if (cmp) {
+ if (merge_two_vxlan_udp6_packets(
+ &(tbl->items[start_idx]),
+ pkt, cmp, frag_offset,
+ is_last_frag)) {
+ item_idx = delete_item(tbl, item_idx,
+ INVALID_ARRAY_INDEX);
+ tbl->items[start_idx].inner_item.next_pkt_idx
+ = item_idx;
+ } else {
+ return 0;
+ }
+ } else {
+ return 0;
+ }
+ }
+
+ return 0;
+}
+
+uint16_t
+gro_vxlan_udp6_tbl_timeout_flush(struct gro_vxlan_udp6_tbl *tbl,
+ uint64_t flush_timestamp,
+ struct rte_mbuf **out,
+ uint16_t nb_out)
+{
+ uint16_t k = 0;
+ uint32_t i, j;
+ uint32_t max_flow_num = tbl->max_flow_num;
+
+ for (i = 0; i < max_flow_num; i++) {
+ if (unlikely(tbl->flow_num == 0))
+ return k;
+
+ j = tbl->flows[i].start_index;
+ while (j != INVALID_ARRAY_INDEX) {
+ if (tbl->items[j].inner_item.start_time <=
+ flush_timestamp) {
+ gro_vxlan_udp6_merge_items(tbl, j);
+ out[k++] = tbl->items[j].inner_item.firstseg;
+ if (tbl->items[j].inner_item.nb_merged > 1)
+ update_vxlan_header(&(tbl->items[j]));
+ /*
+ * Delete the item and get the next packet
+ * index.
+ */
+ j = delete_item(tbl, j, INVALID_ARRAY_INDEX);
+ tbl->flows[i].start_index = j;
+ if (j == INVALID_ARRAY_INDEX)
+ tbl->flow_num--;
+
+ if (unlikely(k == nb_out))
+ return k;
+ } else
+ /*
+ * The remaining packets in this flow haven't
+ * timed out yet. Go on to check other flows.
+ */
+ break;
+ }
+ }
+ return k;
+}
+
+uint32_t
+gro_vxlan_udp6_tbl_pkt_count(void *tbl)
+{
+ struct gro_vxlan_udp6_tbl *gro_tbl = tbl;
+
+ if (gro_tbl)
+ return gro_tbl->item_num;
+
+ return 0;
+}
diff --git a/lib/librte_gro/gro_vxlan_udp6.h b/lib/librte_gro/gro_vxlan_udp6.h
new file mode 100644
index 0000000..648733d
--- /dev/null
+++ b/lib/librte_gro/gro_vxlan_udp6.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Inspur Corporation
+ */
+
+#ifndef _GRO_VXLAN_UDP6_H_
+#define _GRO_VXLAN_UDP6_H_
+
+#include "gro_udp6.h"
+
+#define GRO_VXLAN_UDP6_TBL_MAX_ITEM_NUM (1024UL * 1024UL)
+
+/* Header fields representing a VxLAN flow */
+struct vxlan_udp6_flow_key {
+ struct udp6_flow_key inner_key;
+ struct rte_vxlan_hdr vxlan_hdr;
+
+ struct rte_ether_addr outer_eth_saddr;
+ struct rte_ether_addr outer_eth_daddr;
+
+ uint32_t outer_ip_src_addr;
+ uint32_t outer_ip_dst_addr;
+
+ /* Outer UDP ports */
+ uint16_t outer_src_port;
+ uint16_t outer_dst_port;
+
+};
+
+struct gro_vxlan_udp6_flow {
+ struct vxlan_udp6_flow_key key;
+ /*
+ * The index of the first packet in the flow. INVALID_ARRAY_INDEX
+ * indicates an empty flow.
+ */
+ uint32_t start_index;
+};
+
+struct gro_vxlan_udp6_item {
+ struct gro_udp6_item inner_item;
+ /* IPv4 ID in the outer IPv4 header */
+ uint16_t outer_ip_id;
+ /* Indicate if outer IPv4 ID can be ignored */
+ uint8_t outer_is_atomic;
+};
+
+/*
+ * VxLAN (with an outer IPv4 header and an inner UDP/IPv6 packet)
+ * reassembly table structure
+ */
+struct gro_vxlan_udp6_tbl {
+ /* item array */
+ struct gro_vxlan_udp6_item *items;
+ /* flow array */
+ struct gro_vxlan_udp6_flow *flows;
+ /* current item number */
+ uint32_t item_num;
+ /* current flow number */
+ uint32_t flow_num;
+ /* the maximum item number */
+ uint32_t max_item_num;
+ /* the maximum flow number */
+ uint32_t max_flow_num;
+};
+
+/**
+ * This function creates an IPv4 VxLAN reassembly table for IPv4 VxLAN
+ * packets which have an outer IPv4 header and an inner UDP/IPv6 packet.
+ *
+ * @param socket_id
+ * Socket index for allocating the table
+ * @param max_flow_num
+ * The maximum number of flows in the table
+ * @param max_item_per_flow
+ * The maximum number of packets per flow
+ *
+ * @return
+ * - Return the table pointer on success.
+ * - Return NULL on failure.
+ */
+void *gro_vxlan_udp6_tbl_create(uint16_t socket_id,
+ uint16_t max_flow_num,
+ uint16_t max_item_per_flow);
+
+/**
+ * This function destroys an IPv4 VxLAN reassembly table.
+ *
+ * @param tbl
+ * Pointer pointing to the IPv4 VxLAN reassembly table
+ */
+void gro_vxlan_udp6_tbl_destroy(void *tbl);
+
+/**
+ * This function merges an IPv4 VxLAN packet that has an outer IPv4 header
+ * and an inner UDP/IPv6 packet. Packets without a payload are not
+ * processed.
+ *
+ * This function does not check if the packet has correct checksums and
+ * does not re-calculate checksums for the merged packet. It returns the
+ * packet if there is no available space in the table.
+ *
+ * @param pkt
+ * Packet to reassemble
+ * @param tbl
+ * Pointer pointing to the IPv4 VxLAN reassembly table
+ * @param start_time
+ * The time when the packet is inserted into the table
+ *
+ * @return
+ * - Return a positive value if the packet is merged.
+ * - Return zero if the packet isn't merged but stored in the table.
+ * - Return a negative value for invalid parameters or no available
+ * space in the table.
+ */
+int32_t gro_vxlan_udp6_reassemble(struct rte_mbuf *pkt,
+ struct gro_vxlan_udp6_tbl *tbl,
+ uint64_t start_time);
+
+/**
+ * This function flushes timed-out packets from the IPv4 VxLAN reassembly
+ * table without updating checksums.
+ *
+ * @param tbl
+ * Pointer pointing to an IPv4 VxLAN GRO table
+ * @param flush_timestamp
+ * This function flushes packets which are inserted into the table
+ * before or at the flush_timestamp.
+ * @param out
+ * Pointer array used to keep flushed packets
+ * @param nb_out
+ * The number of elements in 'out', which also limits the maximum
+ * number of packets that can be flushed.
+ *
+ * @return
+ * The number of flushed packets
+ */
+uint16_t gro_vxlan_udp6_tbl_timeout_flush(struct gro_vxlan_udp6_tbl *tbl,
+ uint64_t flush_timestamp,
+ struct rte_mbuf **out,
+ uint16_t nb_out);
+
+/**
+ * This function returns the number of packets in an IPv4 VxLAN
+ * reassembly table.
+ *
+ * @param tbl
+ * Pointer pointing to the IPv4 VxLAN reassembly table
+ *
+ * @return
+ * The number of packets in the table
+ */
+uint32_t gro_vxlan_udp6_tbl_pkt_count(void *tbl);
+#endif
diff --git a/lib/librte_gro/meson.build b/lib/librte_gro/meson.build
index 7939081..14b6b64 100644
--- a/lib/librte_gro/meson.build
+++ b/lib/librte_gro/meson.build
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017 Intel Corporation
-sources = files('rte_gro.c', 'gro_tcp4.c', 'gro_udp4.c', 'gro_vxlan_tcp4.c', 'gro_vxlan_udp4.c', 'gro_tcp6.c', 'gro_vxlan_tcp6.c', 'gro_vxlan6_tcp4.c', 'gro_vxlan6_tcp6.c', 'gro_udp6.c')
+sources = files('rte_gro.c', 'gro_tcp4.c', 'gro_udp4.c', 'gro_vxlan_tcp4.c', 'gro_vxlan_udp4.c', 'gro_tcp6.c', 'gro_vxlan_tcp6.c', 'gro_vxlan6_tcp4.c', 'gro_vxlan6_tcp6.c', 'gro_udp6.c', 'gro_vxlan_udp6.c')
headers = files('rte_gro.h')
deps += ['ethdev']
diff --git a/lib/librte_gro/rte_gro.c b/lib/librte_gro/rte_gro.c
index 13b38cc..3bb7ea1 100644
--- a/lib/librte_gro/rte_gro.c
+++ b/lib/librte_gro/rte_gro.c
@@ -17,6 +17,7 @@
#include "gro_vxlan_tcp4.h"
#include "gro_vxlan_tcp6.h"
#include "gro_vxlan_udp4.h"
+#include "gro_vxlan_udp6.h"
typedef void *(*gro_tbl_create_fn)(uint16_t socket_id,
uint16_t max_flow_num,
@@ -29,21 +30,24 @@
gro_udp4_tbl_create, gro_vxlan_udp4_tbl_create,
gro_tcp6_tbl_create, gro_vxlan_tcp6_tbl_create,
gro_vxlan6_tcp4_tbl_create, gro_vxlan6_tcp6_tbl_create,
- gro_udp6_tbl_create, NULL};
+ gro_udp6_tbl_create, gro_vxlan_udp6_tbl_create,
+ NULL};
static gro_tbl_destroy_fn tbl_destroy_fn[RTE_GRO_TYPE_MAX_NUM] = {
gro_tcp4_tbl_destroy, gro_vxlan_tcp4_tbl_destroy,
gro_udp4_tbl_destroy, gro_vxlan_udp4_tbl_destroy,
gro_tcp6_tbl_destroy, gro_vxlan_tcp6_tbl_destroy,
gro_vxlan6_tcp4_tbl_destroy,
gro_vxlan6_tcp6_tbl_destroy,
- gro_udp6_tbl_destroy, NULL};
+ gro_udp6_tbl_destroy, gro_vxlan_udp6_tbl_destroy,
+ NULL};
static gro_tbl_pkt_count_fn tbl_pkt_count_fn[RTE_GRO_TYPE_MAX_NUM] = {
gro_tcp4_tbl_pkt_count, gro_vxlan_tcp4_tbl_pkt_count,
gro_udp4_tbl_pkt_count, gro_vxlan_udp4_tbl_pkt_count,
gro_tcp6_tbl_pkt_count, gro_vxlan_tcp6_tbl_pkt_count,
gro_vxlan6_tcp4_tbl_pkt_count,
gro_vxlan6_tcp6_tbl_pkt_count,
- gro_udp6_tbl_pkt_count, NULL};
+ gro_udp6_tbl_pkt_count, gro_vxlan_udp6_tbl_pkt_count,
+ NULL};
#define IS_IPV4_TCP_PKT(ptype) (RTE_ETH_IS_IPV4_HDR(ptype) && \
((ptype & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP) && \
@@ -126,6 +130,19 @@
((ptype & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) && \
(RTE_ETH_IS_TUNNEL_PKT(ptype) == 0))
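+/* Outer IPv4 UDP VXLAN tunnel carrying an inner UDP/IPv6 packet; any of
+ * the three inner IPv6 ptype variants a PMD may report is accepted.
+ */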
+#define IS_IPV4_VXLAN_UDP6_PKT(ptype) (RTE_ETH_IS_IPV4_HDR(ptype) && \
+ ((ptype & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) && \
+ ((ptype & RTE_PTYPE_TUNNEL_VXLAN) == \
+ RTE_PTYPE_TUNNEL_VXLAN) && \
+ ((ptype & RTE_PTYPE_INNER_L4_UDP) == \
+ RTE_PTYPE_INNER_L4_UDP) && \
+ (((ptype & RTE_PTYPE_INNER_L3_MASK) == \
+ RTE_PTYPE_INNER_L3_IPV6) || \
+ ((ptype & RTE_PTYPE_INNER_L3_MASK) == \
+ RTE_PTYPE_INNER_L3_IPV6_EXT) || \
+ ((ptype & RTE_PTYPE_INNER_L3_MASK) == \
+ RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN)))
+
/*
* GRO context structure. It keeps the table structures, which are
* used to merge packets, for different GRO types. Before using
@@ -256,6 +273,12 @@ struct gro_ctx {
struct gro_udp6_flow udp6_flows[RTE_GRO_MAX_BURST_ITEM_NUM];
struct gro_udp6_item udp6_items[RTE_GRO_MAX_BURST_ITEM_NUM] = {{0} };
+ /* Allocate a reassembly table for IPv4 VXLAN UDP/IPv6 GRO */
+ struct gro_vxlan_udp6_tbl vxlan_udp6_tbl;
+ struct gro_vxlan_udp6_flow vxlan_udp6_flows[RTE_GRO_MAX_BURST_ITEM_NUM];
+ struct gro_vxlan_udp6_item vxlan_udp6_items[RTE_GRO_MAX_BURST_ITEM_NUM]
+ = {{{0}, 0, 0} };
+
struct rte_mbuf *unprocess_pkts[nb_pkts];
uint32_t item_num;
int32_t ret;
@@ -263,7 +286,7 @@ struct gro_ctx {
uint8_t do_tcp4_gro = 0, do_vxlan_tcp_gro = 0, do_udp4_gro = 0,
do_vxlan_udp_gro = 0, do_tcp6_gro = 0, do_vxlan_tcp6_gro = 0,
do_vxlan6_tcp4_gro = 0, do_vxlan6_tcp6_gro = 0,
- do_udp6_gro = 0;
+ do_udp6_gro = 0, do_vxlan_udp6_gro = 0;
if (unlikely((param->gro_types & (RTE_GRO_IPV4_VXLAN_TCP_IPV4 |
RTE_GRO_TCP_IPV4 |
@@ -273,7 +296,8 @@ struct gro_ctx {
RTE_GRO_IPV4_VXLAN_TCP_IPV6 |
RTE_GRO_IPV6_VXLAN_TCP_IPV4 |
RTE_GRO_IPV6_VXLAN_TCP_IPV6 |
- RTE_GRO_UDP_IPV6)) == 0))
+ RTE_GRO_UDP_IPV6 |
+ RTE_GRO_IPV4_VXLAN_UDP_IPV6)) == 0))
return nb_pkts;
/* Get the maximum number of packets */
@@ -398,6 +422,19 @@ struct gro_ctx {
do_udp6_gro = 1;
}
+ if (param->gro_types & RTE_GRO_IPV4_VXLAN_UDP_IPV6) {
+ for (i = 0; i < item_num; i++)
+ vxlan_udp6_flows[i].start_index = INVALID_ARRAY_INDEX;
+
+ vxlan_udp6_tbl.flows = vxlan_udp6_flows;
+ vxlan_udp6_tbl.items = vxlan_udp6_items;
+ vxlan_udp6_tbl.flow_num = 0;
+ vxlan_udp6_tbl.item_num = 0;
+ vxlan_udp6_tbl.max_flow_num = item_num;
+ vxlan_udp6_tbl.max_item_num = item_num;
+ do_vxlan_udp6_gro = 1;
+ }
+
for (i = 0; i < nb_pkts; i++) {
/*
* The timestamp is ignored, since all packets
@@ -480,6 +517,15 @@ struct gro_ctx {
nb_after_gro--;
else if (ret < 0)
unprocess_pkts[unprocess_num++] = pkts[i];
+ } else if (IS_IPV4_VXLAN_UDP6_PKT(pkts[i]->packet_type) &&
+ do_vxlan_udp6_gro) {
+ ret = gro_vxlan_udp6_reassemble(pkts[i],
+ &vxlan_udp6_tbl, 0);
+ if (ret > 0)
+ /* Merge successfully */
+ nb_after_gro--;
+ else if (ret < 0)
+ unprocess_pkts[unprocess_num++] = pkts[i];
} else
unprocess_pkts[unprocess_num++] = pkts[i];
}
@@ -534,6 +580,11 @@ struct gro_ctx {
&pkts[i], nb_pkts - i);
}
+ if (do_vxlan_udp6_gro) {
+ i += gro_vxlan_udp6_tbl_timeout_flush(&vxlan_udp6_tbl,
+ 0, &pkts[i], nb_pkts - i);
+ }
+
/* Copy unprocessed packets */
if (unprocess_num > 0) {
memcpy(&pkts[i], unprocess_pkts,
@@ -555,12 +606,12 @@ struct gro_ctx {
struct gro_ctx *gro_ctx = ctx;
void *tcp_tbl, *udp_tbl, *vxlan_tcp_tbl, *vxlan_udp_tbl, *tcp6_tbl,
*vxlan_tcp6_tbl, *vxlan6_tcp4_tbl, *vxlan6_tcp6_tbl,
- *udp6_tbl;
+ *udp6_tbl, *vxlan_udp6_tbl;
uint64_t current_time;
uint16_t i, unprocess_num = 0;
uint8_t do_tcp4_gro, do_vxlan_tcp_gro, do_udp4_gro, do_vxlan_udp_gro,
do_tcp6_gro, do_vxlan_tcp6_gro, do_vxlan6_tcp4_gro,
- do_vxlan6_tcp6_gro, do_udp6_gro;
+ do_vxlan6_tcp6_gro, do_udp6_gro, do_vxlan_udp6_gro;
if (unlikely((gro_ctx->gro_types & (RTE_GRO_IPV4_VXLAN_TCP_IPV4 |
RTE_GRO_TCP_IPV4 |
@@ -570,7 +621,8 @@ struct gro_ctx {
RTE_GRO_IPV4_VXLAN_TCP_IPV6 |
RTE_GRO_IPV6_VXLAN_TCP_IPV4 |
RTE_GRO_IPV6_VXLAN_TCP_IPV6 |
- RTE_GRO_UDP_IPV6)) == 0))
+ RTE_GRO_UDP_IPV6 |
+ RTE_GRO_IPV4_VXLAN_UDP_IPV6)) == 0))
return nb_pkts;
tcp_tbl = gro_ctx->tbls[RTE_GRO_TCP_IPV4_INDEX];
@@ -582,6 +634,7 @@ struct gro_ctx {
vxlan6_tcp4_tbl = gro_ctx->tbls[RTE_GRO_IPV6_VXLAN_TCP_IPV4_INDEX];
vxlan6_tcp6_tbl = gro_ctx->tbls[RTE_GRO_IPV6_VXLAN_TCP_IPV6_INDEX];
udp6_tbl = gro_ctx->tbls[RTE_GRO_UDP_IPV6_INDEX];
+ vxlan_udp6_tbl = gro_ctx->tbls[RTE_GRO_IPV4_VXLAN_UDP_IPV6_INDEX];
do_tcp4_gro = (gro_ctx->gro_types & RTE_GRO_TCP_IPV4) ==
RTE_GRO_TCP_IPV4;
@@ -601,6 +654,8 @@ struct gro_ctx {
== RTE_GRO_IPV6_VXLAN_TCP_IPV6;
do_udp6_gro = (gro_ctx->gro_types & RTE_GRO_UDP_IPV6) ==
RTE_GRO_UDP_IPV6;
+ do_vxlan_udp6_gro = (gro_ctx->gro_types & RTE_GRO_IPV4_VXLAN_UDP_IPV6)
+ == RTE_GRO_IPV4_VXLAN_UDP_IPV6;
current_time = rte_rdtsc();
@@ -650,6 +705,11 @@ struct gro_ctx {
if (gro_udp6_reassemble(pkts[i], udp6_tbl,
current_time) < 0)
unprocess_pkts[unprocess_num++] = pkts[i];
+ } else if (IS_IPV4_VXLAN_UDP6_PKT(pkts[i]->packet_type) &&
+ do_vxlan_udp6_gro) {
+ if (gro_vxlan_udp6_reassemble(pkts[i], vxlan_udp6_tbl,
+ current_time) < 0)
+ unprocess_pkts[unprocess_num++] = pkts[i];
} else
unprocess_pkts[unprocess_num++] = pkts[i];
}
@@ -741,6 +801,13 @@ struct gro_ctx {
gro_ctx->tbls[RTE_GRO_UDP_IPV6_INDEX],
flush_timestamp,
&out[num], left_nb_out);
+ left_nb_out = max_nb_out - num;
+ }
+
+ if ((gro_types & RTE_GRO_IPV4_VXLAN_UDP_IPV6) && left_nb_out > 0) {
+ num += gro_vxlan_udp6_tbl_timeout_flush(gro_ctx->tbls[
+ RTE_GRO_IPV4_VXLAN_UDP_IPV6_INDEX],
+ flush_timestamp, &out[num], left_nb_out);
}
return num;
diff --git a/lib/librte_gro/rte_gro.h b/lib/librte_gro/rte_gro.h
index 94ed3d3..1824ce3 100644
--- a/lib/librte_gro/rte_gro.h
+++ b/lib/librte_gro/rte_gro.h
@@ -53,6 +53,9 @@
#define RTE_GRO_UDP_IPV6_INDEX 8
#define RTE_GRO_UDP_IPV6 (1ULL << RTE_GRO_UDP_IPV6_INDEX)
/**< UDP/IPv6 GRO flag */
+#define RTE_GRO_IPV4_VXLAN_UDP_IPV6_INDEX 9
+#define RTE_GRO_IPV4_VXLAN_UDP_IPV6 (1ULL << RTE_GRO_IPV4_VXLAN_UDP_IPV6_INDEX)
+/**< IPv4 VxLAN UDP/IPv6 GRO flag. */
/**
* Structure used to create GRO context objects or used to pass
--
1.8.3.1
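For reference, a sketch of how the new RTE_GRO_IPV4_VXLAN_UDP_IPV6 flag
could be used with the heavyweight API (illustrative values; assumes the
context is created at initialization and 'out' is sized by the caller):

    struct rte_gro_param param = {
        .gro_types = RTE_GRO_IPV4_VXLAN_UDP_IPV6,
        .max_flow_num = 128,
        .max_item_per_flow = 32,
        .socket_id = rte_socket_id(),
    };
    void *ctx = rte_gro_ctx_create(&param);

    /* Unprocessed packets are kept at the front of pkts[]; mergeable
     * ones are held inside ctx until flushed.
     */
    nb_pkts = rte_gro_reassemble(pkts, nb_pkts, ctx);

    /* Flush merged packets older than roughly 2 ms. */
    nb_out = rte_gro_timeout_flush(ctx, rte_get_tsc_hz() / 500,
            RTE_GRO_IPV4_VXLAN_UDP_IPV6, out, RTE_DIM(out));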
Thread overview: 13+ messages
2020-12-21 3:50 [dpdk-dev] [PATCH v1 0/8] gro: support TCP/IPv6 and UDP/IPv6 for VLAN and VXLAN yang_y_yi
2020-12-21 3:50 ` [dpdk-dev] [PATCH v1 1/8] gro: support TCP/IPv6 yang_y_yi
2020-12-21 3:50 ` [dpdk-dev] [PATCH v1 2/8] gro: support IPv4 VXLAN TCP/IPv6 yang_y_yi
2020-12-21 3:50 ` [dpdk-dev] [PATCH v1 3/8] gro: support IPv6 VXLAN TCP/IPv4 yang_y_yi
2020-12-21 3:50 ` [dpdk-dev] [PATCH v1 4/8] gro: support IPv6 VXLAN TCP/IPv6 yang_y_yi
2020-12-21 3:50 ` [dpdk-dev] [PATCH v1 5/8] gro: support UDP/IPv6 yang_y_yi
2020-12-21 3:50 ` yang_y_yi [this message]
2020-12-21 3:50 ` [dpdk-dev] [PATCH v1 7/8] gro: support IPv6 VXLAN UDP/IPv4 yang_y_yi
2020-12-21 3:50 ` [dpdk-dev] [PATCH v1 8/8] gro: support IPv6 VXLAN UDP/IPv6 yang_y_yi
2021-03-24 21:22 ` [dpdk-dev] [PATCH v1 0/8] gro: support TCP/IPv6 and UDP/IPv6 for VLAN and VXLAN Thomas Monjalon
2021-07-24 8:48 ` Thomas Monjalon
2021-07-25 23:53 ` Hu, Jiayu
2023-06-12 2:11 ` Stephen Hemminger