From: "Chen, Junjie J"
To: "Hu, Jiayu", "dev@dpdk.org"
CC: "Tan, Jianfeng", "Ananyev, Konstantin", "stephen@networkplumber.org", "Yigit, Ferruh", "Yao, Lei A"
Date: Fri, 29 Dec 2017 03:53:44 +0000
References: <1513219779-100115-1-git-send-email-jiayu.hu@intel.com> <1513927544-97241-1-git-send-email-jiayu.hu@intel.com> <1513927544-97241-3-git-send-email-jiayu.hu@intel.com>
In-Reply-To: <1513927544-97241-3-git-send-email-jiayu.hu@intel.com>
Subject: Re: [dpdk-dev] [PATCH v3 2/2] gro: support VxLAN GRO

> -----Original Message-----
> From: Hu, Jiayu
> Sent: Friday, December 22, 2017 3:26 PM
> To: dev@dpdk.org
> Cc: Tan, Jianfeng; Chen, Junjie J; Ananyev, Konstantin; stephen@networkplumber.org; Yigit, Ferruh; Yao, Lei A; Hu, Jiayu
> Subject: [PATCH v3 2/2] gro: support VxLAN GRO
>
> This patch adds a framework that allows GRO on tunneled packets.
> Furthermore, it leverages that framework to provide GRO support for
> VxLAN-encapsulated packets. Supported VxLAN packets must have an outer
> IPv4 header, and contain an inner TCP/IPv4 packet.
>
> VxLAN GRO doesn't check if input packets have correct checksums and
> doesn't update checksums for output packets.
> Additionally, it assumes the packets are complete (i.e., MF==0 &&
> frag_off==0), when IP fragmentation is possible (i.e., DF==0).
>
> Signed-off-by: Jiayu Hu
> ---
>  .../prog_guide/generic_receive_offload_lib.rst |  31 +-
>  lib/librte_gro/Makefile                        |   1 +
>  lib/librte_gro/gro_vxlan_tcp4.c                | 515 +++++++++++++++++++++
>  lib/librte_gro/gro_vxlan_tcp4.h                | 184 ++++++++
>  lib/librte_gro/rte_gro.c                       | 129 +++++-
>  lib/librte_gro/rte_gro.h                       |   5 +-
>  6 files changed, 837 insertions(+), 28 deletions(-)
>  create mode 100644 lib/librte_gro/gro_vxlan_tcp4.c
>  create mode 100644 lib/librte_gro/gro_vxlan_tcp4.h
>
> diff --git a/doc/guides/prog_guide/generic_receive_offload_lib.rst b/doc/guides/prog_guide/generic_receive_offload_lib.rst
> index c2d7a41..078bec0 100644
> --- a/doc/guides/prog_guide/generic_receive_offload_lib.rst
> +++ b/doc/guides/prog_guide/generic_receive_offload_lib.rst
> @@ -57,7 +57,9 @@ assumes the packets are complete (i.e., MF==0 && frag_off==0), when IP
>  fragmentation is possible (i.e., DF==0). Additionally, it complies RFC
>  6864 to process the IPv4 ID field.
>
> -Currently, the GRO library provides GRO supports for TCP/IPv4 packets.
> +Currently, the GRO library provides GRO supports for TCP/IPv4 packets
> +and VxLAN packets which contain an outer IPv4 header and an inner
> +TCP/IPv4 packet.
>
>  Two Sets of API
>  ---------------
> @@ -108,7 +110,8 @@ Reassembly Algorithm
>
>  The reassembly algorithm is used for reassembling packets. In the GRO
>  library, different GRO types can use different algorithms. In this
> -section, we will introduce an algorithm, which is used by TCP/IPv4 GRO.
> +section, we will introduce an algorithm, which is used by TCP/IPv4 GRO
> +and VxLAN GRO.
>
>  Challenges
>  ~~~~~~~~~~
> @@ -185,6 +188,30 @@ Header fields deciding if two packets are neighbors include:
>  - IPv4 ID. The IPv4 ID fields of the packets, whose DF bit is 0, should
>    be increased by 1.
>
> +VxLAN GRO
> +---------
> +
> +The table structure used by VxLAN GRO, which is in charge of processing
> +VxLAN packets with an outer IPv4 header and inner TCP/IPv4 packet, is
> +similar with that of TCP/IPv4 GRO. Differently, the header fields used
> +to define a VxLAN flow include:
> +
> +- outer source and destination: Ethernet and IP address, UDP port
> +
> +- VxLAN header (VNI and flag)
> +
> +- inner source and destination: Ethernet and IP address, TCP port
> +
> +Header fields deciding if packets are neighbors include:
> +
> +- outer IPv4 ID. The IPv4 ID fields of the packets, whose DF bit in the
> +  outer IPv4 header is 0, should be increased by 1.
> +
> +- inner TCP sequence number
> +
> +- inner IPv4 ID. The IPv4 ID fields of the packets, whose DF bit in the
> +  inner IPv4 header is 0, should be increased by 1.
> +
>  .. note::
>          We comply RFC 6864 to process the IPv4 ID field. Specifically,
>          we check IPv4 ID fields for the packets whose DF bit is 0 and
> diff --git a/lib/librte_gro/Makefile b/lib/librte_gro/Makefile
> index eb423cc..0110455 100644
> --- a/lib/librte_gro/Makefile
> +++ b/lib/librte_gro/Makefile
> @@ -45,6 +45,7 @@ LIBABIVER := 1
>  # source files
>  SRCS-$(CONFIG_RTE_LIBRTE_GRO) += rte_gro.c
>  SRCS-$(CONFIG_RTE_LIBRTE_GRO) += gro_tcp4.c
> +SRCS-$(CONFIG_RTE_LIBRTE_GRO) += gro_vxlan_tcp4.c
>
>  # install this header file
>  SYMLINK-$(CONFIG_RTE_LIBRTE_GRO)-include += rte_gro.h
> diff --git a/lib/librte_gro/gro_vxlan_tcp4.c b/lib/librte_gro/gro_vxlan_tcp4.c
> new file mode 100644
> index 0000000..6567779
> --- /dev/null
> +++ b/lib/librte_gro/gro_vxlan_tcp4.c
> @@ -0,0 +1,515 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include "gro_vxlan_tcp4.h"
> +
> +void *
> +gro_vxlan_tcp4_tbl_create(uint16_t socket_id,
> +		uint16_t max_flow_num,
> +		uint16_t max_item_per_flow)
> +{
> +	struct gro_vxlan_tcp4_tbl *tbl;
> +	size_t size;
> +	uint32_t entries_num, i;
> +
> +	entries_num = max_flow_num * max_item_per_flow;
> +	entries_num = RTE_MIN(entries_num, GRO_VXLAN_TCP4_TBL_MAX_ITEM_NUM);
> +
> +	if (entries_num == 0)
> +		return NULL;
> +
> +	tbl = rte_zmalloc_socket(__func__,
> +			sizeof(struct gro_vxlan_tcp4_tbl),
> +			RTE_CACHE_LINE_SIZE,
> +			socket_id);
> +	if (tbl == NULL)
> +		return NULL;
> +
> +	size = sizeof(struct gro_vxlan_tcp4_item) * entries_num;
> +	tbl->items = rte_zmalloc_socket(__func__,
> +			size,
> +			RTE_CACHE_LINE_SIZE,
> +			socket_id);
> +	if (tbl->items == NULL) {
> +		rte_free(tbl);
> +		return NULL;
> +	}
> +	tbl->max_item_num = entries_num;
> +
> +	size = sizeof(struct gro_vxlan_tcp4_flow) * entries_num;
> +	tbl->flows = rte_zmalloc_socket(__func__,
> +			size,
> +			RTE_CACHE_LINE_SIZE,
> +			socket_id);
> +	if (tbl->flows == NULL) {
> +		rte_free(tbl->items);
> +		rte_free(tbl);
> +		return NULL;
> +	}
> +
> +	for (i = 0; i < entries_num; i++)
> +		tbl->flows[i].start_index = INVALID_ARRAY_INDEX;
> +	tbl->max_flow_num = entries_num;
> +
> +	return tbl;
> +}
> +
> +void
> +gro_vxlan_tcp4_tbl_destroy(void *tbl)
> +{
> +	struct gro_vxlan_tcp4_tbl *vxlan_tbl = tbl;
> +
> +	if (vxlan_tbl) {
> +		rte_free(vxlan_tbl->items);
> +		rte_free(vxlan_tbl->flows);
> +	}
> +	rte_free(vxlan_tbl);
> +}
> +
> +static inline uint32_t
> +find_an_empty_item(struct gro_vxlan_tcp4_tbl *tbl)
> +{
> +	uint32_t max_item_num = tbl->max_item_num, i;
> +
> +	for (i = 0; i < max_item_num; i++)
> +		if (tbl->items[i].inner_item.firstseg == NULL)
> +			return i;
> +	return INVALID_ARRAY_INDEX;
> +}
> +
> +static inline uint32_t
> +find_an_empty_flow(struct gro_vxlan_tcp4_tbl *tbl)
> +{
> +	uint32_t max_flow_num = tbl->max_flow_num, i;
> +
> +	for (i = 0; i < max_flow_num; i++)
> +		if (tbl->flows[i].start_index == INVALID_ARRAY_INDEX)
> +			return i;
> +	return INVALID_ARRAY_INDEX;
> +}
> +
> +static inline uint32_t
> +insert_new_item(struct gro_vxlan_tcp4_tbl *tbl,
> +		struct rte_mbuf *pkt,
> +		uint64_t start_time,
> +		uint32_t prev_idx,
> +		uint32_t sent_seq,
> +		uint16_t outer_ip_id,
> +		uint16_t ip_id,
> +		uint8_t outer_is_atomic,
> +		uint8_t is_atomic)
> +{
> +	uint32_t item_idx;
> +
> +	item_idx = find_an_empty_item(tbl);
> +	if (item_idx == INVALID_ARRAY_INDEX)
> +		return INVALID_ARRAY_INDEX;
> +
> +	tbl->items[item_idx].inner_item.firstseg = pkt;
> +	tbl->items[item_idx].inner_item.lastseg = rte_pktmbuf_lastseg(pkt);
> +	tbl->items[item_idx].inner_item.start_time = start_time;
> +	tbl->items[item_idx].inner_item.next_pkt_idx = INVALID_ARRAY_INDEX;
> +	tbl->items[item_idx].inner_item.sent_seq = sent_seq;
> +	tbl->items[item_idx].inner_item.ip_id = ip_id;
> +	tbl->items[item_idx].inner_item.nb_merged = 1;
> +	tbl->items[item_idx].inner_item.is_atomic = is_atomic;
> +	tbl->items[item_idx].outer_ip_id = outer_ip_id;
> +	tbl->items[item_idx].outer_is_atomic = outer_is_atomic;
> +	tbl->item_num++;
> +
> +	/* If the previous packet exists, chain the new one with it. */
> +	if (prev_idx != INVALID_ARRAY_INDEX) {
> +		tbl->items[item_idx].inner_item.next_pkt_idx =
> +			tbl->items[prev_idx].inner_item.next_pkt_idx;
> +		tbl->items[prev_idx].inner_item.next_pkt_idx = item_idx;
> +	}
> +
> +	return item_idx;
> +}
> +
> +static inline uint32_t
> +delete_item(struct gro_vxlan_tcp4_tbl *tbl,
> +		uint32_t item_idx,
> +		uint32_t prev_item_idx)
> +{
> +	uint32_t next_idx = tbl->items[item_idx].inner_item.next_pkt_idx;
> +
> +	/* NULL indicates an empty item. */
> +	tbl->items[item_idx].inner_item.firstseg = NULL;
> +	tbl->item_num--;
> +	if (prev_item_idx != INVALID_ARRAY_INDEX)
> +		tbl->items[prev_item_idx].inner_item.next_pkt_idx = next_idx;
> +
> +	return next_idx;
> +}
> +
> +static inline uint32_t
> +insert_new_flow(struct gro_vxlan_tcp4_tbl *tbl,
> +		struct vxlan_tcp4_flow_key *src,
> +		uint32_t item_idx)
> +{
> +	struct vxlan_tcp4_flow_key *dst;
> +	uint32_t flow_idx;
> +
> +	flow_idx = find_an_empty_flow(tbl);
> +	if (flow_idx == INVALID_ARRAY_INDEX)
> +		return INVALID_ARRAY_INDEX;
> +
> +	dst = &(tbl->flows[flow_idx].key);
> +
> +	ether_addr_copy(&(src->inner_key.eth_saddr),
> +			&(dst->inner_key.eth_saddr));
> +	ether_addr_copy(&(src->inner_key.eth_daddr),
> +			&(dst->inner_key.eth_daddr));
> +	dst->inner_key.ip_src_addr = src->inner_key.ip_src_addr;
> +	dst->inner_key.ip_dst_addr = src->inner_key.ip_dst_addr;
> +	dst->inner_key.recv_ack = src->inner_key.recv_ack;
> +	dst->inner_key.src_port = src->inner_key.src_port;
> +	dst->inner_key.dst_port = src->inner_key.dst_port;
> +
> +	dst->vxlan_hdr.vx_flags = src->vxlan_hdr.vx_flags;
> +	dst->vxlan_hdr.vx_vni = src->vxlan_hdr.vx_vni;
> +	ether_addr_copy(&(src->outer_eth_saddr), &(dst->outer_eth_saddr));
> +	ether_addr_copy(&(src->outer_eth_daddr), &(dst->outer_eth_daddr));
> +	dst->outer_ip_src_addr = src->outer_ip_src_addr;
> +	dst->outer_ip_dst_addr = src->outer_ip_dst_addr;
> +	dst->outer_src_port = src->outer_src_port;
> +	dst->outer_dst_port = src->outer_dst_port;
> +
> +	tbl->flows[flow_idx].start_index = item_idx;
> +	tbl->flow_num++;
> +
> +	return flow_idx;
> +}
> +
> +static inline int
> +is_same_vxlan_tcp4_flow(struct vxlan_tcp4_flow_key k1,
> +		struct vxlan_tcp4_flow_key k2)
> +{
> +	return (is_same_ether_addr(&k1.outer_eth_saddr, &k2.outer_eth_saddr) &&
> +			is_same_ether_addr(&k1.outer_eth_daddr,
> +				&k2.outer_eth_daddr) &&
> +			(k1.outer_ip_src_addr == k2.outer_ip_src_addr) &&
> +			(k1.outer_ip_dst_addr == k2.outer_ip_dst_addr) &&
> +			(k1.outer_src_port == k2.outer_src_port) &&
> +			(k1.outer_dst_port == k2.outer_dst_port) &&
> +			(k1.vxlan_hdr.vx_flags == k2.vxlan_hdr.vx_flags) &&
> +			(k1.vxlan_hdr.vx_vni == k2.vxlan_hdr.vx_vni) &&
> +			is_same_tcp4_flow(k1.inner_key, k2.inner_key));
> +}
> +
> +static inline int
> +check_vxlan_seq_option(struct gro_vxlan_tcp4_item *item,
> +		struct tcp_hdr *tcp_hdr,
> +		uint32_t sent_seq,
> +		uint16_t outer_ip_id,
> +		uint16_t ip_id,
> +		uint16_t tcp_hl,
> +		uint16_t tcp_dl,
> +		uint8_t outer_is_atomic,
> +		uint8_t is_atomic)
> +{
> +	struct rte_mbuf *pkt = item->inner_item.firstseg;
> +	int cmp;
> +	uint16_t l2_offset;
> +
> +	/* Don't merge packets whose outer DF bits are different. */
> +	if (unlikely(item->outer_is_atomic ^ outer_is_atomic))
> +		return 0;
> +
> +	l2_offset = pkt->outer_l2_len + pkt->outer_l3_len;
> +	cmp = check_seq_option(&item->inner_item, tcp_hdr, sent_seq, ip_id,
> +			tcp_hl, tcp_dl, l2_offset, is_atomic);
> +	if ((cmp == 1) && (outer_is_atomic ||
> +				(outer_ip_id == item->outer_ip_id +
> +				 item->inner_item.nb_merged)))
> +		/* Append the packet. */
> +		return 1;
> +	else if ((cmp == -1) && (outer_is_atomic ||
> +				(outer_ip_id + 1 == item->outer_ip_id)))
> +		/* Prepend the packet. */
> +		return -1;
> +
> +	return 0;
> +}
> +
> +static inline int
> +merge_two_vxlan_tcp4_packets(struct gro_vxlan_tcp4_item *item,
> +		struct rte_mbuf *pkt,
> +		int cmp,
> +		uint32_t sent_seq,
> +		uint16_t outer_ip_id,
> +		uint16_t ip_id)
> +{
> +	if (merge_two_tcp4_packets(&item->inner_item, pkt, cmp, sent_seq,
> +				ip_id, pkt->outer_l2_len +
> +				pkt->outer_l3_len)) {
> +		item->outer_ip_id = cmp < 0 ? outer_ip_id : item->outer_ip_id;
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +
> +static inline void
> +update_vxlan_header(struct gro_vxlan_tcp4_item *item)
> +{
> +	struct ipv4_hdr *ipv4_hdr;
> +	struct udp_hdr *udp_hdr;
> +	struct rte_mbuf *pkt = item->inner_item.firstseg;
> +	uint16_t len;
> +
> +	/* Update the outer IPv4 header. */
> +	len = pkt->pkt_len - pkt->outer_l2_len;
> +	ipv4_hdr = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
> +			pkt->outer_l2_len);
> +	ipv4_hdr->total_length = rte_cpu_to_be_16(len);
> +
> +	/* Update the outer UDP header. */
> +	len -= pkt->outer_l3_len;
> +	udp_hdr = (struct udp_hdr *)((char *)ipv4_hdr + pkt->outer_l3_len);
> +	udp_hdr->dgram_len = rte_cpu_to_be_16(len);
> +
> +	/* Update the inner IPv4 header. */
> +	len -= pkt->l2_len;
> +	ipv4_hdr = (struct ipv4_hdr *)((char *)udp_hdr + pkt->l2_len);
> +	ipv4_hdr->total_length = rte_cpu_to_be_16(len);
> +}
> +
> +int32_t
> +gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
> +		struct gro_vxlan_tcp4_tbl *tbl,
> +		uint64_t start_time)
> +{
> +	struct ether_hdr *outer_eth_hdr, *eth_hdr;
> +	struct ipv4_hdr *outer_ipv4_hdr, *ipv4_hdr;
> +	struct tcp_hdr *tcp_hdr;
> +	struct udp_hdr *udp_hdr;
> +	struct vxlan_hdr *vxlan_hdr;
> +	uint32_t sent_seq;
> +	uint16_t tcp_dl, frag_off, outer_ip_id, ip_id;
> +	uint8_t outer_is_atomic, is_atomic;
> +
> +	struct vxlan_tcp4_flow_key key;
> +	uint32_t cur_idx, prev_idx, item_idx;
> +	uint32_t i, max_flow_num;
> +	int cmp;
> +	uint16_t hdr_len;
> +
> +	outer_eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
> +	outer_ipv4_hdr = (struct ipv4_hdr *)((char *)outer_eth_hdr +
> +			pkt->outer_l2_len);
> +	udp_hdr = (struct udp_hdr *)((char *)outer_ipv4_hdr +
> +			pkt->outer_l3_len);
> +	vxlan_hdr = (struct vxlan_hdr *)((char *)udp_hdr +
> +			sizeof(struct udp_hdr));
> +	eth_hdr = (struct ether_hdr *)((char *)vxlan_hdr +
> +			sizeof(struct vxlan_hdr));
> +	ipv4_hdr = (struct ipv4_hdr *)((char *)udp_hdr + pkt->l2_len);
> +	tcp_hdr = (struct tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
> +
> +	/*
> +	 * Don't process the packet which has FIN, SYN, RST, PSH, URG,
> +	 * ECE or CWR set.
> +	 */
> +	if (tcp_hdr->tcp_flags != TCP_ACK_FLAG)
> +		return -1;
> +
> +	hdr_len = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len +
> +		pkt->l3_len + pkt->l4_len;
> +	/*
> +	 * Don't process the packet whose payload length is less than or
> +	 * equal to 0.
> +	 */
> +	tcp_dl = pkt->pkt_len - hdr_len;
> +	if (tcp_dl <= 0)
> +		return -1;
> +
> +	/*
> +	 * Save IPv4 ID for the packet whose DF bit is 0. For the packet
> +	 * whose DF bit is 1, IPv4 ID is ignored.
> +	 */
> +	frag_off = rte_be_to_cpu_16(outer_ipv4_hdr->fragment_offset);
> +	outer_is_atomic = (frag_off & IPV4_HDR_DF_FLAG) == IPV4_HDR_DF_FLAG;
> +	outer_ip_id = outer_is_atomic ? 0 :
> +		rte_be_to_cpu_16(outer_ipv4_hdr->packet_id);
> +	frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
> +	is_atomic = (frag_off & IPV4_HDR_DF_FLAG) == IPV4_HDR_DF_FLAG;
> +	ip_id = is_atomic ? 0 : rte_be_to_cpu_16(ipv4_hdr->packet_id);
> +
> +	sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
> +
> +	ether_addr_copy(&(eth_hdr->s_addr), &(key.inner_key.eth_saddr));
> +	ether_addr_copy(&(eth_hdr->d_addr), &(key.inner_key.eth_daddr));
> +	key.inner_key.ip_src_addr = ipv4_hdr->src_addr;
> +	key.inner_key.ip_dst_addr = ipv4_hdr->dst_addr;
> +	key.inner_key.recv_ack = tcp_hdr->recv_ack;
> +	key.inner_key.src_port = tcp_hdr->src_port;
> +	key.inner_key.dst_port = tcp_hdr->dst_port;
> +
> +	key.vxlan_hdr.vx_flags = vxlan_hdr->vx_flags;
> +	key.vxlan_hdr.vx_vni = vxlan_hdr->vx_vni;
> +	ether_addr_copy(&(outer_eth_hdr->s_addr), &(key.outer_eth_saddr));
> +	ether_addr_copy(&(outer_eth_hdr->d_addr), &(key.outer_eth_daddr));
> +	key.outer_ip_src_addr = outer_ipv4_hdr->src_addr;
> +	key.outer_ip_dst_addr = outer_ipv4_hdr->dst_addr;
> +	key.outer_src_port = udp_hdr->src_port;
> +	key.outer_dst_port = udp_hdr->dst_port;
> +
> +	/* Search for a matched flow. */
> +	max_flow_num = tbl->max_flow_num;
> +	for (i = 0; i < max_flow_num; i++) {
> +		if (tbl->flows[i].start_index != INVALID_ARRAY_INDEX &&
> +				is_same_vxlan_tcp4_flow(tbl->flows[i].key,
> +					key))
> +			break;
> +	}
> +
> +	/*
> +	 * Can't find a matched flow. Insert a new flow and store the
> +	 * packet into the flow.
> +	 */
> +	if (i == tbl->max_flow_num) {
> +		item_idx = insert_new_item(tbl, pkt, start_time,
> +				INVALID_ARRAY_INDEX, sent_seq, outer_ip_id,
> +				ip_id, outer_is_atomic, is_atomic);
> +		if (item_idx == INVALID_ARRAY_INDEX)
> +			return -1;
> +		if (insert_new_flow(tbl, &key, item_idx) ==
> +				INVALID_ARRAY_INDEX) {
> +			/*
> +			 * Fail to insert a new flow, so
> +			 * delete the inserted packet.
> +			 */
> +			delete_item(tbl, item_idx, INVALID_ARRAY_INDEX);
> +			return -1;
> +		}
> +		return 0;
> +	}
> +
> +	/* Check all packets in the flow and try to find a neighbor. */
> +	cur_idx = tbl->flows[i].start_index;
> +	prev_idx = cur_idx;
> +	do {
> +		cmp = check_vxlan_seq_option(&(tbl->items[cur_idx]), tcp_hdr,
> +				sent_seq, outer_ip_id, ip_id, pkt->l4_len,
> +				tcp_dl, outer_is_atomic, is_atomic);
> +		if (cmp) {
> +			if (merge_two_vxlan_tcp4_packets(&(tbl->items[cur_idx]),
> +						pkt, cmp, sent_seq,
> +						outer_ip_id, ip_id))
> +				return 1;
> +			/*
> +			 * Can't merge two packets, as the packet
> +			 * length will be greater than the max value.
> +			 * Insert the packet into the flow.
> +			 */
> +			if (insert_new_item(tbl, pkt, start_time, prev_idx,
> +						sent_seq, outer_ip_id,
> +						ip_id, outer_is_atomic,
> +						is_atomic) ==
> +					INVALID_ARRAY_INDEX)
> +				return -1;
> +			return 0;
> +		}
> +		prev_idx = cur_idx;
> +		cur_idx = tbl->items[cur_idx].inner_item.next_pkt_idx;
> +	} while (cur_idx != INVALID_ARRAY_INDEX);
> +
> +	/* Can't find neighbor. Insert the packet into the flow. */
> +	if (insert_new_item(tbl, pkt, start_time, prev_idx, sent_seq,
> +				outer_ip_id, ip_id, outer_is_atomic,
> +				is_atomic) == INVALID_ARRAY_INDEX)
> +		return -1;
> +
> +	return 0;
> +}
> +
> +uint16_t
> +gro_vxlan_tcp4_tbl_timeout_flush(struct gro_vxlan_tcp4_tbl *tbl,
> +		uint64_t flush_timestamp,
> +		struct rte_mbuf **out,
> +		uint16_t nb_out)
> +{
> +	uint16_t k = 0;
> +	uint32_t i, j;
> +	uint32_t max_flow_num = tbl->max_flow_num;
> +
> +	for (i = 0; i < max_flow_num; i++) {
> +		if (unlikely(tbl->flow_num == 0))
> +			return k;
> +
> +		j = tbl->flows[i].start_index;
> +		while (j != INVALID_ARRAY_INDEX) {
> +			if (tbl->items[j].inner_item.start_time <=
> +					flush_timestamp) {
> +				out[k++] = tbl->items[j].inner_item.firstseg;
> +				if (tbl->items[j].inner_item.nb_merged > 1)
> +					update_vxlan_header(&(tbl->items[j]));
> +				/*
> +				 * Delete the item and get the next packet
> +				 * index.
> +				 */
> +				j = delete_item(tbl, j, INVALID_ARRAY_INDEX);
> +				tbl->flows[i].start_index = j;
> +				if (j == INVALID_ARRAY_INDEX)
> +					tbl->flow_num--;
> +
> +				if (unlikely(k == nb_out))
> +					return k;
> +			} else
> +				/*
> +				 * The left packets in the flow won't be
> +				 * timeout. Go to check other flows.
> +				 */
> +				break;
> +		}
> +	}
> +	return k;
> +}
> +
> +uint32_t
> +gro_vxlan_tcp4_tbl_pkt_count(void *tbl)
> +{
> +	struct gro_vxlan_tcp4_tbl *gro_tbl = tbl;
> +
> +	if (gro_tbl)
> +		return gro_tbl->item_num;
> +
> +	return 0;
> +}
> diff --git a/lib/librte_gro/gro_vxlan_tcp4.h b/lib/librte_gro/gro_vxlan_tcp4.h
> new file mode 100644
> index 0000000..66baf73
> --- /dev/null
> +++ b/lib/librte_gro/gro_vxlan_tcp4.h
> @@ -0,0 +1,184 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2017 Intel Corporation. All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _GRO_VXLAN_TCP4_H_
> +#define _GRO_VXLAN_TCP4_H_
> +
> +#include "gro_tcp4.h"
> +
> +#define GRO_VXLAN_TCP4_TBL_MAX_ITEM_NUM (1024UL * 1024UL)
> +
> +/* Header fields representing a VxLAN flow */
> +struct vxlan_tcp4_flow_key {
> +	struct tcp4_flow_key inner_key;
> +	struct vxlan_hdr vxlan_hdr;
> +
> +	struct ether_addr outer_eth_saddr;
> +	struct ether_addr outer_eth_daddr;
> +
> +	uint32_t outer_ip_src_addr;
> +	uint32_t outer_ip_dst_addr;
> +
> +	/* Outer UDP ports */
> +	uint16_t outer_src_port;
> +	uint16_t outer_dst_port;
> +
> +};
> +
> +struct gro_vxlan_tcp4_flow {
> +	struct vxlan_tcp4_flow_key key;
> +	/*
> +	 * The index of the first packet in the flow. INVALID_ARRAY_INDEX
> +	 * indicates an empty flow.
> +	 */
> +	uint32_t start_index;
> +};
> +
> +struct gro_vxlan_tcp4_item {
> +	struct gro_tcp4_item inner_item;
> +	/* IPv4 ID in the outer IPv4 header */
> +	uint16_t outer_ip_id;
> +	/* Indicate if outer IPv4 ID can be ignored */
> +	uint8_t outer_is_atomic;
> +};
> +
> +/*
> + * VxLAN (with an outer IPv4 header and an inner TCP/IPv4 packet)
> + * reassembly table structure
> + */
> +struct gro_vxlan_tcp4_tbl {
> +	/* item array */
> +	struct gro_vxlan_tcp4_item *items;
> +	/* flow array */
> +	struct gro_vxlan_tcp4_flow *flows;
> +	/* current item number */
> +	uint32_t item_num;
> +	/* current flow number */
> +	uint32_t flow_num;
> +	/* the maximum item number */
> +	uint32_t max_item_num;
> +	/* the maximum flow number */
> +	uint32_t max_flow_num;
> +};
> +
> +/**
> + * This function creates a VxLAN reassembly table for VxLAN packets
> + * which have an outer IPv4 header and an inner TCP/IPv4 packet.
> + *
> + * @param socket_id
> + *  Socket index for allocating the table
> + * @param max_flow_num
> + *  The maximum number of flows in the table
> + * @param max_item_per_flow
> + *  The maximum number of packets per flow
> + *
> + * @return
> + *  - Return the table pointer on success.
> + *  - Return NULL on failure.
> + */
> +void *gro_vxlan_tcp4_tbl_create(uint16_t socket_id,
> +		uint16_t max_flow_num,
> +		uint16_t max_item_per_flow);
> +
> +/**
> + * This function destroys a VxLAN reassembly table.
> + *
> + * @param tbl
> + *  Pointer pointing to the VxLAN reassembly table
> + */
> +void gro_vxlan_tcp4_tbl_destroy(void *tbl);
> +
> +/**
> + * This function merges a VxLAN packet which has an outer IPv4 header and
> + * an inner TCP/IPv4 packet. It doesn't process the packet, whose TCP
> + * header has SYN, FIN, RST, PSH, CWR, ECE or URG bit set, or which
> + * doesn't have payload.
> + *
> + * This function doesn't check if the packet has correct checksums and
> + * doesn't re-calculate checksums for the merged packet. Additionally,
> + * it assumes the packets are complete (i.e., MF==0 && frag_off==0), when
> + * IP fragmentation is possible (i.e., DF==0). It returns the packet, if
> + * the packet has invalid parameters (e.g. SYN bit is set) or there is no
> + * available space in the table.
> + *
> + * @param pkt
> + *  Packet to reassemble
> + * @param tbl
> + *  Pointer pointing to the VxLAN reassembly table
> + * @start_time
> + *  The time when the packet is inserted into the table
> + *
> + * @return
> + *  - Return a positive value if the packet is merged.
> + *  - Return zero if the packet isn't merged but stored in the table.
> + *  - Return a negative value for invalid parameters or no available
> + *    space in the table.
> + */
> +int32_t gro_vxlan_tcp4_reassemble(struct rte_mbuf *pkt,
> +		struct gro_vxlan_tcp4_tbl *tbl,
> +		uint64_t start_time);
> +
> +/**
> + * This function flushes timeout packets in the VxLAN reassembly table,
> + * and without updating checksums.
> + *
> + * @param tbl
> + *  Pointer pointing to a VxLAN GRO table
> + * @param flush_timestamp
> + *  This function flushes packets which are inserted into the table
> + *  before or at the flush_timestamp.
> + * @param out
> + *  Pointer array used to keep flushed packets
> + * @param nb_out
> + *  The element number in 'out'. It also determines the maximum number of
> + *  packets that can be flushed finally.
> + *
> + * @return
> + *  The number of flushed packets
> + */
> +uint16_t gro_vxlan_tcp4_tbl_timeout_flush(struct gro_vxlan_tcp4_tbl *tbl,
> +		uint64_t flush_timestamp,
> +		struct rte_mbuf **out,
> +		uint16_t nb_out);
> +
> +/**
> + * This function returns the number of the packets in a VxLAN
> + * reassembly table.
> + *
> + * @param tbl
> + *  Pointer pointing to the VxLAN reassembly table
> + *
> + * @return
> + *  The number of packets in the table
> + */
> +uint32_t gro_vxlan_tcp4_tbl_pkt_count(void *tbl);
> +#endif
> diff --git a/lib/librte_gro/rte_gro.c b/lib/librte_gro/rte_gro.c
> index b3931a8..5a26893 100644
> --- a/lib/librte_gro/rte_gro.c
> +++ b/lib/librte_gro/rte_gro.c
> @@ -37,6 +37,7 @@
>
>  #include "rte_gro.h"
>  #include "gro_tcp4.h"
> +#include "gro_vxlan_tcp4.h"
>
>  typedef void *(*gro_tbl_create_fn)(uint16_t socket_id,
>  		uint16_t max_flow_num,
> @@ -45,15 +46,28 @@ typedef void (*gro_tbl_destroy_fn)(void *tbl);
>  typedef uint32_t (*gro_tbl_pkt_count_fn)(void *tbl);
>
>  static gro_tbl_create_fn tbl_create_fn[RTE_GRO_TYPE_MAX_NUM] = {
> -		gro_tcp4_tbl_create, NULL};
> +		gro_tcp4_tbl_create, gro_vxlan_tcp4_tbl_create, NULL};
>  static gro_tbl_destroy_fn tbl_destroy_fn[RTE_GRO_TYPE_MAX_NUM] = {
> -		gro_tcp4_tbl_destroy, NULL};
> +		gro_tcp4_tbl_destroy, gro_vxlan_tcp4_tbl_destroy,
> +		NULL};
>  static gro_tbl_pkt_count_fn tbl_pkt_count_fn[RTE_GRO_TYPE_MAX_NUM] = {
> -		gro_tcp4_tbl_pkt_count, NULL};
> +		gro_tcp4_tbl_pkt_count, gro_vxlan_tcp4_tbl_pkt_count,
> +		NULL};
>
>  #define IS_IPV4_TCP_PKT(ptype) (RTE_ETH_IS_IPV4_HDR(ptype) && \
>  		((ptype & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP))
>
> +#define IS_IPV4_VXLAN_TCP4_PKT(ptype) (RTE_ETH_IS_IPV4_HDR(ptype) && \
> +		((ptype & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) && \
> +		((ptype & RTE_PTYPE_TUNNEL_VXLAN) == \
> +		 RTE_PTYPE_TUNNEL_VXLAN) && \
> +		((ptype & RTE_PTYPE_INNER_L4_TCP) == \
> +		 RTE_PTYPE_INNER_L4_TCP) && \
> +		(((ptype & RTE_PTYPE_INNER_L3_MASK) & \
> +		  (RTE_PTYPE_INNER_L3_IPV4 | \
> +		   RTE_PTYPE_INNER_L3_IPV4_EXT | \
> +		   RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN)) != 0))
> +
>  /*
>   * GRO context structure. It keeps the table structures, which are
>   * used to merge packets, for different GRO types. Before using
> @@ -137,12 +151,20 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  	struct gro_tcp4_flow tcp_flows[RTE_GRO_MAX_BURST_ITEM_NUM];
>  	struct gro_tcp4_item tcp_items[RTE_GRO_MAX_BURST_ITEM_NUM] = {{0} };
>
> +	/* Allocate a reassembly table for VXLAN GRO */
> +	struct gro_vxlan_tcp4_tbl vxlan_tbl;
> +	struct gro_vxlan_tcp4_flow vxlan_flows[RTE_GRO_MAX_BURST_ITEM_NUM];
> +	struct gro_vxlan_tcp4_item vxlan_items[RTE_GRO_MAX_BURST_ITEM_NUM] = {
> +		{{0}, 0, 0} };
> +
>  	struct rte_mbuf *unprocess_pkts[nb_pkts];
>  	uint32_t item_num;
>  	int32_t ret;
>  	uint16_t i, unprocess_num = 0, nb_after_gro = nb_pkts;
> +	uint8_t do_tcp4_gro = 0, do_vxlan_gro = 0;
>
> -	if (unlikely((param->gro_types & RTE_GRO_TCP_IPV4) == 0))
> +	if (unlikely((param->gro_types & (RTE_GRO_IPV4_VXLAN_TCP_IPV4 |
> +					RTE_GRO_TCP_IPV4)) == 0))
>  		return nb_pkts;
>
>  	/* Get the maximum number of packets */
> @@ -150,22 +172,47 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  			param->max_item_per_flow));
>  	item_num = RTE_MIN(item_num, RTE_GRO_MAX_BURST_ITEM_NUM);
>
> -	for (i = 0; i < item_num; i++)
> -		tcp_flows[i].start_index = INVALID_ARRAY_INDEX;
> +	if (param->gro_types & RTE_GRO_IPV4_VXLAN_TCP_IPV4) {
> +		for (i = 0; i < item_num; i++)
> +			vxlan_flows[i].start_index = INVALID_ARRAY_INDEX;
> +
> +		vxlan_tbl.flows = vxlan_flows;
> +		vxlan_tbl.items = vxlan_items;
> +		vxlan_tbl.flow_num = 0;
> +		vxlan_tbl.item_num = 0;
> +		vxlan_tbl.max_flow_num = item_num;
> +		vxlan_tbl.max_item_num = item_num;
> +		do_vxlan_gro = 1;
> +	}
>
> -	tcp_tbl.flows = tcp_flows;
> -	tcp_tbl.items = tcp_items;
> -	tcp_tbl.flow_num = 0;
> -	tcp_tbl.item_num = 0;
> -	tcp_tbl.max_flow_num = item_num;
> -	tcp_tbl.max_item_num = item_num;
> +	if (param->gro_types & RTE_GRO_TCP_IPV4) {
> +		for (i = 0; i < item_num; i++)
> +			tcp_flows[i].start_index = INVALID_ARRAY_INDEX;
> +
> +		tcp_tbl.flows = tcp_flows;
> +		tcp_tbl.items = tcp_items;
> +		tcp_tbl.flow_num = 0;
> +		tcp_tbl.item_num = 0;
> +		tcp_tbl.max_flow_num = item_num;
> +		tcp_tbl.max_item_num = item_num;
> +		do_tcp4_gro = 1;
> +	}
>
>  	for (i = 0; i < nb_pkts; i++) {
> -		if (IS_IPV4_TCP_PKT(pkts[i]->packet_type)) {
> -			/*
> -			 * The timestamp is ignored, since all packets
> -			 * will be flushed from the tables.
> -			 */
> +		/*
> +		 * The timestamp is ignored, since all packets
> +		 * will be flushed from the tables.
> +		 */
> +		if (IS_IPV4_VXLAN_TCP4_PKT(pkts[i]->packet_type) &&
> +				do_vxlan_gro) {
> +			ret = gro_vxlan_tcp4_reassemble(pkts[i], &vxlan_tbl, 0);
> +			if (ret > 0)
> +				/* Merge successfully */
> +				nb_after_gro--;
> +			else if (ret < 0)
> +				unprocess_pkts[unprocess_num++] = pkts[i];
> +		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
> +				do_tcp4_gro) {
>  			ret = gro_tcp4_reassemble(pkts[i], &tcp_tbl, 0);
>  			if (ret > 0)
>  				/* Merge successfully */
> @@ -177,8 +224,16 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  	}
>
>  	if (nb_after_gro < nb_pkts) {
> +		i = 0;
>  		/* Flush all packets from the tables */
> -		i = gro_tcp4_tbl_timeout_flush(&tcp_tbl, 0, pkts, nb_pkts);
> +		if (do_vxlan_gro) {
> +			i = gro_vxlan_tcp4_tbl_timeout_flush(&vxlan_tbl,
> +					0, pkts, nb_pkts);
> +		}
> +		if (do_tcp4_gro) {
> +			i += gro_tcp4_tbl_timeout_flush(&tcp_tbl, 0,
> +					&pkts[i], nb_pkts - i);
> +		}
>  		/* Copy unprocessed packets */
>  		if (unprocess_num > 0) {
>  			memcpy(&pkts[i], unprocess_pkts,
> @@ -197,18 +252,33 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
>  {
>  	struct rte_mbuf *unprocess_pkts[nb_pkts];
>  	struct gro_ctx *gro_ctx = ctx;
> -	void *tcp_tbl;
> +	void *tcp_tbl, *vxlan_tbl;
>  	uint64_t current_time;
>  	uint16_t i, unprocess_num = 0;
> +	uint8_t do_tcp4_gro, do_vxlan_gro;
>
> -	if (unlikely((gro_ctx->gro_types & RTE_GRO_TCP_IPV4) == 0))
> +	if (unlikely((gro_ctx->gro_types & (RTE_GRO_IPV4_VXLAN_TCP_IPV4 |
> +					RTE_GRO_TCP_IPV4)) == 0))
>  		return nb_pkts;
>
>  	tcp_tbl = gro_ctx->tbls[RTE_GRO_TCP_IPV4_INDEX];
> +	vxlan_tbl = gro_ctx->tbls[RTE_GRO_IPV4_VXLAN_TCP_IPV4_INDEX];
> +
> +	do_tcp4_gro = (gro_ctx->gro_types & RTE_GRO_TCP_IPV4) ==
> +		RTE_GRO_TCP_IPV4;
> +	do_vxlan_gro = (gro_ctx->gro_types & RTE_GRO_IPV4_VXLAN_TCP_IPV4) ==
> +		RTE_GRO_IPV4_VXLAN_TCP_IPV4;
> +
>  	current_time = rte_rdtsc();
>
>  	for (i = 0; i < nb_pkts; i++) {
> -		if (IS_IPV4_TCP_PKT(pkts[i]->packet_type)) {
> +		if (IS_IPV4_VXLAN_TCP4_PKT(pkts[i]->packet_type) &&
> +				do_vxlan_gro) {
> +			if (gro_vxlan_tcp4_reassemble(pkts[i], vxlan_tbl,
> +						current_time) < 0)
> +				unprocess_pkts[unprocess_num++] = pkts[i];
> +		} else if (IS_IPV4_TCP_PKT(pkts[i]->packet_type) &&
> +				do_tcp4_gro) {
>  			if (gro_tcp4_reassemble(pkts[i], tcp_tbl,
>  						current_time) < 0)
>  				unprocess_pkts[unprocess_num++] = pkts[i];
> @@ -232,18 +302,27 @@ rte_gro_timeout_flush(void *ctx,
>  {
>  	struct gro_ctx *gro_ctx = ctx;
>  	uint64_t flush_timestamp;
> +	uint16_t num = 0;
>
>  	gro_types = gro_types & gro_ctx->gro_types;
>  	flush_timestamp = rte_rdtsc() - timeout_cycles;
>
> -	if (gro_types & RTE_GRO_TCP_IPV4) {
> -		return gro_tcp4_tbl_timeout_flush(
> +	if (gro_types & RTE_GRO_IPV4_VXLAN_TCP_IPV4) {
> +		num = gro_vxlan_tcp4_tbl_timeout_flush(gro_ctx->tbls[
> +				RTE_GRO_IPV4_VXLAN_TCP_IPV4_INDEX],
> +				flush_timestamp, out, max_nb_out);
> +		max_nb_out -= num;
> +	}
> +
> +	/* If no available space in 'out', stop flushing. */
> +	if ((gro_types & RTE_GRO_TCP_IPV4) && max_nb_out > 0) {
> +		num += gro_tcp4_tbl_timeout_flush(
>  				gro_ctx->tbls[RTE_GRO_TCP_IPV4_INDEX],
>  				flush_timestamp,
> -				out, max_nb_out);
> +				&out[num], max_nb_out);
>  	}
>
> -	return 0;
> +	return num;
>  }
>
>  uint64_t
> diff --git a/lib/librte_gro/rte_gro.h b/lib/librte_gro/rte_gro.h
> index 36a1e60..5ed72d7 100644
> --- a/lib/librte_gro/rte_gro.h
> +++ b/lib/librte_gro/rte_gro.h
> @@ -51,12 +51,15 @@ extern "C" {
>   */
>  #define RTE_GRO_TYPE_MAX_NUM 64
>  /**< the max number of supported GRO types */
> -#define RTE_GRO_TYPE_SUPPORT_NUM 1
> +#define RTE_GRO_TYPE_SUPPORT_NUM 2
>  /**< the number of currently supported GRO types */
>
>  #define RTE_GRO_TCP_IPV4_INDEX 0
>  #define RTE_GRO_TCP_IPV4 (1ULL << RTE_GRO_TCP_IPV4_INDEX)
>  /**< TCP/IPv4 GRO flag */
> +#define RTE_GRO_IPV4_VXLAN_TCP_IPV4_INDEX 1
> +#define RTE_GRO_IPV4_VXLAN_TCP_IPV4 (1ULL << RTE_GRO_IPV4_VXLAN_TCP_IPV4_INDEX)
> +/**< VxLAN GRO flag. */
>
>  /**
>   * Structure used to create GRO context objects or used to pass
> --
> 2.7.4

Reviewed-by: Junjie Chen

Thanks
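
For readers trying out the new flag, a minimal usage sketch (not from the patch
or this review; the flow/item sizing is an illustrative assumption) of enabling
VxLAN GRO together with TCP/IPv4 GRO through the lightweight
rte_gro_reassemble_burst() path. The PMD must report packet_type, including the
tunnel and inner-header fields, for IS_IPV4_VXLAN_TCP4_PKT() to match:

    #include <rte_gro.h>
    #include <rte_mbuf.h>

    /* Merge a burst of received mbufs in place; returns the new packet count. */
    static uint16_t
    gro_rx_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        struct rte_gro_param param = {
            .gro_types = RTE_GRO_IPV4_VXLAN_TCP_IPV4 | RTE_GRO_TCP_IPV4,
            .max_flow_num = 4,       /* illustrative sizing only */
            .max_item_per_flow = 32, /* illustrative sizing only */
        };

        /* All reassembled packets are flushed back into pkts[] before return. */
        return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
    }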