From: Jiayu Hu <jiayu.hu@intel.com>
To: dev@dpdk.org
Cc: stephen@networkplumber.org, konstantin.ananyev@intel.com,
 jianfeng.tan@intel.com, yliu@fridaylinux.org, jingjing.wu@intel.com,
 keith.wiles@intel.com, tiwei.bie@intel.com, lei.a.yao@intel.com,
 Jiayu Hu <jiayu.hu@intel.com>
Date: Fri, 30 Jun 2017 14:53:37 +0800
Message-Id: <1498805618-63649-3-git-send-email-jiayu.hu@intel.com>
In-Reply-To: <1498805618-63649-1-git-send-email-jiayu.hu@intel.com>
References: <1498733940-117800-1-git-send-email-jiayu.hu@intel.com>
 <1498805618-63649-1-git-send-email-jiayu.hu@intel.com>
Subject: [dpdk-dev] [PATCH v9 2/3] lib/gro: add TCP/IPv4 GRO support

In this patch, we introduce five APIs to support TCP/IPv4 GRO:
- gro_tcp_tbl_create: create a TCP reassembly table, which is used to
  merge packets.
- gro_tcp_tbl_destroy: free the memory space of a TCP reassembly table.
- gro_tcp_tbl_timeout_flush: flush timeout packets from a TCP
  reassembly table.
- gro_tcp_tbl_item_num: return the number of packets in a TCP
  reassembly table.
- gro_tcp4_reassemble: reassemble an incoming TCP/IPv4 packet.
(A usage sketch of these functions is given at the end of this commit
message.)

The TCP/IPv4 GRO API assumes that all input packets have correct IPv4
and TCP checksums, and it doesn't update these checksums for merged
packets. If input packets are IP fragmented, the API assumes they are
complete packets (i.e. with L4 headers).

In TCP GRO, we use a table structure, called the TCP reassembly table,
to reassemble packets. Both TCP/IPv4 and TCP/IPv6 GRO use the same
table structure. A TCP reassembly table includes a key array and an
item array, where the key array keeps the criteria to merge packets
and the item array keeps the packet information.

One key in the key array points to an item group, which consists of
packets that have the same criteria value. If two packets can be
merged, they must be in the same item group.

Each key in the key array includes two parts:
- criteria: the criteria for merging packets. If two packets can be
  merged, they must have the same criteria value.
- start_index: the index of the first incoming packet of the item
  group.

Each element in the item array keeps the information of one packet. It
mainly includes two parts:
- pkt: the packet address
- next_pkt_index: the index of the next packet in the same item group.
  All packets in the same item group are chained by next_pkt_index.
  With next_pkt_index, we can locate all packets in the same item
  group one by one.

Processing an incoming packet takes three steps:
a. check if the packet should be processed. Packets with the following
   properties won't be processed:
   - packets without data (e.g. SYN, SYN-ACK)
b. traverse the key array to find a key which has the same criteria
   value as the incoming packet. If one is found, go to step c.
   Otherwise, insert a new key and insert the packet into the item
   array.
c. locate the first packet of the item group via start_index in the
   key, then traverse all packets in the item group via next_pkt_index.
   If a packet is found that can be merged with the incoming one, merge
   them together. Otherwise, insert the incoming packet into this item
   group.
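The following is a minimal usage sketch of these five functions. It is
an illustration only, not part of the patch; pkt is assumed to be a
valid TCP/IPv4 mbuf with l2_len/l3_len/l4_len already set, and the flow
and array sizes are arbitrary example values.

    struct gro_tcp_tbl *tbl;
    struct rte_mbuf *flushed[64];
    int32_t ret;
    uint16_t n;

    /* a table for at most 4 flows with 16 packets each */
    tbl = gro_tcp_tbl_create(rte_socket_id(), 4, 16);

    /* merge into an existing packet (ret > 0), insert into the table
     * (ret == 0), or leave the packet untouched (ret < 0) */
    ret = gro_tcp4_reassemble(pkt, tbl, 65535, rte_rdtsc());

    printf("packets held in the table: %u\n", gro_tcp_tbl_item_num(tbl));

    /* a timeout of 0 cycles flushes everything in the table; the n
     * flushed mbufs are then owned by the caller again */
    n = gro_tcp_tbl_timeout_flush(tbl, 0, flushed, 64);

    gro_tcp_tbl_destroy(tbl);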
Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
---
 doc/guides/rel_notes/release_17_08.rst |   7 +
 lib/librte_gro/Makefile                |   1 +
 lib/librte_gro/rte_gro.c               | 123 ++++++++--
 lib/librte_gro/rte_gro.h               |   6 +-
 lib/librte_gro/rte_gro_tcp.c           | 395 +++++++++++++++++++++++++++++++++
 lib/librte_gro/rte_gro_tcp.h           | 172 ++++++++++++++
 6 files changed, 690 insertions(+), 14 deletions(-)
 create mode 100644 lib/librte_gro/rte_gro_tcp.c
 create mode 100644 lib/librte_gro/rte_gro_tcp.h

diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 842f46f..f067247 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -75,6 +75,13 @@ New Features
   Added support for firmwares with multiple Ethernet ports per
   physical port.
 
+* **Add Generic Receive Offload API support.**
+
+  The Generic Receive Offload (GRO) API supports reassembling TCP/IPv4
+  packets. The GRO API assumes all input packets have correct checksums
+  and doesn't update checksums for merged packets. If input packets are
+  IP fragmented, the GRO API assumes they are complete packets (i.e.
+  with L4 headers).
 
 Resolved Issues
 ---------------
diff --git a/lib/librte_gro/Makefile b/lib/librte_gro/Makefile
index 7e0f128..e89344d 100644
--- a/lib/librte_gro/Makefile
+++ b/lib/librte_gro/Makefile
@@ -43,6 +43,7 @@ LIBABIVER := 1
 
 # source files
 SRCS-$(CONFIG_RTE_LIBRTE_GRO) += rte_gro.c
+SRCS-$(CONFIG_RTE_LIBRTE_GRO) += rte_gro_tcp.c
 
 # install this header file
 SYMLINK-$(CONFIG_RTE_LIBRTE_GRO)-include += rte_gro.h
diff --git a/lib/librte_gro/rte_gro.c b/lib/librte_gro/rte_gro.c
index 648835b..993cf29 100644
--- a/lib/librte_gro/rte_gro.c
+++ b/lib/librte_gro/rte_gro.c
@@ -32,8 +32,11 @@
 #include
 #include
+#include
+#include
 
 #include "rte_gro.h"
+#include "rte_gro_tcp.h"
 
 typedef void *(*gro_tbl_create_fn)(uint16_t socket_id,
         uint16_t max_flow_num,
@@ -41,9 +44,12 @@ typedef void *(*gro_tbl_create_fn)(uint16_t socket_id,
 typedef void (*gro_tbl_destroy_fn)(void *tbl);
 typedef uint32_t (*gro_tbl_item_num_fn)(void *tbl);
 
-static gro_tbl_create_fn tbl_create_functions[RTE_GRO_TYPE_MAX_NUM];
-static gro_tbl_destroy_fn tbl_destroy_functions[RTE_GRO_TYPE_MAX_NUM];
-static gro_tbl_item_num_fn tbl_item_num_functions[RTE_GRO_TYPE_MAX_NUM];
+static gro_tbl_create_fn tbl_create_functions[RTE_GRO_TYPE_MAX_NUM] = {
+        gro_tcp_tbl_create, NULL};
+static gro_tbl_destroy_fn tbl_destroy_functions[RTE_GRO_TYPE_MAX_NUM] = {
+        gro_tcp_tbl_destroy, NULL};
+static gro_tbl_item_num_fn tbl_item_num_functions[
+        RTE_GRO_TYPE_MAX_NUM] = {gro_tcp_tbl_item_num, NULL};
 
 /**
  * GRO table, which is used to merge packets. It keeps many reassembly
@@ -130,27 +136,118 @@ void rte_gro_tbl_destroy(void *tbl)
 }
 
 uint16_t
-rte_gro_reassemble_burst(struct rte_mbuf **pkts __rte_unused,
+rte_gro_reassemble_burst(struct rte_mbuf **pkts,
         uint16_t nb_pkts,
-        const struct rte_gro_param *param __rte_unused)
+        const struct rte_gro_param *param)
 {
-    return nb_pkts;
+    uint16_t i;
+    uint16_t nb_after_gro = nb_pkts;
+    uint32_t item_num;
+
+    /* allocate a reassembly table for TCP/IPv4 GRO */
+    struct gro_tcp_tbl tcp_tbl;
+    struct gro_tcp_key tcp_keys[RTE_GRO_MAX_BURST_ITEM_NUM] = {0};
+    struct gro_tcp_item tcp_items[RTE_GRO_MAX_BURST_ITEM_NUM] = {0};
+
+    struct rte_mbuf *unprocess_pkts[nb_pkts];
+    uint16_t unprocess_num = 0;
+    int32_t ret;
+    uint64_t current_time;
+
+    if ((param->desired_gro_types & RTE_GRO_TCP_IPV4) == 0)
+        return nb_pkts;
+
+    /* get the actual number of items */
+    item_num = RTE_MIN(nb_pkts, (param->max_flow_num *
+            param->max_item_per_flow));
+    item_num = RTE_MIN(item_num, RTE_GRO_MAX_BURST_ITEM_NUM);
+
+    tcp_tbl.keys = tcp_keys;
+    tcp_tbl.items = tcp_items;
+    tcp_tbl.key_num = 0;
+    tcp_tbl.item_num = 0;
+    tcp_tbl.max_key_num = item_num;
+    tcp_tbl.max_item_num = item_num;
+
+    current_time = rte_rdtsc();
+
+    for (i = 0; i < nb_pkts; i++) {
+        if (RTE_ETH_IS_IPV4_HDR(pkts[i]->packet_type) &&
+                (pkts[i]->packet_type & RTE_PTYPE_L4_TCP)) {
+            ret = gro_tcp4_reassemble(pkts[i],
+                    &tcp_tbl,
+                    param->max_packet_size,
+                    current_time);
+            if (ret > 0)
+                /* merge successfully */
+                nb_after_gro--;
+            else if (ret < 0)
+                unprocess_pkts[unprocess_num++] =
+                    pkts[i];
+        } else
+            unprocess_pkts[unprocess_num++] =
+                pkts[i];
+    }
+
+    /* re-arrange GROed packets */
+    if (nb_after_gro < nb_pkts) {
+        i = gro_tcp_tbl_timeout_flush(&tcp_tbl, 0,
+                pkts, nb_pkts);
+        if (unprocess_num > 0)
+            memcpy(&pkts[i], unprocess_pkts,
+                    sizeof(struct rte_mbuf *) *
+                    unprocess_num);
+    }
+    return nb_after_gro;
 }
 
 uint16_t
-rte_gro_reassemble(struct rte_mbuf **pkts __rte_unused,
+rte_gro_reassemble(struct rte_mbuf **pkts,
         uint16_t nb_pkts,
-        void *tbl __rte_unused)
+        void *tbl)
 {
-    return nb_pkts;
+    uint16_t i, unprocess_num = 0;
+    struct rte_mbuf *unprocess_pkts[nb_pkts];
+    struct gro_tbl *gro_tbl = (struct gro_tbl *)tbl;
+    uint64_t current_time;
+
+    if ((gro_tbl->desired_gro_types & RTE_GRO_TCP_IPV4) == 0)
+        return nb_pkts;
+
+    current_time = rte_rdtsc();
+    for (i = 0; i < nb_pkts; i++) {
+        if (RTE_ETH_IS_IPV4_HDR(pkts[i]->packet_type) &&
+                (pkts[i]->packet_type & RTE_PTYPE_L4_TCP)) {
+            if (gro_tcp4_reassemble(pkts[i],
+                        gro_tbl->tbls[RTE_GRO_TCP_IPV4_INDEX],
+                        gro_tbl->max_packet_size,
+                        current_time) < 0)
+                unprocess_pkts[unprocess_num++] = pkts[i];
+        } else
+            unprocess_pkts[unprocess_num++] = pkts[i];
+    }
+    if (unprocess_num > 0)
+        memcpy(pkts, unprocess_pkts,
+                sizeof(struct rte_mbuf *) * unprocess_num);
+
+    return unprocess_num;
 }
 
 uint16_t
-rte_gro_timeout_flush(void *tbl __rte_unused,
-        uint64_t desired_gro_types __rte_unused,
-        struct rte_mbuf **out __rte_unused,
-        uint16_t max_nb_out __rte_unused)
+rte_gro_timeout_flush(void *tbl,
+        uint64_t desired_gro_types,
+        struct rte_mbuf **out,
+        uint16_t max_nb_out)
 {
+    struct gro_tbl *gro_tbl = (struct gro_tbl *)tbl;
+
+    desired_gro_types = desired_gro_types &
+        gro_tbl->desired_gro_types;
+    if (desired_gro_types & RTE_GRO_TCP_IPV4)
+        return gro_tcp_tbl_timeout_flush(
+                gro_tbl->tbls[RTE_GRO_TCP_IPV4_INDEX],
+                gro_tbl->max_timeout_cycles,
+                out, max_nb_out);
     return 0;
 }
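For context, a rough sketch of how an application might drive the
lightweight mode implemented above. This is not part of the patch;
port_id, queue_id, and the parameter values are placeholders, and the
rte_gro_param layout is defined in patch 1/3, so treat the exact
initialization as an assumption.

    struct rte_gro_param param = {
        .desired_gro_types = RTE_GRO_TCP_IPV4,
        .max_flow_num = 64,
        .max_item_per_flow = 32,
        .max_packet_size = 65535,
    };
    struct rte_mbuf *pkts[32];
    uint16_t nb_rx, nb_pkts;

    nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
    /* pkts[] is rearranged in place; the return value is the number of
     * packets left after merging */
    nb_pkts = rte_gro_reassemble_burst(pkts, nb_rx, &param);
    rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);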
diff --git a/lib/librte_gro/rte_gro.h b/lib/librte_gro/rte_gro.h
index 02c9113..2cd06ee 100644
--- a/lib/librte_gro/rte_gro.h
+++ b/lib/librte_gro/rte_gro.h
@@ -45,7 +45,11 @@ extern "C" {
 
 /* max number of supported GRO types */
 #define RTE_GRO_TYPE_MAX_NUM 64
-#define RTE_GRO_TYPE_SUPPORT_NUM 0 /**< current supported GRO num */
+#define RTE_GRO_TYPE_SUPPORT_NUM 1 /**< current supported GRO num */
+
+/* TCP/IPv4 GRO flag */
+#define RTE_GRO_TCP_IPV4_INDEX 0
+#define RTE_GRO_TCP_IPV4 (1ULL << RTE_GRO_TCP_IPV4_INDEX)
 
 struct rte_gro_param {
diff --git a/lib/librte_gro/rte_gro_tcp.c b/lib/librte_gro/rte_gro_tcp.c
new file mode 100644
index 0000000..cf5cea2
--- /dev/null
+++ b/lib/librte_gro/rte_gro_tcp.c
@@ -0,0 +1,395 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#include "rte_gro_tcp.h"
+
+void *gro_tcp_tbl_create(uint16_t socket_id,
+        uint16_t max_flow_num,
+        uint16_t max_item_per_flow)
+{
+    size_t size;
+    uint32_t entries_num;
+    struct gro_tcp_tbl *tbl;
+
+    entries_num = max_flow_num * max_item_per_flow;
+    entries_num = entries_num > GRO_TCP_TBL_MAX_ITEM_NUM ?
+        GRO_TCP_TBL_MAX_ITEM_NUM : entries_num;
+
+    if (entries_num == 0)
+        return NULL;
+
+    tbl = (struct gro_tcp_tbl *)rte_zmalloc_socket(
+            __func__,
+            sizeof(struct gro_tcp_tbl),
+            RTE_CACHE_LINE_SIZE,
+            socket_id);
+    if (tbl == NULL)
+        return NULL;
+
+    size = sizeof(struct gro_tcp_item) * entries_num;
+    tbl->items = (struct gro_tcp_item *)rte_zmalloc_socket(
+            __func__,
+            size,
+            RTE_CACHE_LINE_SIZE,
+            socket_id);
+    if (tbl->items == NULL) {
+        rte_free(tbl);
+        return NULL;
+    }
+    tbl->max_item_num = entries_num;
+
+    size = sizeof(struct gro_tcp_key) * entries_num;
+    tbl->keys = (struct gro_tcp_key *)rte_zmalloc_socket(
+            __func__,
+            size, RTE_CACHE_LINE_SIZE,
+            socket_id);
+    if (tbl->keys == NULL) {
+        rte_free(tbl->items);
+        rte_free(tbl);
+        return NULL;
+    }
+    tbl->max_key_num = entries_num;
+    return tbl;
+}
+
+void gro_tcp_tbl_destroy(void *tbl)
+{
+    struct gro_tcp_tbl *tcp_tbl = (struct gro_tcp_tbl *)tbl;
+
+    if (tcp_tbl) {
+        rte_free(tcp_tbl->items);
+        rte_free(tcp_tbl->keys);
+    }
+    rte_free(tcp_tbl);
+}
+
+static struct rte_mbuf *get_mbuf_lastseg(struct rte_mbuf *pkt)
+{
+    struct rte_mbuf *lastseg = pkt;
+
+    while (lastseg->next)
+        lastseg = lastseg->next;
+
+    return lastseg;
+}
+
+/**
+ * merge two TCP/IPv4 packets without updating checksums.
+ */
+static int
+merge_two_tcp4_packets(struct gro_tcp_item *item_src,
+        struct rte_mbuf *pkt,
+        uint32_t max_packet_size)
+{
+    struct ipv4_hdr *ipv4_hdr1, *ipv4_hdr2;
+    uint16_t tcp_dl1;
+    struct rte_mbuf *pkt_src = item_src->pkt;
+
+    /* parse the given packet */
+    ipv4_hdr1 = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
+                char *) + pkt->l2_len);
+    tcp_dl1 = rte_be_to_cpu_16(ipv4_hdr1->total_length) -
+        pkt->l3_len - pkt->l4_len;
+
+    /* parse the original packet */
+    ipv4_hdr2 = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt_src,
+                char *) + pkt_src->l2_len);
+
+    if (pkt_src->pkt_len + tcp_dl1 > max_packet_size)
+        return -1;
+
+    /* remove the header of the incoming packet */
+    rte_pktmbuf_adj(pkt, pkt->l2_len + pkt->l3_len + pkt->l4_len);
+
+    /* chain the two packets together and update lastseg */
+    item_src->lastseg->next = pkt;
+    item_src->lastseg = get_mbuf_lastseg(pkt);
+
+    /* update IP header */
+    ipv4_hdr2->total_length = rte_cpu_to_be_16(
+            rte_be_to_cpu_16(
+                ipv4_hdr2->total_length)
+            + tcp_dl1);
+
+    /* update mbuf metadata for the merged packet */
+    pkt_src->nb_segs += pkt->nb_segs;
+    pkt_src->pkt_len += pkt->pkt_len;
+    return 1;
+}
+
+static int
+check_seq_option(struct rte_mbuf *pkt,
+        struct tcp_hdr *tcp_hdr,
+        uint16_t tcp_hl)
+{
+    struct ipv4_hdr *ipv4_hdr1;
+    struct tcp_hdr *tcp_hdr1;
+    uint16_t tcp_hl1, tcp_dl1;
+    uint32_t sent_seq1, sent_seq;
+    uint16_t len;
+    int ret = -1;
+
+    ipv4_hdr1 = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt,
+                char *) + pkt->l2_len);
+    tcp_hl1 = pkt->l4_len;
+    tcp_hdr1 = (struct tcp_hdr *)((char *)ipv4_hdr1 + pkt->l3_len);
+    tcp_dl1 = rte_be_to_cpu_16(ipv4_hdr1->total_length) -
+        pkt->l3_len - tcp_hl1;
+    sent_seq1 = rte_be_to_cpu_32(tcp_hdr1->sent_seq) + tcp_dl1;
+    sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
+
+    /* check if the two packets are neighbors */
+    if (sent_seq == sent_seq1) {
+        ret = 1;
+        len = RTE_MAX(tcp_hl, tcp_hl1) - sizeof(struct tcp_hdr);
+        /* check if the TCP option fields are equal */
+        if ((tcp_hl1 != tcp_hl) || ((len > 0) &&
+                    (memcmp(tcp_hdr1 + 1,
+                            tcp_hdr + 1,
+                            len) != 0)))
+            ret = -1;
+    }
+    return ret;
+}
+
+static uint32_t
+find_an_empty_item(struct gro_tcp_tbl *tbl)
+{
+    uint32_t i;
+
+    for (i = 0; i < tbl->max_item_num; i++)
+        if (tbl->items[i].pkt == NULL)
+            return i;
+    return INVALID_ARRAY_INDEX;
+}
+
+static uint32_t
+find_an_empty_key(struct gro_tcp_tbl *tbl)
+{
+    uint32_t i;
+
+    for (i = 0; i < tbl->max_key_num; i++)
+        if (tbl->keys[i].is_valid == 0)
+            return i;
+    return INVALID_ARRAY_INDEX;
+}
+
+int32_t
+gro_tcp4_reassemble(struct rte_mbuf *pkt,
+        struct gro_tcp_tbl *tbl,
+        uint32_t max_packet_size,
+        uint64_t start_time)
+{
+    struct ether_hdr *eth_hdr;
+    struct ipv4_hdr *ipv4_hdr;
+    struct tcp_hdr *tcp_hdr;
+    uint16_t tcp_dl;
+
+    struct tcp_key key;
+    uint32_t cur_idx, prev_idx, item_idx;
+    uint32_t i, key_idx;
+
+    eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
+    ipv4_hdr = (struct ipv4_hdr *)((char *)eth_hdr + pkt->l2_len);
+
+    /* check if the packet should be processed */
+    if (pkt->l3_len < sizeof(struct ipv4_hdr))
+        return -1;
+    tcp_hdr = (struct tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
+    tcp_dl = rte_be_to_cpu_16(ipv4_hdr->total_length) - pkt->l3_len
+        - pkt->l4_len;
+    if (tcp_dl == 0)
+        return -1;
+
+    /* find a key and traverse all packets in its item group */
+    key.eth_saddr = eth_hdr->s_addr;
+    key.eth_daddr = eth_hdr->d_addr;
+    key.ip_src_addr[0] = rte_be_to_cpu_32(ipv4_hdr->src_addr);
+    key.ip_dst_addr[0] = rte_be_to_cpu_32(ipv4_hdr->dst_addr);
+    key.src_port = rte_be_to_cpu_16(tcp_hdr->src_port);
+    key.dst_port = rte_be_to_cpu_16(tcp_hdr->dst_port);
+    key.recv_ack = rte_be_to_cpu_32(tcp_hdr->recv_ack);
+    key.tcp_flags = tcp_hdr->tcp_flags;
+
+    for (i = 0; i < tbl->max_key_num; i++) {
+        /* search for a key */
+        if ((tbl->keys[i].is_valid == 0) ||
+                (memcmp(&(tbl->keys[i].key), &key,
+                        sizeof(struct tcp_key)) != 0))
+            continue;
+
+        cur_idx = tbl->keys[i].start_index;
+        prev_idx = cur_idx;
+        while (cur_idx != INVALID_ARRAY_INDEX) {
+            if (check_seq_option(tbl->items[cur_idx].pkt,
+                        tcp_hdr,
+                        pkt->l4_len) > 0) {
+                if (merge_two_tcp4_packets(
+                            &(tbl->items[cur_idx]),
+                            pkt,
+                            max_packet_size) > 0)
+                    return 1;
+                /**
+                 * failed to merge the two packets since the
+                 * result would exceed the max packet length.
+                 * Insert the packet into the item group.
+                 */
+                item_idx = find_an_empty_item(tbl);
+                if (item_idx == INVALID_ARRAY_INDEX)
+                    return -1;
+                tbl->items[prev_idx].next_pkt_idx = item_idx;
+                tbl->items[item_idx].pkt = pkt;
+                tbl->items[item_idx].lastseg =
+                    get_mbuf_lastseg(pkt);
+                tbl->items[item_idx].next_pkt_idx =
+                    INVALID_ARRAY_INDEX;
+                tbl->items[item_idx].start_time = start_time;
+                tbl->item_num++;
+                return 0;
+            }
+            prev_idx = cur_idx;
+            cur_idx = tbl->items[cur_idx].next_pkt_idx;
+        }
+        /**
+         * found a corresponding item group but failed to find
+         * a packet to merge with. Insert the packet into this
+         * item group.
+         */
+        item_idx = find_an_empty_item(tbl);
+        if (item_idx == INVALID_ARRAY_INDEX)
+            return -1;
+        tbl->items[prev_idx].next_pkt_idx = item_idx;
+        tbl->items[item_idx].pkt = pkt;
+        tbl->items[item_idx].lastseg =
+            get_mbuf_lastseg(pkt);
+        tbl->items[item_idx].next_pkt_idx = INVALID_ARRAY_INDEX;
+        tbl->items[item_idx].start_time = start_time;
+        tbl->item_num++;
+        return 0;
+    }
+
+    /**
+     * merge fails as the given packet has
+     * a new key. So insert a new key.
+     */
+    item_idx = find_an_empty_item(tbl);
+    key_idx = find_an_empty_key(tbl);
+    /**
+     * if the current key or item number exceeds the max
+     * value, don't insert the packet into the table and return
+     * immediately.
+     */
+    if (item_idx == INVALID_ARRAY_INDEX ||
+            key_idx == INVALID_ARRAY_INDEX)
+        return -1;
+    tbl->items[item_idx].pkt = pkt;
+    tbl->items[item_idx].lastseg = get_mbuf_lastseg(pkt);
+    tbl->items[item_idx].next_pkt_idx = INVALID_ARRAY_INDEX;
+    tbl->items[item_idx].start_time = start_time;
+    tbl->item_num++;
+
+    memcpy(&(tbl->keys[key_idx].key),
+            &key, sizeof(struct tcp_key));
+    tbl->keys[key_idx].start_index = item_idx;
+    tbl->keys[key_idx].is_valid = 1;
+    tbl->key_num++;
+
+    return 0;
+}
+
+uint16_t
+gro_tcp_tbl_timeout_flush(struct gro_tcp_tbl *tbl,
+        uint64_t timeout_cycles,
+        struct rte_mbuf **out,
+        uint16_t nb_out)
+{
+    uint16_t k = 0;
+    uint32_t i, j;
+    uint64_t current_time;
+
+    current_time = rte_rdtsc();
+    for (i = 0; i < tbl->max_key_num; i++) {
+        /* all keys have been checked, return immediately */
+        if (tbl->key_num == 0)
+            return k;
+
+        if (tbl->keys[i].is_valid == 0)
+            continue;
+
+        j = tbl->keys[i].start_index;
+        while (j != INVALID_ARRAY_INDEX) {
+            if (current_time - tbl->items[j].start_time >=
+                    timeout_cycles) {
+                out[k++] = tbl->items[j].pkt;
+                tbl->items[j].pkt = NULL;
+                tbl->item_num--;
+                j = tbl->items[j].next_pkt_idx;
+
+                /**
+                 * delete the key as all of
+                 * its packets are flushed.
+                 */
+                if (j == INVALID_ARRAY_INDEX) {
+                    tbl->keys[i].is_valid = 0;
+                    tbl->key_num--;
+                } else
+                    /* update start_index of the key */
+                    tbl->keys[i].start_index = j;
+
+                if (k == nb_out)
+                    return k;
+            } else
+                /**
+                 * the remaining packets of this key won't
+                 * time out, so go on to check other keys.
+                 */
+                break;
+        }
+    }
+    return k;
+}
+
+uint32_t gro_tcp_tbl_item_num(void *tbl)
+{
+    struct gro_tcp_tbl *gro_tbl = (struct gro_tcp_tbl *)tbl;
+
+    if (gro_tbl)
+        return gro_tbl->item_num;
+    return 0;
+}
diff --git a/lib/librte_gro/rte_gro_tcp.h b/lib/librte_gro/rte_gro_tcp.h
new file mode 100644
index 0000000..2000318
--- /dev/null
+++ b/lib/librte_gro/rte_gro_tcp.h
@@ -0,0 +1,172 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_GRO_TCP_H_
+#define _RTE_GRO_TCP_H_
+
+#define INVALID_ARRAY_INDEX 0xffffffffUL
+#define GRO_TCP_TBL_MAX_ITEM_NUM (UINT32_MAX - 1)
+
+/* criteria for merging packets */
+struct tcp_key {
+    struct ether_addr eth_saddr;
+    struct ether_addr eth_daddr;
+    uint32_t ip_src_addr[4];    /**< IPv4 uses the first 4B */
+    uint32_t ip_dst_addr[4];
+
+    uint32_t recv_ack;    /**< acknowledgment sequence number. */
+    uint16_t src_port;
+    uint16_t dst_port;
+    uint8_t tcp_flags;    /**< TCP flags. */
+};
+
+struct gro_tcp_key {
+    struct tcp_key key;
+    uint32_t start_index;    /**< the first packet index of the flow */
+    uint8_t is_valid;
+};
+
+struct gro_tcp_item {
+    struct rte_mbuf *pkt;    /**< packet address. */
+    struct rte_mbuf *lastseg;    /**< last segment of the packet */
+    /* the time when the packet is added into the table */
+    uint64_t start_time;
+    uint32_t next_pkt_idx;    /**< next packet index. */
+};
+
+/**
+ * TCP reassembly table. Both TCP/IPv4 and TCP/IPv6 use the same table
+ * structure.
+ */
+struct gro_tcp_tbl {
+    struct gro_tcp_item *items;    /**< item array */
+    struct gro_tcp_key *keys;    /**< key array */
+    uint32_t item_num;    /**< current item number */
+    uint32_t key_num;    /**< current key number */
+    uint32_t max_item_num;    /**< item array size */
+    uint32_t max_key_num;    /**< key array size */
+};
+
+/**
+ * This function creates a TCP reassembly table.
+ *
+ * @param socket_id
+ *  the socket index where the Ethernet port connects to.
+ * @param max_flow_num
+ *  the maximum number of flows in the TCP GRO table.
+ * @param max_item_per_flow
+ *  the maximum packet number per flow.
+ * @return
+ *  if created successfully, return a pointer to the created TCP GRO
+ *  table. Otherwise, return NULL.
+ */
+void *gro_tcp_tbl_create(uint16_t socket_id,
+        uint16_t max_flow_num,
+        uint16_t max_item_per_flow);
+
+/**
+ * This function destroys a TCP reassembly table.
+ * @param tbl
+ *  a pointer that points to the TCP reassembly table.
+ */
+void gro_tcp_tbl_destroy(void *tbl);
+
+/**
+ * This function searches for a packet in the TCP reassembly table to
+ * merge with the input one. Merging two packets means chaining them
+ * together and updating the packet headers. If the packet has no data
+ * (e.g. a SYN or SYN-ACK packet), this function returns immediately.
+ * Otherwise, the packet is either merged or inserted into the table.
+ * Besides, if there is no available space to insert the packet, this
+ * function also returns immediately.
+ *
+ * This function assumes the input packet has correct IPv4 and TCP
+ * checksums, and it won't re-calculate them when two packets are
+ * merged. Besides, if the input packet is IP fragmented, it assumes
+ * the packet is complete (with a TCP header).
+ *
+ * @param pkt
+ *  packet to reassemble.
+ * @param tbl
+ *  a pointer that points to a TCP reassembly table.
+ * @param max_packet_size
+ *  max packet length after merging.
+ * @param start_time
+ *  the start time that the packet is inserted into the table.
+ * @return
+ *  if the packet doesn't have data, or there is no available space
+ *  in the table to insert a new item or a new key, return a negative
+ *  value. If the packet is merged successfully, return a positive
+ *  value. If the packet is inserted into the table, return 0.
+ */
+int32_t
+gro_tcp4_reassemble(struct rte_mbuf *pkt,
+        struct gro_tcp_tbl *tbl,
+        uint32_t max_packet_size,
+        uint64_t start_time);
+
+/**
+ * This function flushes timeout packets in a TCP reassembly table to
+ * applications, without updating checksums for merged packets. The
+ * max number of flushed timeout packets is the element number of the
+ * array which is used to keep the flushed packets.
+ *
+ * @param tbl
+ *  a pointer that points to a TCP GRO table.
+ * @param timeout_cycles
+ *  the maximum time that packets can stay in the table.
+ * @param out
+ *  pointer array which is used to keep flushed packets.
+ * @param nb_out
+ *  the element number of out. It's also the max number of timeout
+ *  packets that can be flushed finally.
+ * @return
+ *  the number of packets that are returned.
+ */
+uint16_t
+gro_tcp_tbl_timeout_flush(struct gro_tcp_tbl *tbl,
+        uint64_t timeout_cycles,
+        struct rte_mbuf **out,
+        uint16_t nb_out);
+
+/**
+ * This function returns the number of packets in a TCP reassembly
+ * table.
+ *
+ * @param tbl
+ *  a pointer that points to a TCP reassembly table.
+ * @return
+ *  the number of packets in the table.
+ */
+uint32_t
+gro_tcp_tbl_item_num(void *tbl);
+#endif
-- 
2.7.4
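A closing note on the heavyweight (table) mode, as a rough sketch only:
rte_gro_tbl_create() comes from patch 1/3, so its exact signature is an
assumption here; BURST, port_id and queue_id are placeholders, and param
reuses the rte_gro_param initialization from the earlier sketch.

    void *gro_tbl = rte_gro_tbl_create(&param);    /* signature assumed from patch 1/3 */
    struct rte_mbuf *pkts[BURST], *flush_pkts[BURST];
    uint16_t nb_rx, nb_unprocess, nb_flush;

    for (;;) {
        nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST);
        /* merged/inserted packets stay in gro_tbl; unprocessed ones
         * are moved to the front of pkts[] */
        nb_unprocess = rte_gro_reassemble(pkts, nb_rx, gro_tbl);
        rte_eth_tx_burst(port_id, queue_id, pkts, nb_unprocess);

        /* hand back TCP/IPv4 packets that have stayed in the table
         * longer than the configured timeout */
        nb_flush = rte_gro_timeout_flush(gro_tbl, RTE_GRO_TCP_IPV4,
                flush_pkts, BURST);
        rte_eth_tx_burst(port_id, queue_id, flush_pkts, nb_flush);
    }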