From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id 955743B5 for ; Wed, 22 Mar 2017 10:32:14 +0100 (CET) DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=intel.com; i=@intel.com; q=dns/txt; s=intel; t=1490175134; x=1521711134; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=w0Yqfg5ecnMtauigcyK4kJ5MMcgs3TFRjaZllUAO+4I=; b=XGvChd1ndp+pZFcXYA2/q2t+e5PCYxGF2vJszhHO3p9ESUs8MLAsdSR0 mxxi33AH0SOLvDZ1LfncMWK6kckDlQ==; Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 22 Mar 2017 02:32:14 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.36,204,1486454400"; d="scan'208";a="79751599" Received: from unknown (HELO localhost.localdomain.sh.intel.com) ([10.239.128.234]) by fmsmga006.fm.intel.com with ESMTP; 22 Mar 2017 02:32:13 -0700 From: Jiayu Hu To: dev@dpdk.org Cc: yuanhan.liu@linux.intel.com, Jiayu Hu Date: Wed, 22 Mar 2017 17:32:16 +0800 Message-Id: <1490175137-108413-2-git-send-email-jiayu.hu@intel.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1490175137-108413-1-git-send-email-jiayu.hu@intel.com> References: <1490175137-108413-1-git-send-email-jiayu.hu@intel.com> Subject: [dpdk-dev] [PATCH 1/2] lib: add Generic Receive Offload support for TCP IPv4 packets X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 22 Mar 2017 09:32:15 -0000 Introduce two new functions to support TCP IPv4 GRO: - rte_gro_tcp4_tbl_create: create a lookup table for TCP IPv4 GRO. - rte_gro_tcp4_reassemble_burst: reassemble a bulk of TCP IPv4 packets at a time. rte_gro_tcp4_reassemble_burst works in burst-mode, which processes a bulk of packets at a time. 
That is, applications are in charge of classifying and accumulating TCP IPv4 packets before calling it. If applications pass non-TCP IPv4 packets, rte_gro_tcp4_reassemble_burst won't process them.

Before using rte_gro_tcp4_reassemble_burst, applications need to create TCP IPv4 lookup tables via rte_gro_tcp4_tbl_create, which rte_gro_tcp4_reassemble_burst then uses. The TCP IPv4 lookup table is a cuckoo hash table, whose keys are the rules for merging TCP IPv4 packets and whose values point to item-lists. Each item-list holds the items that share the same key.

Processing an incoming packet takes the following four steps:
a. Check if the packet should be processed. TCP IPv4 GRO doesn't process the following types of packets:
   - non TCP-IPv4 packets
   - packets without data
   - packets with wrong checksums
   - fragmented packets
b. Look up the hash table to find an item-list, which stores packets that may be merged with the incoming packet.
c. If the lookup succeeds, check all items in the item-list. If one of them is a neighbor of the incoming packet, chain the two packets together and update the packet header fields and mbuf metadata; if none is, allocate a new item for the incoming packet and insert it into the item-list.
d. If the lookup fails, allocate a new item-list for the incoming packet and insert it into the hash table.

After processing all packets, update the checksums of the merged ones, and clear the contents of the lookup table.
Signed-off-by: Jiayu Hu --- config/common_base | 5 + lib/Makefile | 1 + lib/librte_gro/Makefile | 50 +++++++ lib/librte_gro/rte_gro_tcp.c | 301 +++++++++++++++++++++++++++++++++++++++++++ lib/librte_gro/rte_gro_tcp.h | 114 ++++++++++++++++ mk/rte.app.mk | 1 + 6 files changed, 472 insertions(+) create mode 100644 lib/librte_gro/Makefile create mode 100644 lib/librte_gro/rte_gro_tcp.c create mode 100644 lib/librte_gro/rte_gro_tcp.h diff --git a/config/common_base b/config/common_base index 37aa1e1..29475ad 100644 --- a/config/common_base +++ b/config/common_base @@ -609,6 +609,11 @@ CONFIG_RTE_LIBRTE_VHOST_DEBUG=n CONFIG_RTE_LIBRTE_PMD_VHOST=n # +# Compile GRO library +# +CONFIG_RTE_LIBRTE_GRO=y + +# #Compile Xen domain0 support # CONFIG_RTE_LIBRTE_XEN_DOM0=n diff --git a/lib/Makefile b/lib/Makefile index 4178325..0665f58 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -59,6 +59,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TABLE) += librte_table DIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += librte_pipeline DIRS-$(CONFIG_RTE_LIBRTE_REORDER) += librte_reorder DIRS-$(CONFIG_RTE_LIBRTE_PDUMP) += librte_pdump +DIRS-$(CONFIG_RTE_LIBRTE_GRO) += librte_gro ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y) DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni diff --git a/lib/librte_gro/Makefile b/lib/librte_gro/Makefile new file mode 100644 index 0000000..71bdb04 --- /dev/null +++ b/lib/librte_gro/Makefile @@ -0,0 +1,50 @@ +# BSD LICENSE +# +# Copyright(c) 2010-2014 Intel Corporation. All rights reserved. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# +# * Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. 
+# * Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in +# the documentation and/or other materials provided with the +# distribution. +# * Neither the name of Intel Corporation nor the names of its +# contributors may be used to endorse or promote products derived +# from this software without specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +include $(RTE_SDK)/mk/rte.vars.mk + +# library name +LIB = librte_gro.a + +CFLAGS += -O3 +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) + +EXPORT_MAP := rte_gro_version.map + +LIBABIVER := 1 + +#source files +SRCS-$(CONFIG_RTE_LIBRTE_GRO) += rte_gro_tcp.c + +# install this header file +SYMLINK-$(CONFIG_RTE_LIBRTE_GRO)-include += rte_gro_tcp.h + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_gro/rte_gro_tcp.c b/lib/librte_gro/rte_gro_tcp.c new file mode 100644 index 0000000..9fd3efe --- /dev/null +++ b/lib/librte_gro/rte_gro_tcp.c @@ -0,0 +1,301 @@ +#include "rte_gro_tcp.h" + +struct rte_hash * +rte_gro_tcp4_tbl_create(char *name, + uint32_t nb_entries, uint16_t socket_id) +{ + struct rte_hash_parameters ht_param = { + .entries = nb_entries, + .name = name, + .key_len = sizeof(struct gro_tcp4_pre_rules), + .hash_func = rte_jhash, + .hash_func_init_val = 0, + .socket_id = socket_id, + }; + struct rte_hash *tbl; + + tbl = rte_hash_create(&ht_param); + if (tbl == NULL) + printf("GRO TCP4: allocate hash table fail\n"); + return tbl; +} + +/* update TCP IPv4 checksum */ +static void +gro_tcp4_cksum_update(struct rte_mbuf *pkt) +{ + uint32_t len, offset, cksum; + struct ether_hdr *eth_hdr; + struct ipv4_hdr *ipv4_hdr; + struct tcp_hdr *tcp_hdr; + uint16_t ipv4_ihl, cksum_pld; + + if (pkt == NULL) + return; + + len = pkt->pkt_len; + eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *); + ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1); + ipv4_ihl = IPv4_HDR_LEN(ipv4_hdr); + tcp_hdr = (struct tcp_hdr *)((char *)ipv4_hdr + ipv4_ihl); + + offset = sizeof(struct ether_hdr) + ipv4_ihl; + len -= offset; + + /* TCP cksum without IP pseudo header */ + ipv4_hdr->hdr_checksum = 0; + tcp_hdr->cksum = 0; + if (rte_raw_cksum_mbuf(pkt, offset, len, &cksum_pld) < 0) { + printf("invalid param for raw_cksum_mbuf\n"); + return; + } + /* IP pseudo header cksum */ + cksum = cksum_pld; + cksum += rte_ipv4_phdr_cksum(ipv4_hdr, 0); + + /* combine TCP checksum and IP pseudo header checksum */ + 
cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff); + cksum = (~cksum) & 0xffff; + cksum = (cksum == 0) ? 0xffff : cksum; + tcp_hdr->cksum = cksum; + + /* update IP header cksum */ + ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr); +} + +/** + * This function traverses the item-list to find one item that can be + * merged with the incoming packet. If merge successfully, the merged + * packets are chained together; if not, insert the incoming packet into + * the item-list. + */ +static uint64_t +gro_tcp4_reassemble(struct gro_tcp_item_list *list, + struct rte_mbuf *pkt, + uint32_t pkt_sent_seq, + uint32_t pkt_idx) +{ + struct gro_tcp_item *items; + struct ipv4_hdr *ipv4_hdr1; + struct tcp_hdr *tcp_hdr1; + uint16_t ipv4_ihl1, tcp_hl1, tcp_dl1; + + items = list->items; + ipv4_hdr1 = (struct ipv4_hdr *)(rte_pktmbuf_mtod(pkt, struct + ether_hdr *) + 1); + ipv4_ihl1 = IPv4_HDR_LEN(ipv4_hdr1); + tcp_hdr1 = (struct tcp_hdr *)((char *)ipv4_hdr1 + ipv4_ihl1); + tcp_hl1 = TCP_HDR_LEN(tcp_hdr1); + tcp_dl1 = rte_be_to_cpu_16(ipv4_hdr1->total_length) - ipv4_ihl1 + - tcp_hl1; + + for (uint16_t i = 0; i < list->nb_item; i++) { + /* check if the two packets are neighbor */ + if ((pkt_sent_seq ^ items[i].next_sent_seq) == 0) { + struct ipv4_hdr *ipv4_hdr2; + struct tcp_hdr *tcp_hdr2; + uint16_t ipv4_ihl2, tcp_hl2; + struct rte_mbuf *tail; + + ipv4_hdr2 = (struct ipv4_hdr *) (rte_pktmbuf_mtod( + items[i].segment, struct ether_hdr *) + + 1); + + /* check if the option fields equal */ + if (tcp_hl1 > sizeof(struct tcp_hdr)) { + ipv4_ihl2 = IPv4_HDR_LEN(ipv4_hdr2); + tcp_hdr2 = (struct tcp_hdr *) + ((char *)ipv4_hdr2 + ipv4_ihl2); + tcp_hl2 = TCP_HDR_LEN(tcp_hdr2); + if ((tcp_hl1 != tcp_hl2) || + (memcmp(tcp_hdr1 + 1, tcp_hdr2 + 1, + tcp_hl2 - sizeof(struct tcp_hdr)) + != 0)) + continue; + } + /* check if the packet length will be beyond 64K */ + if (items[i].segment->pkt_len + tcp_dl1 > UINT16_MAX) + goto merge_fail; + + /* remove the header of the incoming packet */ + 
rte_pktmbuf_adj(pkt, sizeof(struct ether_hdr) + + ipv4_ihl1 + tcp_hl1); + /* chain the two packet together */ + tail = rte_pktmbuf_lastseg(items[i].segment); + tail->next = pkt; + + /* update IP header for the merged packet */ + ipv4_hdr2->total_length = rte_cpu_to_be_16( + rte_be_to_cpu_16(ipv4_hdr2->total_length) + + tcp_dl1); + + /* update the next expected sequence number */ + items[i].next_sent_seq += tcp_dl1; + + /* update mbuf metadata for the merged packet */ + items[i].segment->nb_segs++; + items[i].segment->pkt_len += pkt->pkt_len; + + return items[i].segment_idx + 1; + } + } + +merge_fail: + /* fail to merge. Insert the incoming packet into the item-list */ + items[list->nb_item].next_sent_seq = pkt_sent_seq + tcp_dl1; + items[list->nb_item].segment = pkt; + items[list->nb_item].segment_idx = pkt_idx; + list->nb_item++; + + return 0; +} + +uint32_t +rte_gro_tcp4_reassemble_burst(struct rte_hash *hash_tbl, + struct rte_mbuf **pkts, + const uint32_t nb_pkts) +{ + struct ether_hdr *eth_hdr; + struct ipv4_hdr *ipv4_hdr; + struct tcp_hdr *tcp_hdr; + uint16_t ipv4_ihl, tcp_hl, tcp_dl, tcp_cksum, ip_cksum; + uint32_t sent_seq; + struct gro_tcp4_pre_rules key; + struct gro_tcp_item_list *list; + + /* preallocated items. Each packet has nb_pkts items */ + struct gro_tcp_item items_pool[nb_pkts * nb_pkts]; + + struct gro_tcp_info gro_infos[nb_pkts]; + uint64_t ol_flags, idx; + int ret, is_performed_gro = 0; + uint32_t nb_after_gro = nb_pkts; + + if (hash_tbl == NULL || pkts == NULL || nb_pkts == 0) { + printf("GRO TCP4: invalid parameters\n"); + goto end; + } + memset(&key, 0, sizeof(struct gro_tcp4_pre_rules)); + + for (uint32_t i = 0; i < nb_pkts; i++) { + gro_infos[i].nb_merged_pkts = 1; + + eth_hdr = rte_pktmbuf_mtod(pkts[i], struct ether_hdr *); + ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1); + ipv4_ihl = IPv4_HDR_LEN(ipv4_hdr); + + /* 1. 
check if the packet should be processed */ + if (ipv4_ihl < sizeof(struct ipv4_hdr)) + continue; + if (ipv4_hdr->next_proto_id != IPPROTO_TCP) + continue; + if ((ipv4_hdr->fragment_offset & + rte_cpu_to_be_16(IPV4_HDR_DF_MASK)) + == 0) + continue; + + tcp_hdr = (struct tcp_hdr *)((char *)ipv4_hdr + ipv4_ihl); + tcp_hl = TCP_HDR_LEN(tcp_hdr); + tcp_dl = rte_be_to_cpu_16(ipv4_hdr->total_length) - ipv4_ihl + - tcp_hl; + if (tcp_dl == 0) + continue; + + ol_flags = pkts[i]->ol_flags; + /** + * 2. if HW rx checksum offload isn't enabled, recalculate the + * checksum in SW. Then, check if the checksum is correct + */ + if ((ol_flags & PKT_RX_IP_CKSUM_MASK) != + PKT_RX_IP_CKSUM_UNKNOWN) { + if (ol_flags == PKT_RX_IP_CKSUM_BAD) + continue; + } else { + ip_cksum = ipv4_hdr->hdr_checksum; + ipv4_hdr->hdr_checksum = 0; + ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr); + if (ipv4_hdr->hdr_checksum ^ ip_cksum) + continue; + } + + if ((ol_flags & PKT_RX_L4_CKSUM_MASK) != + PKT_RX_L4_CKSUM_UNKNOWN) { + if (ol_flags == PKT_RX_L4_CKSUM_BAD) + continue; + } else { + tcp_cksum = tcp_hdr->cksum; + tcp_hdr->cksum = 0; + tcp_hdr->cksum = rte_ipv4_udptcp_cksum + (ipv4_hdr, tcp_hdr); + if (tcp_hdr->cksum ^ tcp_cksum) + continue; + } + + /* 3. 
search for the corresponding item-list for the packet */ + key.eth_saddr = eth_hdr->s_addr; + key.eth_daddr = eth_hdr->d_addr; + key.ip_src_addr = rte_be_to_cpu_32(ipv4_hdr->src_addr); + key.ip_dst_addr = rte_be_to_cpu_32(ipv4_hdr->dst_addr); + key.src_port = rte_be_to_cpu_16(tcp_hdr->src_port); + key.dst_port = rte_be_to_cpu_16(tcp_hdr->dst_port); + key.recv_ack = rte_be_to_cpu_32(tcp_hdr->recv_ack); + key.tcp_flags = tcp_hdr->tcp_flags; + + sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq); + ret = rte_hash_lookup_data(hash_tbl, &key, (void **)&list); + + /* try to reassemble the packet */ + if (ret >= 0) { + idx = gro_tcp4_reassemble(list, pkts[i], sent_seq, i); + /* merge successfully, update gro_info */ + if (idx > 0) { + gro_infos[i].nb_merged_pkts = 0; + gro_infos[--idx].nb_merged_pkts++; + nb_after_gro--; + } + } else { + /** + * fail to find a item-list. Allocate a new item-list + * for the incoming packet and insert it into the hash + * table. + */ + list = &(gro_infos[i].item_list); + list->items = &(items_pool[nb_pkts * i]); + list->nb_item = 1; + list->items[0].next_sent_seq = sent_seq + tcp_dl; + list->items[0].segment = pkts[i]; + list->items[0].segment_idx = i; + + if (unlikely(rte_hash_add_key_data(hash_tbl, &key, list) + != 0)) + printf("GRO TCP hash insert fail.\n"); + + is_performed_gro = 1; + } + } + + /** + * if there are packets been merged, update their checksum, + * and remove useless packet addresses from packet array + */ + if (nb_after_gro < nb_pkts) { + struct rte_mbuf *tmp[nb_pkts]; + + memset(tmp, 0, sizeof(struct rte_mbuf *) * nb_pkts); + /* update checksum */ + for (uint32_t i = 0, j = 0; i < nb_pkts; i++) { + if (gro_infos[i].nb_merged_pkts > 1) + gro_tcp4_cksum_update(pkts[i]); + if (gro_infos[i].nb_merged_pkts != 0) + tmp[j++] = pkts[i]; + } + /* update the packet array */ + rte_memcpy(pkts, tmp, nb_pkts * sizeof(struct rte_mbuf *)); + } + + /* if GRO is performed, reset the hash table */ + if (is_performed_gro) + 
rte_hash_reset(hash_tbl); +end: + return nb_after_gro; +} diff --git a/lib/librte_gro/rte_gro_tcp.h b/lib/librte_gro/rte_gro_tcp.h new file mode 100644 index 0000000..aa99a06 --- /dev/null +++ b/lib/librte_gro/rte_gro_tcp.h @@ -0,0 +1,114 @@ +#ifndef _RTE_GRO_TCP_H_ +#define _RTE_GRO_TCP_H_ + +#include +#include +#include +#include +#include +#include + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN +#define TCP_HDR_LEN(tcph) \ + ((tcph->data_off >> 4) * 4) +#define IPv4_HDR_LEN(iph) \ + ((iph->version_ihl & 0x0f) * 4) +#else +#define TCP_DATAOFF_MASK 0x0f +#define TCP_HDR_LEN(tcph) \ + ((tcph->data_off & TCP_DATAOFF_MASK) * 4) +#define IPv4_HDR_LEN(iph) \ + ((iph->version_ihl >> 4) * 4) +#endif + +#define IPV4_HDR_DF_SHIFT 14 +#define IPV4_HDR_DF_MASK (1 << IPV4_HDR_DF_SHIFT) + +#define RTE_GRO_TCP_HASH_ENTRIES_MIN RTE_HASH_BUCKET_ENTRIES +#define RTE_GRO_TCP_HASH_ENTRIES_MAX RTE_HASH_ENTRIES_MAX + +/** + * key structure of TCP ipv4 hash table. It describes the prerequsite + * rules of merging packets. + */ +struct gro_tcp4_pre_rules { + struct ether_addr eth_saddr; + struct ether_addr eth_daddr; + uint32_t ip_src_addr; + uint32_t ip_dst_addr; + + uint32_t recv_ack; /**< acknowledgment sequence number. */ + uint16_t src_port; + uint16_t dst_port; + uint8_t tcp_flags; /**< TCP flags. */ + + uint8_t padding[3]; +}; + +/** + * Item structure + */ +struct gro_tcp_item { + struct rte_mbuf *segment; /**< packet address. */ + uint32_t next_sent_seq; /**< sequence number of the next packet. */ + uint32_t segment_idx; /**< packet index. */ +} __rte_cache_aligned; + +/** + * Item-list structure, which is the value in the TCP ipv4 hash table. + */ +struct gro_tcp_item_list { + struct gro_tcp_item *items; /**< items array */ + uint32_t nb_item; /**< item number */ +}; + +/** + * Local data structure. Every packet has an object of this structure, + * which is used for reassembling. 
+ */ +struct gro_tcp_info { + struct gro_tcp_item_list item_list; /**< preallocated item-list */ + uint32_t nb_merged_pkts; /**< the number of merged packets */ +}; + +/** + * Create a new TCP ipv4 GRO lookup table. + * + * @param name + * Lookup table name + * @param nb_entries + * Lookup table elements number, whose value should be larger than or + * equal to RTE_GRO_TCP_HASH_ENTRIES_MIN, less than or equal to + * RTE_GRO_TCP_HASH_ENTRIES_MAX, and a power of two. + * @param socket_id + * socket id + * @return + * lookup table address + */ +struct rte_hash * +rte_gro_tcp4_tbl_create(char *name, uint32_t nb_entries, + uint16_t socket_id); +/** + * This function reassembles a bulk of TCP IPv4 packets. Non-TCP IPv4 + * packets are not processed. + * + * @param hash_tbl + * Lookup table used to reassemble packets. It stores key-value pairs. + * The key describes the prerequisite rules to merge two TCP IPv4 packets; + * the value is a pointer to an item-list, which contains + * packets that have the same prerequisite TCP IPv4 rules. Note that + * applications need to guarantee the hash_tbl is clean when they first + * call this function. + * @param pkts + * Packets to reassemble. + * @param nb_pkts + * The number of packets to reassemble. + * @return + * The packet number after GRO. If reassembly succeeds, the value is + * less than nb_pkts; if not, the value is equal to nb_pkts. 
+ */ +uint32_t +rte_gro_tcp4_reassemble_burst(struct rte_hash *hash_tbl, + struct rte_mbuf **pkts, + const uint32_t nb_pkts); +#endif diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 0e0b600..521d20e 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -99,6 +99,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_RING) += -lrte_ring _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL) += -lrte_eal _LDLIBS-$(CONFIG_RTE_LIBRTE_CMDLINE) += -lrte_cmdline _LDLIBS-$(CONFIG_RTE_LIBRTE_REORDER) += -lrte_reorder +_LDLIBS-$(CONFIG_RTE_LIBRTE_GRO) += -lrte_gro ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n) # plugins (link only if static libraries) -- 2.7.4