DPDK patches and discussions
From: kumaraparameshwaran rathinavel <kumaraparamesh92@gmail.com>
To: "Hu, Jiayu" <jiayu.hu@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [PATCH v4] gro : ipv6 changes to support GRO for TCP/ipv6
Date: Thu, 8 Jun 2023 22:22:04 +0530	[thread overview]
Message-ID: <CANxNyat1x=NaQYo7c1BS1DLa8pduUXB4wDTdqwuvXPhd_bc+hg@mail.gmail.com> (raw)
In-Reply-To: <CY5PR11MB6487580375B22A0B8640CC659250A@CY5PR11MB6487.namprd11.prod.outlook.com>


Hi Jiayu,

Thanks for the quick review. I will address the comments; I need
clarification on one of them. Please find my reply inline.

On Thu, Jun 8, 2023 at 9:35 AM Hu, Jiayu <jiayu.hu@intel.com> wrote:

> Hi Kumara,
>
> Please see replies inline.
>
> In addition, you need to update the programmer guide in
> generic_receive_offload_lib.rst,
> and release note release_23_07.rst.
>
> Thanks,
> Jiayu
>
> > -----Original Message-----
> > From: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
> > Sent: Tuesday, June 6, 2023 10:58 PM
> > To: Hu, Jiayu <jiayu.hu@intel.com>
> > Cc: dev@dpdk.org; Kumara Parameshwaran
> > <kumaraparamesh92@gmail.com>
> > Subject: [PATCH v4] gro : ipv6 changes to support GRO for TCP/ipv6
> >
> > The patch adds GRO support for TCP/IPv6 packets. This does not include
> > support for VxLAN or UDP/IPv6 packets.
> >
> > Signed-off-by: Kumara Parameshwaran <kumaraparamesh92@gmail.com>
> > diff --git a/lib/gro/gro_tcp.c b/lib/gro/gro_tcp.c new file mode 100644
> index
> > 0000000000..02a7d0f8c5
> > --- /dev/null
> > +++ b/lib/gro/gro_tcp.c
>
> For gro_tcp.c and gro_tcp.h, it's better to add "_internal" in the file
> name.
>
> > @@ -0,0 +1,128 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2017 Intel Corporation
> > + */
> > +#include <rte_malloc.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_ethdev.h>
> > +
> > +#include "gro_tcp.h"
> > +
> > +static inline uint32_t
> > +find_an_empty_item(struct gro_tcp_item *items,
> > +     uint32_t max_item_num)
> > +{
> > +     uint32_t i;
> > +
> > +     for (i = 0; i < max_item_num; i++)
> > +             if (items[i].firstseg == NULL)
> > +                     return i;
> > +     return INVALID_ARRAY_INDEX;
> > +}
> > +
> > +uint32_t
> > +insert_new_tcp_item(struct rte_mbuf *pkt,
> > +             struct gro_tcp_item *items,
> > +             uint32_t *item_num,
> > +             uint32_t max_item_num,
> > +             uint64_t start_time,
> > +             uint32_t prev_idx,
> > +             uint32_t sent_seq,
> > +             uint16_t ip_id,
> > +             uint8_t is_atomic)
>
> This function can be inline.
>
> > +{
> > +     uint32_t item_idx;
> > +
> > +     item_idx = find_an_empty_item(items, max_item_num);
> > +     if (item_idx == INVALID_ARRAY_INDEX)
> > +             return INVALID_ARRAY_INDEX;
> > +
> > +     items[item_idx].firstseg = pkt;
> > +     items[item_idx].lastseg = rte_pktmbuf_lastseg(pkt);
> > +     items[item_idx].start_time = start_time;
> > +     items[item_idx].next_pkt_idx = INVALID_ARRAY_INDEX;
> > +     items[item_idx].sent_seq = sent_seq;
> > +     items[item_idx].ip_id = ip_id;
> > +     items[item_idx].nb_merged = 1;
> > +     items[item_idx].is_atomic = is_atomic;
> > +     (*item_num) += 1;
> > +
> > +     /* if the previous packet exists, chain them together. */
> > +     if (prev_idx != INVALID_ARRAY_INDEX) {
> > +             items[item_idx].next_pkt_idx =
> > +                     items[prev_idx].next_pkt_idx;
> > +             items[prev_idx].next_pkt_idx = item_idx;
> > +     }
> > +
> > +     return item_idx;
> > +}
> > +
> > +uint32_t
> > +delete_tcp_item(struct gro_tcp_item *items, uint32_t item_idx,
> > +             uint32_t *item_num,
> > +             uint32_t prev_item_idx)
> > +{
> > +     uint32_t next_idx = items[item_idx].next_pkt_idx;
> > +
> > +     /* NULL indicates an empty item */
> > +     items[item_idx].firstseg = NULL;
> > +     (*item_num) -= 1;
> > +     if (prev_item_idx != INVALID_ARRAY_INDEX)
> > +             items[prev_item_idx].next_pkt_idx = next_idx;
> > +
> > +     return next_idx;
> > +}
>
> This function can be inline.
>
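Agreed, this can move into the header as static inline. A rough sketch of
what that would look like, using stand-in types so it compiles on its own
(the real struct gro_tcp_item lives in gro_tcp.h):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define INVALID_ARRAY_INDEX 0xffffffffUL

/* Minimal stand-in for struct gro_tcp_item; only the fields this
 * helper touches. */
struct gro_tcp_item {
	void *firstseg;        /* NULL marks an empty slot */
	uint32_t next_pkt_idx; /* chain of unmergeable packets */
};

/* Same body as the .c version, but static inline in the header so the
 * compiler can inline it at every call site. */
static inline uint32_t
delete_tcp_item(struct gro_tcp_item *items, uint32_t item_idx,
		uint32_t *item_num, uint32_t prev_item_idx)
{
	uint32_t next_idx = items[item_idx].next_pkt_idx;

	/* NULL indicates an empty item */
	items[item_idx].firstseg = NULL;
	(*item_num) -= 1;
	if (prev_item_idx != INVALID_ARRAY_INDEX)
		items[prev_item_idx].next_pkt_idx = next_idx;

	return next_idx;
}
```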
> > +
> > +int32_t
> > +gro_process_tcp_item(struct rte_mbuf *pkt,
> > +     struct rte_tcp_hdr *tcp_hdr,
> > +     int32_t tcp_dl,
> > +     struct gro_tcp_item *items,
> > +     uint32_t item_idx,
> > +     uint32_t *item_num,
> > +     uint32_t max_item_num,
> > +     uint16_t ip_id,
> > +     uint8_t is_atomic,
> > +     uint64_t start_time)
>
> It is for internal use, so it's better to remove "gro_" from the function
> name.
>
> > +{
> > +     uint32_t cur_idx;
> > +     uint32_t prev_idx;
> > +     int cmp;
> > +     uint32_t sent_seq;
> > +
> > +     sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
> > +     /*
> > +      * Check all packets in the flow and try to find a neighbor for
> > +      * the input packet.
> > +      */
> > +     cur_idx = item_idx;
> > +     prev_idx = cur_idx;
> > +     do {
> > +             cmp = check_seq_option(&items[cur_idx], tcp_hdr,
> > +                             sent_seq, ip_id, pkt->l4_len, tcp_dl, 0,
> > +                             is_atomic);
> > +             if (cmp) {
> > +                     if (merge_two_tcp_packets(&items[cur_idx],
> > +                                             pkt, cmp, sent_seq, ip_id,
> 0))
> > +                             return 1;
> > +                     /*
> > +                      * Fail to merge the two packets, as the packet
> > +                      * length is greater than the max value. Store
> > +                      * the packet into the flow.
> > +                      */
> > +                     if (insert_new_tcp_item(pkt, items, item_num,
> > max_item_num, start_time, cur_idx,
> > +                                             sent_seq, ip_id,
> is_atomic) ==
> > +                                     INVALID_ARRAY_INDEX)
> > +                             return -1;
> > +                     return 0;
> > +             }
> > +             prev_idx = cur_idx;
> > +             cur_idx = items[cur_idx].next_pkt_idx;
> > +     } while (cur_idx != INVALID_ARRAY_INDEX);
> > +
> > +     /* Fail to find a neighbor, so store the packet into the flow. */
> > +     if (insert_new_tcp_item(pkt, items, item_num, max_item_num,
> > start_time, prev_idx, sent_seq,
> > +                             ip_id, is_atomic) == INVALID_ARRAY_INDEX)
> > +             return -1;
> > +
> > +     return 0;
> > +}
> > diff --git a/lib/gro/gro_tcp.h b/lib/gro/gro_tcp.h new file mode 100644
> index
> > 0000000000..4b5b4eda9c
> > --- /dev/null
> > +++ b/lib/gro/gro_tcp.h
> > @@ -0,0 +1,209 @@
> > +#ifndef _GRO_TCP_H_
> > +#define _GRO_TCP_H_
> > +
> > +#define INVALID_ARRAY_INDEX 0xffffffffUL
> > +
> > +#include <rte_tcp.h>
> > +
> > +/*
> > + * The max length of a IPv4 packet, which includes the length of the L3
> > + * header, the L4 header and the data payload.
> > + */
> > +#define MAX_IP_PKT_LENGTH UINT16_MAX
> > +
> > +/* The maximum TCP header length */
> > +#define MAX_TCP_HLEN 60
> > +#define INVALID_TCP_HDRLEN(len) \
> > +     (((len) < sizeof(struct rte_tcp_hdr)) || ((len) > MAX_TCP_HLEN))
> > +
> > +struct gro_tcp_flow {
>
> This structure name is confusing. In the upper layer, tcp4 and tcp6 have
> gro_tcp4_flow and gro_tcp6_flow, which represent a flow. Inside
> gro_tcp4/6_flow there are keys, represented by struct tcp4/6_flow_key,
> and inside struct tcp4/6_flow_key there is struct gro_tcp_flow. Struct
> gro_tcp_flow needs a rename, e.g. to common_tcp_flow_key.
>
> > +     struct rte_ether_addr eth_saddr;
> > +     struct rte_ether_addr eth_daddr;
> > +     uint32_t recv_ack;
> > +     uint16_t src_port;
> > +     uint16_t dst_port;
> > +};
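Sure, will rename. For reference, roughly this shape (sketch only; a
6-byte stub stands in for struct rte_ether_addr so it is self-contained):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for struct rte_ether_addr (6-byte MAC), sketch only. */
struct ether_addr_stub {
	uint8_t addr_bytes[6];
};

/* Renamed from struct gro_tcp_flow: this is not a flow, but the part
 * of the flow key shared by TCP/IPv4 and TCP/IPv6. */
struct common_tcp_flow_key {
	struct ether_addr_stub eth_saddr;
	struct ether_addr_stub eth_daddr;
	uint32_t recv_ack;
	uint16_t src_port;
	uint16_t dst_port;
};
```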
> > +
> > +#define ASSIGN_TCP_FLOW_KEY(k1, k2) \
>
> Ditto. The macro needs rename, like ASSIGN_COMMON_TCP_FLOW_KEY.
>
> > +     do {\
> > +             rte_ether_addr_copy(&(k1->eth_saddr), &(k2->eth_saddr)); \
> > +             rte_ether_addr_copy(&(k1->eth_daddr), &(k2->eth_daddr));
> > \
> > +             k2->recv_ack = k1->recv_ack; \
> > +             k2->src_port = k1->src_port; \
> > +             k2->dst_port = k1->dst_port; \
> > +     } while (0)
> > +
> > +struct gro_tcp_item {
> > +     /*
> > +      * The first MBUF segment of the packet. If the value
> > +      * is NULL, it means the item is empty.
> > +      */
> > +     struct rte_mbuf *firstseg;
> > +     /* The last MBUF segment of the packet */
> > +     struct rte_mbuf *lastseg;
> > +     /*
> > +      * The time when the first packet is inserted into the table.
> > +      * This value won't be updated, even if the packet is merged
> > +      * with other packets.
> > +      */
> > +     uint64_t start_time;
> > +     /*
> > +      * next_pkt_idx is used to chain the packets that
> > +      * are in the same flow but can't be merged together
> > +      * (e.g. caused by packet reordering).
> > +      */
> > +     uint32_t next_pkt_idx;
> > +     /* TCP sequence number of the packet */
> > +     uint32_t sent_seq;
> > +     /* IPv4 ID of the packet */
> > +     uint16_t ip_id;
>
> The ip_id field is not used by tcp6. It's better to use a union that
> holds ip_id for IPv4 and an unused member for IPv6, with comments, to
> avoid confusion.
>
> > +     /* the number of merged packets */
> > +     uint16_t nb_merged;
> > +     /* Indicate if IPv4 ID can be ignored */
> > +     uint8_t is_atomic;
> > +};
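Will do. For the union suggestion above, something along these lines
(sketch; the union adds no space, it only documents the IPv6 case):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: make it explicit that ip_id only means something for IPv4. */
union gro_tcp_item_l3 {
	uint16_t ip_id;  /* IPv4 ID of the packet (TCP/IPv4 items only) */
	uint16_t unused; /* unused for TCP/IPv6; there is no IP ID */
};
```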
> > +
> > +uint32_t
> > +insert_new_tcp_item(struct rte_mbuf *pkt,
> > +             struct gro_tcp_item *items,
> > +             uint32_t *item_num,
> > +             uint32_t table_size,
> > +             uint64_t start_time,
> > +             uint32_t prev_idx,
> > +             uint32_t sent_seq,
> > +             uint16_t ip_id,
> > +             uint8_t is_atomic);
> > +
> > +uint32_t
> > +delete_tcp_item(struct gro_tcp_item *items,
> > +             uint32_t item_idx,
> > +             uint32_t *item_num,
> > +             uint32_t prev_item_idx);
> > +
> > +int32_t
> > +gro_process_tcp_item(struct rte_mbuf *pkt,
> > +     struct rte_tcp_hdr *tcp_hdr,
> > +     int32_t tcp_dl,
> > +     struct gro_tcp_item *items,
> > +     uint32_t item_idx,
> > +     uint32_t *item_num,
> > +     uint32_t table_size,
> > +     uint16_t ip_id,
> > +     uint8_t is_atomic,
> > +     uint64_t start_time);
> > +
> > +/*
> > + * Merge two TCP packets without updating checksums.
> > + * If cmp is larger than 0, append the new packet to the
> > + * original packet. Otherwise, pre-pend the new packet to
> > + * the original packet.
> > + */
> > +static inline int
> > +merge_two_tcp_packets(struct gro_tcp_item *item,
> > +             struct rte_mbuf *pkt,
> > +             int cmp,
> > +             uint32_t sent_seq,
> > +             uint16_t ip_id,
> > +             uint16_t l2_offset)
> > +{
> > +     struct rte_mbuf *pkt_head, *pkt_tail, *lastseg;
> > +     uint16_t hdr_len, l2_len;
> > +
> > +     if (cmp > 0) {
> > +             pkt_head = item->firstseg;
> > +             pkt_tail = pkt;
> > +     } else {
> > +             pkt_head = pkt;
> > +             pkt_tail = item->firstseg;
> > +     }
> > +
> > +     /* check if the IPv4 packet length is greater than the max value */
> > +     hdr_len = l2_offset + pkt_head->l2_len + pkt_head->l3_len +
> > +             pkt_head->l4_len;
> > +     l2_len = l2_offset > 0 ? pkt_head->outer_l2_len : pkt_head->l2_len;
> > +     if (unlikely(pkt_head->pkt_len - l2_len + pkt_tail->pkt_len -
> > +                             hdr_len > MAX_IP_PKT_LENGTH))
> > +             return 0;
> > +
> > +     /* remove the packet header for the tail packet */
> > +     rte_pktmbuf_adj(pkt_tail, hdr_len);
> > +
> > +     /* chain two packets together */
> > +     if (cmp > 0) {
> > +             item->lastseg->next = pkt;
> > +             item->lastseg = rte_pktmbuf_lastseg(pkt);
> > +             /* update IP ID to the larger value */
> > +             item->ip_id = ip_id;
> > +     } else {
> > +             lastseg = rte_pktmbuf_lastseg(pkt);
> > +             lastseg->next = item->firstseg;
> > +             item->firstseg = pkt;
> > +             /* update sent_seq to the smaller value */
> > +             item->sent_seq = sent_seq;
> > +             item->ip_id = ip_id;
> > +     }
> > +     item->nb_merged++;
> > +
> > +     /* update MBUF metadata for the merged packet */
> > +     pkt_head->nb_segs += pkt_tail->nb_segs;
> > +     pkt_head->pkt_len += pkt_tail->pkt_len;
> > +
> > +     return 1;
> > +}
> > +
> > +/*
> > + * Check if two TCP/IPv4 packets are neighbors.
> > + */
> > +static inline int
> > +check_seq_option(struct gro_tcp_item *item,
> > +             struct rte_tcp_hdr *tcph,
> > +             uint32_t sent_seq,
> > +             uint16_t ip_id,
> > +             uint16_t tcp_hl,
> > +             uint16_t tcp_dl,
> > +             uint16_t l2_offset,
> > +             uint8_t is_atomic)
> > +{
> > +     struct rte_mbuf *pkt_orig = item->firstseg;
> > +     char *iph_orig;
> > +     struct rte_tcp_hdr *tcph_orig;
> > +     uint16_t len, tcp_hl_orig;
> > +
> > +     iph_orig = (char *)(rte_pktmbuf_mtod(pkt_orig, char *) +
> > +                     l2_offset + pkt_orig->l2_len);
> > +     tcph_orig = (struct rte_tcp_hdr *)(iph_orig + pkt_orig->l3_len);
> > +     tcp_hl_orig = pkt_orig->l4_len;
> > +
> > +     /* Check if TCP option fields equal */
> > +     len = RTE_MAX(tcp_hl, tcp_hl_orig) - sizeof(struct rte_tcp_hdr);
> > +     if ((tcp_hl != tcp_hl_orig) || ((len > 0) &&
> > +                             (memcmp(tcph + 1, tcph_orig + 1,
> > +                                     len) != 0)))
> > +             return 0;
> > +
> > +     /* Don't merge packets whose DF bits are different */
> > +     if (unlikely(item->is_atomic ^ is_atomic))
> > +             return 0;
> > +
> > +     /* check if the two packets are neighbors */
> > +     len = pkt_orig->pkt_len - l2_offset - pkt_orig->l2_len -
> > +             pkt_orig->l3_len - tcp_hl_orig;
> > +     if ((sent_seq == item->sent_seq + len) && (is_atomic ||
> > +                             (ip_id == item->ip_id + 1)))
> > +             /* append the new packet */
> > +             return 1;
> > +     else if ((sent_seq + tcp_dl == item->sent_seq) && (is_atomic ||
> > +                             (ip_id + item->nb_merged == item->ip_id)))
> > +             /* pre-pend the new packet */
> > +             return -1;
> > +
> > +     return 0;
> > +}
> > +
> > +static inline int
> > +is_same_tcp_flow(struct gro_tcp_flow *k1, struct gro_tcp_flow *k2) {
> > +     return (!memcmp(k1, k2, sizeof(struct gro_tcp_flow))); }
>
> I think this function needs a rename, as its result cannot tell whether
> the two packets belong to the same TCP flow.
>
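Agreed, the current name overstates what is compared. A sketch of the
rename, with a stub key struct (the real one is in gro_tcp.h):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stub for the common TCP key fields; sketch only. */
struct common_tcp_key_stub {
	uint32_t recv_ack;
	uint16_t src_port;
	uint16_t dst_port;
};

/* Renamed: it only tells whether the *common* key fields match, not
 * whether two packets belong to the same TCP flow. */
static inline int
is_same_common_tcp_key(struct common_tcp_key_stub *k1,
		struct common_tcp_key_stub *k2)
{
	return !memcmp(k1, k2, sizeof(*k1));
}
```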
> > +
> > +#endif
> > diff --git a/lib/gro/gro_tcp4.c b/lib/gro/gro_tcp4.c index
> > 0014096e63..ffc33747c4 100644
> > --- a/lib/gro/gro_tcp4.c
> > +++ b/lib/gro/gro_tcp4.c
> > @@ -7,6 +7,7 @@
> >  #include <rte_ethdev.h>
> >
> >  #include "gro_tcp4.h"
> > +#include "gro_tcp.h"
> >
> >  void *
> >  gro_tcp4_tbl_create(uint16_t socket_id, @@ -30,7 +31,7 @@
> > gro_tcp4_tbl_create(uint16_t socket_id,
> >       if (tbl == NULL)
> >               return NULL;
> >
> > -     size = sizeof(struct gro_tcp4_item) * entries_num;
> > +     size = sizeof(struct gro_tcp_item) * entries_num;
> >       tbl->items = rte_zmalloc_socket(__func__,
> >                       size,
> >                       RTE_CACHE_LINE_SIZE,
> > @@ -71,18 +72,6 @@ gro_tcp4_tbl_destroy(void *tbl)
> >       rte_free(tcp_tbl);
> >  }
> >
> > -static inline uint32_t
> > -find_an_empty_item(struct gro_tcp4_tbl *tbl) -{
> > -     uint32_t i;
> > -     uint32_t max_item_num = tbl->max_item_num;
> > -
> > -     for (i = 0; i < max_item_num; i++)
> > -             if (tbl->items[i].firstseg == NULL)
> > -                     return i;
> > -     return INVALID_ARRAY_INDEX;
> > -}
> > -
> >  static inline uint32_t
> >  find_an_empty_flow(struct gro_tcp4_tbl *tbl)  { @@ -95,56 +84,6 @@
> > find_an_empty_flow(struct gro_tcp4_tbl *tbl)
> >       return INVALID_ARRAY_INDEX;
> >  }
> >
> > -static inline uint32_t
> > -insert_new_item(struct gro_tcp4_tbl *tbl,
> > -             struct rte_mbuf *pkt,
> > -             uint64_t start_time,
> > -             uint32_t prev_idx,
> > -             uint32_t sent_seq,
> > -             uint16_t ip_id,
> > -             uint8_t is_atomic)
> > -{
> > -     uint32_t item_idx;
> > -
> > -     item_idx = find_an_empty_item(tbl);
> > -     if (item_idx == INVALID_ARRAY_INDEX)
> > -             return INVALID_ARRAY_INDEX;
> > -
> > -     tbl->items[item_idx].firstseg = pkt;
> > -     tbl->items[item_idx].lastseg = rte_pktmbuf_lastseg(pkt);
> > -     tbl->items[item_idx].start_time = start_time;
> > -     tbl->items[item_idx].next_pkt_idx = INVALID_ARRAY_INDEX;
> > -     tbl->items[item_idx].sent_seq = sent_seq;
> > -     tbl->items[item_idx].ip_id = ip_id;
> > -     tbl->items[item_idx].nb_merged = 1;
> > -     tbl->items[item_idx].is_atomic = is_atomic;
> > -     tbl->item_num++;
> > -
> > -     /* if the previous packet exists, chain them together. */
> > -     if (prev_idx != INVALID_ARRAY_INDEX) {
> > -             tbl->items[item_idx].next_pkt_idx =
> > -                     tbl->items[prev_idx].next_pkt_idx;
> > -             tbl->items[prev_idx].next_pkt_idx = item_idx;
> > -     }
> > -
> > -     return item_idx;
> > -}
> > -
> > -static inline uint32_t
> > -delete_item(struct gro_tcp4_tbl *tbl, uint32_t item_idx,
> > -             uint32_t prev_item_idx)
> > -{
> > -     uint32_t next_idx = tbl->items[item_idx].next_pkt_idx;
> > -
> > -     /* NULL indicates an empty item */
> > -     tbl->items[item_idx].firstseg = NULL;
> > -     tbl->item_num--;
> > -     if (prev_item_idx != INVALID_ARRAY_INDEX)
> > -             tbl->items[prev_item_idx].next_pkt_idx = next_idx;
> > -
> > -     return next_idx;
> > -}
> > -
> >  static inline uint32_t
> >  insert_new_flow(struct gro_tcp4_tbl *tbl,
> >               struct tcp4_flow_key *src,
> > @@ -159,13 +98,10 @@ insert_new_flow(struct gro_tcp4_tbl *tbl,
> >
> >       dst = &(tbl->flows[flow_idx].key);
> >
> > -     rte_ether_addr_copy(&(src->eth_saddr), &(dst->eth_saddr));
> > -     rte_ether_addr_copy(&(src->eth_daddr), &(dst->eth_daddr));
> > +     ASSIGN_TCP_FLOW_KEY((&src->tcp_flow), (&dst->tcp_flow));
> > +
> >       dst->ip_src_addr = src->ip_src_addr;
> >       dst->ip_dst_addr = src->ip_dst_addr;
> > -     dst->recv_ack = src->recv_ack;
> > -     dst->src_port = src->src_port;
> > -     dst->dst_port = src->dst_port;
> >
> >       tbl->flows[flow_idx].start_index = item_idx;
> >       tbl->flow_num++;
> > @@ -173,21 +109,6 @@ insert_new_flow(struct gro_tcp4_tbl *tbl,
> >       return flow_idx;
> >  }
> >
> > -/*
> > - * update the packet length for the flushed packet.
> > - */
> > -static inline void
> > -update_header(struct gro_tcp4_item *item) -{
> > -     struct rte_ipv4_hdr *ipv4_hdr;
> > -     struct rte_mbuf *pkt = item->firstseg;
> > -
> > -     ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
> > -                     pkt->l2_len);
> > -     ipv4_hdr->total_length = rte_cpu_to_be_16(pkt->pkt_len -
> > -                     pkt->l2_len);
> > -}
> > -
> >  int32_t
> >  gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >               struct gro_tcp4_tbl *tbl,
> > @@ -195,16 +116,15 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,  {
> >       struct rte_ether_hdr *eth_hdr;
> >       struct rte_ipv4_hdr *ipv4_hdr;
> > -     struct rte_tcp_hdr *tcp_hdr;
> > -     uint32_t sent_seq;
> >       int32_t tcp_dl;
> > +     struct rte_tcp_hdr *tcp_hdr;
> >       uint16_t ip_id, hdr_len, frag_off, ip_tlen;
> >       uint8_t is_atomic;
> > +     uint32_t sent_seq;
>
> No need to change tcp_hdr and sent_seq here.
>
>> The flow matching is done in this function, and if the flow is not found,
>> insert_new_tcp_item is invoked from here as well. Did you mean to move
>> that into process_tcp_item too? If so, we should pass start_idx as
>> INVALID_ARRAY_INDEX, and in process_tcp_item, when the index is
>> INVALID_ARRAY_INDEX, do an insert_new_tcp_item and return without doing
>> the sequence-number checks, etc.
>>
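If I understood correctly, the no-flow case would then look roughly like
this (control-flow sketch only; the helpers are stubbed, names are mine):

```c
#include <assert.h>
#include <stdint.h>

#define INVALID_ARRAY_INDEX 0xffffffffUL

/* Stub: pretend insertion always succeeds at slot 0. */
static uint32_t
insert_new_tcp_item_stub(void)
{
	return 0;
}

/* Sketch: process_tcp_item() absorbs the "no matching flow" case. The
 * caller passes INVALID_ARRAY_INDEX as start_idx when flow lookup
 * failed; we then just store the packet and skip the sequence-number
 * neighbor checks entirely. */
static int32_t
process_tcp_item_sketch(uint32_t start_idx)
{
	if (start_idx == INVALID_ARRAY_INDEX) {
		if (insert_new_tcp_item_stub() == INVALID_ARRAY_INDEX)
			return -1;
		return 0;
	}
	/* ... existing neighbor search / merge path ... */
	return 0;
}
```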
> >
> >       struct tcp4_flow_key key;
> > -     uint32_t cur_idx, prev_idx, item_idx;
> > +     uint32_t item_idx;
> >       uint32_t i, max_flow_num, remaining_flow_num;
> > -     int cmp;
> >       uint8_t find;
> >
> >       /*
> > @@ -216,7 +136,7 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >
> >       eth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
> >       ipv4_hdr = (struct rte_ipv4_hdr *)((char *)eth_hdr + pkt->l2_len);
> > -     tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
> > +     tcp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_tcp_hdr *,
> > +pkt->l2_len + pkt->l3_len);
> >       hdr_len = pkt->l2_len + pkt->l3_len + pkt->l4_len;
> >
> >       /*
> > @@ -230,7 +150,6 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >       ip_tlen = rte_be_to_cpu_16(ipv4_hdr->total_length);
> >       if (pkt->pkt_len > (uint32_t)(ip_tlen + pkt->l2_len))
> >               rte_pktmbuf_trim(pkt, pkt->pkt_len - ip_tlen -
> pkt->l2_len);
> > -
> >       /*
> >        * Don't process the packet whose payload length is less than or
> >        * equal to 0.
> > @@ -239,6 +158,13 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >       if (tcp_dl <= 0)
> >               return -1;
> >
> > +     rte_ether_addr_copy(&(eth_hdr->src_addr),
> > &(key.tcp_flow.eth_saddr));
> > +     rte_ether_addr_copy(&(eth_hdr->dst_addr),
> > &(key.tcp_flow.eth_daddr));
> > +     key.ip_src_addr = ipv4_hdr->src_addr;
> > +     key.ip_dst_addr = ipv4_hdr->dst_addr;
> > +     key.tcp_flow.src_port = tcp_hdr->src_port;
> > +     key.tcp_flow.dst_port = tcp_hdr->dst_port;
> > +     key.tcp_flow.recv_ack = tcp_hdr->recv_ack;
> >       /*
> >        * Save IPv4 ID for the packet whose DF bit is 0. For the packet
> >        * whose DF bit is 1, IPv4 ID is ignored.
> > @@ -246,15 +172,6 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >       frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
> >       is_atomic = (frag_off & RTE_IPV4_HDR_DF_FLAG) ==
> > RTE_IPV4_HDR_DF_FLAG;
> >       ip_id = is_atomic ? 0 : rte_be_to_cpu_16(ipv4_hdr->packet_id);
> > -     sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
> > -
> > -     rte_ether_addr_copy(&(eth_hdr->src_addr), &(key.eth_saddr));
> > -     rte_ether_addr_copy(&(eth_hdr->dst_addr), &(key.eth_daddr));
> > -     key.ip_src_addr = ipv4_hdr->src_addr;
> > -     key.ip_dst_addr = ipv4_hdr->dst_addr;
> > -     key.src_port = tcp_hdr->src_port;
> > -     key.dst_port = tcp_hdr->dst_port;
> > -     key.recv_ack = tcp_hdr->recv_ack;
> >
> >       /* Search for a matched flow. */
> >       max_flow_num = tbl->max_flow_num;
> > @@ -270,63 +187,44 @@ gro_tcp4_reassemble(struct rte_mbuf *pkt,
> >               }
> >       }
> >
> > -     /*
> > -      * Fail to find a matched flow. Insert a new flow and store the
> > -      * packet into the flow.
> > -      */
> >       if (find == 0) {
> > -             item_idx = insert_new_item(tbl, pkt, start_time,
> > -                             INVALID_ARRAY_INDEX, sent_seq, ip_id,
> > -                             is_atomic);
> > +             sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
> > +             item_idx = insert_new_tcp_item(pkt, tbl->items, &tbl-
> > >item_num, tbl->max_item_num, start_time,
> > +                                             INVALID_ARRAY_INDEX,
> > sent_seq, ip_id,
> > +                                             is_atomic);
> >               if (item_idx == INVALID_ARRAY_INDEX)
> >                       return -1;
> >               if (insert_new_flow(tbl, &key, item_idx) ==
> > -                             INVALID_ARRAY_INDEX) {
> > +                     INVALID_ARRAY_INDEX) {
> >                       /*
> >                        * Fail to insert a new flow, so delete the
> >                        * stored packet.
> > -                      */
> > -                     delete_item(tbl, item_idx, INVALID_ARRAY_INDEX);
> > +                     */
> > +                     delete_tcp_item(tbl->items, item_idx, &tbl-
> > >item_num,
> > +INVALID_ARRAY_INDEX);
> >                       return -1;
> >               }
> >               return 0;
> >       }
> > +     item_idx = tbl->flows[i].start_index;
>
> No need to update item_idx, and you can directly pass
> tbl->flows[i].start_index to
> gro_process_tcp_item(). And same in gro_tcp6_reassemble().
>
> >
> > -     /*
> > -      * Check all packets in the flow and try to find a neighbor for
> > -      * the input packet.
> > -      */
> > -     cur_idx = tbl->flows[i].start_index;
> > -     prev_idx = cur_idx;
> > -     do {
> > -             cmp = check_seq_option(&(tbl->items[cur_idx]), tcp_hdr,
> > -                             sent_seq, ip_id, pkt->l4_len, tcp_dl, 0,
> > -                             is_atomic);
> > -             if (cmp) {
> > -                     if (merge_two_tcp4_packets(&(tbl->items[cur_idx]),
> > -                                             pkt, cmp, sent_seq, ip_id,
> 0))
> > -                             return 1;
> > -                     /*
> > -                      * Fail to merge the two packets, as the packet
> > -                      * length is greater than the max value. Store
> > -                      * the packet into the flow.
> > -                      */
> > -                     if (insert_new_item(tbl, pkt, start_time, cur_idx,
> > -                                             sent_seq, ip_id,
> is_atomic) ==
> > -                                     INVALID_ARRAY_INDEX)
> > -                             return -1;
> > -                     return 0;
> > -             }
> > -             prev_idx = cur_idx;
> > -             cur_idx = tbl->items[cur_idx].next_pkt_idx;
> > -     } while (cur_idx != INVALID_ARRAY_INDEX);
> > +     return gro_process_tcp_item(pkt, tcp_hdr, tcp_dl, tbl->items,
> > item_idx,
> > +                                             &tbl->item_num, tbl-
> > >max_item_num,
> > +                                             ip_id, is_atomic,
> start_time);
> > +}
> >
> > -     /* Fail to find a neighbor, so store the packet into the flow. */
> > -     if (insert_new_item(tbl, pkt, start_time, prev_idx, sent_seq,
> > -                             ip_id, is_atomic) == INVALID_ARRAY_INDEX)
> > -             return -1;
>



Thread overview: 32+ messages
2022-10-20 18:07 [PATCH] " Kumara Parameshwaran
2022-10-20 18:14 ` [PATCH v2] " Kumara Parameshwaran
2023-05-12  2:47   ` Hu, Jiayu
2023-05-16  9:28     ` kumaraparameshwaran rathinavel
2023-05-25 11:22       ` kumaraparameshwaran rathinavel
2023-05-31  8:20         ` Hu, Jiayu
2023-06-02  6:02   ` [PATCH v3] gro : ipv6-gro review comments to reduce code duplication across v4 and v6 Kumara Parameshwaran
2023-06-02  6:34   ` [PATCH v3] gro : ipv6 changes to support GRO for TCP/ipv6 Kumara Parameshwaran
2023-06-06  4:35     ` Hu, Jiayu
2023-06-06  9:31       ` kumaraparameshwaran rathinavel
2023-06-06 14:58   ` [PATCH v4] " Kumara Parameshwaran
2023-06-08  4:05     ` Hu, Jiayu
2023-06-08 16:52       ` kumaraparameshwaran rathinavel [this message]
2023-06-09  1:04         ` Hu, Jiayu
2023-06-12 11:05   ` [PATCH v5] " Kumara Parameshwaran
2023-06-12 11:23   ` [PATCH v6] " Kumara Parameshwaran
2023-06-12 11:31   ` [PATCH v7] " Kumara Parameshwaran
2023-06-12 12:04     ` kumaraparameshwaran rathinavel
2023-06-13  2:25     ` Hu, Jiayu
2023-06-14  3:43       ` kumaraparameshwaran rathinavel
2023-06-14  4:56         ` Hu, Jiayu
2023-06-15  5:40   ` [PATCH v8] " Kumara Parameshwaran
2023-06-15  6:20   ` [PATCH v9] " Kumara Parameshwaran
2023-06-15  6:30     ` kumaraparameshwaran rathinavel
2023-06-15  8:01     ` Hu, Jiayu
2023-06-15  9:16       ` kumaraparameshwaran rathinavel
2023-06-19 13:30     ` Thomas Monjalon
2023-06-21  8:25   ` [PATCH v10 1/2] gro : refactor IPv4 to add GRO support for IPv6 Kumara Parameshwaran
2023-06-21  8:25     ` [PATCH v10 2/2] gro : add support for IPv6 GRO Kumara Parameshwaran
2023-06-21  8:38   ` [PATCH v11 1/2] gro : refactor IPv4 to add GRO support for IPv6 Kumara Parameshwaran
2023-06-21  8:38     ` [PATCH v11 2/2] gro : add support for IPv6 GRO Kumara Parameshwaran
2023-06-27 15:47       ` Thomas Monjalon
