patches for DPDK stable branches
From: Mohsin Kazmi <mohsin.kazmi14@gmail.com>
To: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: Olivier Matz <olivier.matz@6wind.com>, dev@dpdk.org, stable@dpdk.org
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v3] net: fix Intel-specific Prepare the outer ipv4 hdr for checksum
Date: Tue, 3 Aug 2021 13:49:37 +0100	[thread overview]
Message-ID: <CAKkt9+LyKRqc0BzaK5n1CJ-a6_nGt6vA77urDt+oFSD3+YUhkA@mail.gmail.com> (raw)
In-Reply-To: <6d7c41bf-6e9f-8fbd-bfc8-11ca07c777d8@oktetlabs.ru>

On Sat, Jul 31, 2021 at 1:49 PM Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:

> On 7/30/21 2:11 PM, Olivier Matz wrote:
> > On Wed, Jul 28, 2021 at 06:46:53PM +0300, Andrew Rybchenko wrote:
> >> On 7/7/21 12:40 PM, Mohsin Kazmi wrote:
> >>> Preparing the headers for hardware offload misses the
> >>> outer IPv4 checksum offload. This results in a bad
> >>> checksum computed by the hardware NIC.
> >>>
> >>> This patch fixes the issue by setting the outer ipv4
> >>> checksum field to 0.
> >>>
> >>> Fixes: 4fb7e803eb1a ("ethdev: add Tx preparation")
> >>> Cc: stable@dpdk.org
> >>>
> >>> Signed-off-by: Mohsin Kazmi <mohsin.kazmi14@gmail.com>
> >>> Acked-by: Qi Zhang <qi.z.zhang@intel.com>
> >>> ---
> >>> v3:
> >>>      * Update the conditional test with PKT_TX_OUTER_IP_CKSUM.
> >>>      * Update the commit title with "Intel-specific".
> >>>
> >>> v2:
> >>>      * Update the commit message with Fixes.
> >>>
> >>>    lib/net/rte_net.h | 15 +++++++++++++--
> >>>    1 file changed, 13 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/lib/net/rte_net.h b/lib/net/rte_net.h
> >>> index 434435ffa2..3f4c8c58b9 100644
> >>> --- a/lib/net/rte_net.h
> >>> +++ b/lib/net/rte_net.h
> >>> @@ -125,11 +125,22 @@ rte_net_intel_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
> >>>      * Mainly it is required to avoid fragmented headers check if
> >>>      * no offloads are requested.
> >>>      */
> >>> -   if (!(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG)))
> >>> +   if (!(ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK | PKT_TX_TCP_SEG |
> >>> +                     PKT_TX_OUTER_IP_CKSUM)))
> >>>             return 0;
> >>> -   if (ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6))
> >>> +   if (ol_flags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)) {
> >>>             inner_l3_offset += m->outer_l2_len + m->outer_l3_len;
> >>> +           /*
> >>> +            * prepare outer ipv4 header checksum by setting it to 0,
> >>> +            * in order to be computed by hardware NICs.
> >>> +            */
> >>> +           if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
> >>> +                   ipv4_hdr = rte_pktmbuf_mtod_offset(m,
> >>> +                                   struct rte_ipv4_hdr *, m->outer_l2_len);
> >>> +                   ipv4_hdr->hdr_checksum = 0;
> >>
> >> Here we assume that the field is located in the first segment.
> >> Unlikely but it still could be false. We must handle it properly.
> >
> > This is specified in the API comment, so I think it has to be checked
> > by the caller.
>
> If not, what's the point of spoiling memory here when a stricter
> check is done a few lines below?
>
We have two possibilities:
1) Move the whole block of code above after the strict check: the strict
check would then use m->outer_l2_len + m->outer_l3_len directly, without
any condition, and we would be at the mercy of drivers to initialize
these fields to 0 when outer headers are not used. Drivers usually don't
set fields they are not interested in, for performance reasons, since
setting these values per packet would cost them additional cycles.
2) Move only the PKT_TX_OUTER_IP_CKSUM conditional after the strict
fragmented-header check: in that case, every packet hits an extra
conditional check without benefiting from it, again a performance
penalty.

I am more inclined towards solution 1. But I also welcome other
suggestions/comments.
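To make the ordering concrete, here is a minimal, self-contained C sketch of solution 1, where the outer IPv4 checksum is zeroed only after the contiguity check has confirmed the outer header lies in the first segment. The struct, flag, and function names below are simplified stand-ins for illustration, not the real DPDK types or API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the real Tx offload flag. */
#define TX_OUTER_IP_CKSUM (1ULL << 0)

/* Simplified stand-in for the first segment of an mbuf chain. */
struct mbuf_sketch {
	uint8_t *buf;          /* packet data in the first segment */
	uint16_t data_len;     /* bytes contiguous in the first segment */
	uint16_t outer_l2_len; /* outer L2 header length */
	uint16_t outer_l3_len; /* outer L3 header length */
};

/*
 * Solution 1: run the "is the outer header contiguous?" check first and
 * only then zero the checksum field, so we never write past the first
 * segment. Returns 0 on success, -1 if the outer header is fragmented.
 */
static int
prepare_outer_ipv4_cksum(struct mbuf_sketch *m, uint64_t ol_flags)
{
	if (!(ol_flags & TX_OUTER_IP_CKSUM))
		return 0;

	/* Strict check: the whole outer header must sit in segment 0. */
	if ((size_t)m->outer_l2_len + m->outer_l3_len > m->data_len)
		return -1;

	/*
	 * The checksum field sits at byte offset 10 within the IPv4
	 * header; zero it so the NIC computes it.
	 */
	memset(m->buf + m->outer_l2_len + 10, 0, 2);
	return 0;
}
```

With this ordering the write can never touch memory outside the first segment; the trade-off, as noted above, is that the length fields are read even for packets whose drivers never initialized them.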

>
> >>> +           }
> >>> +   }
> >>>     /*
> >>>      * Check if headers are fragmented.
> >>>
> >>
>
>

Thread overview: 16+ messages
2021-06-30 11:04 [dpdk-stable] [PATCH v2] net: prepare " Mohsin Kazmi
2021-06-30 14:09 ` Olivier Matz
2021-07-07  9:14   ` Mohsin Kazmi
2021-07-22 19:53     ` [dpdk-stable] [dpdk-dev] " Thomas Monjalon
2021-08-03 12:29       ` Mohsin Kazmi
2021-07-07  9:40 ` [dpdk-stable] [PATCH v3] net: fix Intel-specific Prepare " Mohsin Kazmi
2021-07-22 19:56   ` Thomas Monjalon
2021-07-27 12:52     ` Olivier Matz
2021-07-28 15:46   ` [dpdk-stable] [dpdk-dev] " Andrew Rybchenko
2021-07-30 11:11     ` Olivier Matz
2021-07-31 12:49       ` Andrew Rybchenko
2021-08-03 12:49         ` Mohsin Kazmi [this message]
2021-08-27 13:44           ` Mohsin Kazmi
2021-09-07 10:49   ` [dpdk-stable] [PATCH v4] net: fix Intel-specific Prepare the outer IPv4 " Mohsin Kazmi
2021-09-15 10:39     ` [dpdk-stable] [dpdk-dev] " Ferruh Yigit
2021-09-15 11:04     ` Ferruh Yigit
