From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Thomas Monjalon <thomas@monjalon.net>,
"Lipiec, Herakliusz" <herakliusz.lipiec@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] example/ipv4_multicast: fix app hanging when using clone
Date: Tue, 13 Nov 2018 09:47:48 +0000 [thread overview]
Message-ID: <2601191342CEEE43887BDE71AB977258010CE49B5D@IRSMSX106.ger.corp.intel.com> (raw)
In-Reply-To: <4835654.xzScbSBJzk@xps>
>
> Hi,
>
> 12/11/2018 21:46, Herakliusz Lipiec:
> > This example was dropping packets when using clone (ip 224.0.0.103).
The problem is that the ipv4_multicast app:
1. invokes rte_pktmbuf_clone() for the packet
(that creates a new mbuf with IND_ATTACHED_MBUF set in ol_flags);
2. creates a new mbuf containing the L2 header and chains it with the mbuf cloned at step 1;
3. copies ol_flags from the cloned mbuf to the new header mbuf.
So after step 3 the L2 header mbuf also has IND_ATTACHED_MBUF set in ol_flags.
That makes rte_pktmbuf_free() wrongly assume that this is an indirect mbuf,
which causes all sorts of problems: incorrect behavior, silent memory corruption, etc.
The easiest way to reproduce the problem:
- run ipv4_multicast using two ports:
  ipv4_multicast -l 0,1 -- -p 0x3
- send 8K+ packets to one of the ports with dest ip address 224.0.0.103;
ipv4_multicast will stop forwarding any packets.
In fact, there is no reason to copy ol_flags from the cloned packet.
So the fix is just to remove that code.
Konstantin
>
> What is this IP?
> What is clone?
>
> > The problem was that mbufs were not freed. This was caused by copying
> > ol_flags from cloned mbuf to header mbufs.
>
> Mbuf is not freed because of ol_flags?
> I feel this description should be improved.
>
> > Signed-off-by: Herakliusz Lipiec <herakliusz.lipiec@intel.com>
> [...]
> > --- a/examples/ipv4_multicast/main.c
> > +++ b/examples/ipv4_multicast/main.c
> > @@ -266,8 +266,6 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
> > hdr->tx_offload = pkt->tx_offload;
> > hdr->hash = pkt->hash;
> >
> > - hdr->ol_flags = pkt->ol_flags;
> > -
> > __rte_mbuf_sanity_check(hdr, 1);
> > return hdr;
> > }
>
>
Thread overview: 20+ messages
2018-11-12 20:46 Herakliusz Lipiec
2018-11-12 22:44 ` Ananyev, Konstantin
2018-11-13 9:25 ` Thomas Monjalon
2018-11-13 9:47 ` Ananyev, Konstantin [this message]
2018-11-13 10:21 ` Burakov, Anatoly
2018-11-13 10:28 ` Ananyev, Konstantin
2018-11-13 11:49 ` [dpdk-dev] [PATCH v2] " Herakliusz Lipiec
2018-11-13 11:51 ` Ananyev, Konstantin
2018-11-14 11:33 ` Wang, Dong1
2018-11-18 21:56 ` [dpdk-dev] [dpdk-stable] " Thomas Monjalon
2018-11-14 2:28 ` [dpdk-dev] " Wang, Dong1
2018-11-14 9:02 ` Ananyev, Konstantin
2018-11-14 10:09 ` Wang, Dong1
2018-11-14 10:17 ` Ananyev, Konstantin
2018-11-14 11:06 ` Wang, Dong1
2018-11-14 11:19 ` Ananyev, Konstantin
2018-11-14 11:32 ` Wang, Dong1
2018-11-20 5:40 ` Zhao1, Wei
2018-11-20 9:52 ` Ananyev, Konstantin
2018-11-14 10:21 ` Lipiec, Herakliusz