DPDK patches and discussions
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Olivier MATZ <olivier.matz@6wind.com>,
	Simon Kagstrom <simon.kagstrom@netinsight.net>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Zhang, Helin" <helin.zhang@intel.com>,
	"Gonzalez Monroy, Sergio" <sergio.gonzalez.monroy@intel.com>,
	"Burakov, Anatoly" <anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] [PATCH RFC] mbuf/ip_frag: Move mbuf chaining to common code
Date: Mon, 7 Sep 2015 09:13:21 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836A83CBA@irsmsx105.ger.corp.intel.com>
In-Reply-To: <55ED3D9A.7070607@6wind.com>

Hi lads,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier MATZ
> Sent: Monday, September 07, 2015 8:33 AM
> To: Simon Kagstrom; dev@dpdk.org; Zhang, Helin; Gonzalez Monroy, Sergio; Burakov, Anatoly
> Subject: Re: [dpdk-dev] [PATCH RFC] mbuf/ip_frag: Move mbuf chaining to common code
> 
> Hi Simon,
> 
> I think it's a good idea. Please see some minor comments below.
> 
> On 08/31/2015 02:41 PM, Simon Kagstrom wrote:
> > Chaining/segmenting mbufs can be useful in many places, so make it
> > global.
> >
> > Signed-off-by: Simon Kagstrom <simon.kagstrom@netinsight.net>
> > Signed-off-by: Johan Faltstrom <johan.faltstrom@netinsight.net>
> > ---
> > NOTE! Only compile-tested.
> >
> > We were looking for packet segmenting functionality in the MBUF API but
> > didn't find it. This patch moves the implementation, apart from the
> > things which look ip_frag-specific.
> >
> >  lib/librte_ip_frag/ip_frag_common.h      | 23 -----------------------
> >  lib/librte_ip_frag/rte_ipv4_reassembly.c |  7 +++++--
> >  lib/librte_ip_frag/rte_ipv6_reassembly.c |  7 +++++--
> >  lib/librte_mbuf/rte_mbuf.h               | 23 +++++++++++++++++++++++
> >  4 files changed, 33 insertions(+), 27 deletions(-)
> >
> > diff --git a/lib/librte_ip_frag/ip_frag_common.h b/lib/librte_ip_frag/ip_frag_common.h
> > index 6b2acee..cde6ed4 100644
> 
> > [...]
> 
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 8c2db1b..ef47256 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -1801,6 +1801,29 @@ static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
> >  }
> >
> >  /**
> > + * Chain an mbuf to another, thereby creating a segmented packet.
> > + *
> > + * @param head the head of the mbuf chain (the first packet)
> > + * @param tail the mbuf to put last in the chain
> > + */
> > +static inline void rte_pktmbuf_chain(struct rte_mbuf *head, struct rte_mbuf *tail)
> > +{
> > +	struct rte_mbuf *cur_tail;
> > +
> 
> Here, we could check if the pkt_len of the tail mbuf is 0. If
> that's the case, we can just free it and return. It would avoid
> having an empty segment inside the mbuf chain, which can be
> annoying.
> 
> if (unlikely(rte_pktmbuf_pkt_len(tail) == 0)) {
> 	rte_pktmbuf_free(tail);
> 	return;
> }

I wonder why we need to do that?
Probably the head mbuf is out of space and the caller wants to expand it using pktmbuf_chain()?
So in that case it seems logical to:
1) allocate a new mbuf (its pkt_len will be 0)
2) call pktmbuf_chain()

Konstantin
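
A minimal sketch of that scenario (the helper name, the mempool mp, and the error handling are illustrative only; the void rte_pktmbuf_chain() signature is the one proposed in this RFC):

	#include <errno.h>
	#include <rte_mbuf.h>

	/* 'head' has run out of tailroom: extend the packet with a fresh, empty segment */
	static int
	extend_pkt(struct rte_mempool *mp, struct rte_mbuf *head)
	{
		struct rte_mbuf *tail = rte_pktmbuf_alloc(mp);	/* fresh mbuf: pkt_len == 0 */

		if (tail == NULL)
			return -ENOMEM;
		rte_pktmbuf_chain(head, tail);
		return 0;
	}

New data would then be appended into that tail segment (e.g. with rte_pktmbuf_append()); with the pkt_len == 0 check suggested above, the freshly allocated tail would instead be freed straight away.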

> 
> > +	/* Chain 'tail' onto the old tail */
> > +	cur_tail = rte_pktmbuf_lastseg(head);
> > +	cur_tail->next = tail;
> > +
> > +	/* accumulate number of segments and total length. */
> > +	head->nb_segs = (uint8_t)(head->nb_segs + tail->nb_segs);
> 
> I'm wondering if we shouldn't check for overflow here. In
> that case we would need to have a return value in case of
> failure.
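
For reference, a minimal sketch of such a check, assuming nb_segs stays the 8-bit counter it is today (UINT8_MAX comes from <stdint.h>; returning -EOVERFLOW is just an illustrative choice):

	/* refuse to chain rather than silently wrapping the 8-bit segment counter */
	if (head->nb_segs + tail->nb_segs > UINT8_MAX)
		return -EOVERFLOW;

rte_pktmbuf_chain() would then return int (0 on success) instead of void.
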
> 
> > +	head->pkt_len += tail->pkt_len;
> > +
> > +	/* reset pkt_len and nb_segs for chained fragment. */
> > +	tail->pkt_len = tail->data_len;
> > +	tail->nb_segs = 1;
> 
> I don't think it's required to reset these fields in the tail mbuf.
> In any case, they will be reset again.
> 
> > +}
> > +
> > +/**
> >   * Dump an mbuf structure to the console.
> >   *
> >   * Dump all fields for the given packet mbuf and all its associated
> >
> 
> 
> Regards,
> Olivier

Thread overview: 15+ messages
2015-08-31 12:41 Simon Kagstrom
2015-09-07  7:32 ` Olivier MATZ
2015-09-07  9:13   ` Ananyev, Konstantin [this message]
2015-09-07  9:35     ` Olivier MATZ
2015-09-07 10:40       ` Simon Kågström
2015-09-07 11:43         ` [dpdk-dev] [PATCH v2] " Simon Kagstrom
2015-09-07 12:32           ` Ananyev, Konstantin
2015-09-07 12:41             ` Simon Kågström
2015-09-07 23:21               ` Ananyev, Konstantin
2015-09-08 10:40                 ` Simon Kågström
2015-09-09  8:22                   ` Ananyev, Konstantin
2015-09-07 12:50           ` [dpdk-dev] [PATCH v3] " Simon Kagstrom
2015-10-13 12:50             ` Simon Kagstrom
2015-10-13 13:17               ` Thomas Monjalon
2015-10-13 13:11             ` Olivier MATZ
