DPDK patches and discussions
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Legacy, Allain (Wind River)" <allain.legacy@windriver.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Osman, Dahir (Wind River)" <dahir.osman@windriver.com>
Subject: Re: [dpdk-dev] [PATCH] ip_frag: free mbufs on reassembly table destroy
Date: Sun, 4 Jun 2017 16:20:56 +0000
Message-ID: <2601191342CEEE43887BDE71AB9772583FB058B4@IRSMSX109.ger.corp.intel.com>
In-Reply-To: <20170425170450.173221-1-allain.legacy@windriver.com>

Hi Allain,

> -----Original Message-----
> From: Allain Legacy [mailto:allain.legacy@windriver.com]
> Sent: Tuesday, April 25, 2017 6:05 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; Osman, Dahir (Wind River) <dahir.osman@windriver.com>
> Subject: [PATCH] ip_frag: free mbufs on reassembly table destroy
> 
> From: Dahir Osman <dahir.osman@windriver.com>
> 
> The rte_ip_frag_table_destroy procedure simply releases the memory for the
> table without freeing the packet buffers that may be referenced in the hash
> table for in-flight or incomplete packet reassembly operations.  To prevent
> leaking mbufs, go through the list of fragments and free each one
> individually.
> 
> Reported-by: Matt Peters <matt.peters@windriver.com>
> Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
> ---
>  lib/librte_ip_frag/ip_frag_common.h       | 20 ++++++++++++++++++++
>  lib/librte_ip_frag/rte_ip_frag.h          |  7 ++-----
>  lib/librte_ip_frag/rte_ip_frag_common.c   | 12 ++++++++++++
>  lib/librte_ip_frag/rte_ipfrag_version.map |  7 +++++++
>  4 files changed, 41 insertions(+), 5 deletions(-)
> 
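Just to illustrate for the archive: the leak can be hit with a completely
standard usage pattern, e.g. (sizes and variable names illustrative only):

	struct rte_ip_frag_death_row dr = { .cnt = 0 };
	struct rte_ip_frag_tbl *tbl;

	tbl = rte_ip_frag_table_create(1024, 16, 1024 * 16,
		frag_cycles, SOCKET_ID_ANY);

	/* datapath: some reassemblies never complete, so their
	 * fragments stay referenced from inside the table */
	mb = rte_ipv4_frag_reassemble_packet(tbl, &dr, mb, tms, ip_hdr);

	/* without this patch, the in-flight mbufs are leaked here */
	rte_ip_frag_table_destroy(tbl);
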
> diff --git a/lib/librte_ip_frag/ip_frag_common.h b/lib/librte_ip_frag/ip_frag_common.h
> index 835e4f93f..9f5619651 100644
> --- a/lib/librte_ip_frag/ip_frag_common.h
> +++ b/lib/librte_ip_frag/ip_frag_common.h
> @@ -130,6 +130,26 @@ ip_frag_free(struct ip_frag_pkt *fp, struct rte_ip_frag_death_row *dr)
>  	dr->cnt = k;
>  }
> 
> +/* delete fragment's mbufs immediately instead of using death row */
> +static inline void
> +ip_frag_free_immediate(struct ip_frag_pkt *fp)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i < fp->last_idx; i++) {
> +		if (fp->frags[i].mb != NULL) {
> +			IP_FRAG_LOG(DEBUG, "%s:%d\n"
> +			    "mbuf: %p, tms: %" PRIu64", key: <%" PRIx64 ", %#x>\n",
> +			    __func__, __LINE__, fp->frags[i].mb, fp->start,
> +			    fp->key.src_dst[0], fp->key.id);
> +			rte_pktmbuf_free(fp->frags[i].mb);
> +			fp->frags[i].mb = NULL;
> +		}
> +	}
> +
> +	fp->last_idx = 0;
> +}
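
(Side note: ip_frag_free() above defers the actual mbuf frees to a
caller-owned death row, which the application flushes on its datapath,
e.g.:

	rte_ip_frag_free_death_row(&dr, 3 /* prefetch */);

so an immediate-free variant makes sense at destroy time, when no death
row is in play.)
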
> +
>  /* if key is empty, mark key as in use */
>  static inline void
>  ip_frag_inuse(struct rte_ip_frag_tbl *tbl, const struct  ip_frag_pkt *fp)
> diff --git a/lib/librte_ip_frag/rte_ip_frag.h b/lib/librte_ip_frag/rte_ip_frag.h
> index 6708906d3..ff16f4c52 100644
> --- a/lib/librte_ip_frag/rte_ip_frag.h
> +++ b/lib/librte_ip_frag/rte_ip_frag.h
> @@ -180,11 +180,8 @@ struct rte_ip_frag_tbl * rte_ip_frag_table_create(uint32_t bucket_num,
>   * @param tbl
>   *   Fragmentation table to free.
>   */
> -static inline void
> -rte_ip_frag_table_destroy(struct rte_ip_frag_tbl *tbl)
> -{
> -	rte_free(tbl);
> -}
> +void
> +rte_ip_frag_table_destroy(struct rte_ip_frag_tbl *tbl);
> 
>  /**
>   * This function implements the fragmentation of IPv6 packets.
> diff --git a/lib/librte_ip_frag/rte_ip_frag_common.c b/lib/librte_ip_frag/rte_ip_frag_common.c
> index 6176ff4e0..70964101c 100644
> --- a/lib/librte_ip_frag/rte_ip_frag_common.c
> +++ b/lib/librte_ip_frag/rte_ip_frag_common.c
> @@ -109,6 +109,18 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries,
>  	return tbl;
>  }
> 
> +/* delete fragmentation table */
> +void
> +rte_ip_frag_table_destroy(struct rte_ip_frag_tbl *tbl)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i < tbl->nb_entries; i++)
> +		ip_frag_free_immediate(&tbl->pkt[i]);


Looks ok, just one thought: wouldn't it be better (faster) in most cases to
iterate over the LRU list instead? I suppose we usually wouldn't have nearly
all entries filled in.
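
Something like this, perhaps (just a sketch, assuming all live entries are
kept on tbl->lru, as elsewhere in the library; untested):

void
rte_ip_frag_table_destroy(struct rte_ip_frag_tbl *tbl)
{
	struct ip_frag_pkt *fp;

	/* walk only the entries that actually hold fragments */
	TAILQ_FOREACH(fp, &tbl->lru, lru)
		ip_frag_free_immediate(fp);

	rte_free(tbl);
}
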
Konstantin

> +
> +	rte_free(tbl);
> +}
> +
>  /* dump frag table statistics to file */
>  void
>  rte_ip_frag_table_statistics_dump(FILE *f, const struct rte_ip_frag_tbl *tbl)
> diff --git a/lib/librte_ip_frag/rte_ipfrag_version.map b/lib/librte_ip_frag/rte_ipfrag_version.map
> index 354fa0822..29e2cfea5 100644
> --- a/lib/librte_ip_frag/rte_ipfrag_version.map
> +++ b/lib/librte_ip_frag/rte_ipfrag_version.map
> @@ -11,3 +11,10 @@ DPDK_2.0 {
> 
>  	local: *;
>  };
> +
> +DPDK_17.05 {
> +    global:
> +
> +    rte_ip_frag_table_destroy;
> +
> +} DPDK_2.0;
> --
> 2.12.1

Thread overview: 6+ messages
2017-04-25 17:04 Allain Legacy
2017-06-04 16:20 ` Ananyev, Konstantin [this message]
2017-06-05 14:49   ` Legacy, Allain
2017-06-05 15:49 ` [dpdk-dev] [PATCH v2] " Allain Legacy
2017-06-05 22:25   ` Ananyev, Konstantin
2017-06-22 15:56     ` Thomas Monjalon
