DPDK patches and discussions
From: "Verma, Shally" <Shally.Verma@cavium.com>
To: "Trahe, Fiona" <fiona.trahe@intel.com>,
	Ahmed Mansour <ahmed.mansour@nxp.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>,
	"Athreya, Narayana Prasad" <NarayanaPrasad.Athreya@cavium.com>,
	"Gupta, Ashish" <Ashish.Gupta@cavium.com>,
	"Sahu, Sunila" <Sunila.Sahu@cavium.com>,
	"Challa, Mahipal" <Mahipal.Challa@cavium.com>,
	"Jain, Deepak K" <deepak.k.jain@intel.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>,
	Roy Pledge <roy.pledge@nxp.com>,
	Youri Querry <youri.querry_1@nxp.com>,
	"fiona.trahe@gmail.com" <fiona.trahe@gmail.com>,
	"Daly, Lee" <lee.daly@intel.com>,
	"Jozwiak, TomaszX" <tomaszx.jozwiak@intel.com>
Subject: Re: [dpdk-dev] [PATCH] compressdev: implement API - mbuf alternative
Date: Tue, 13 Mar 2018 08:14:49 +0000	[thread overview]
Message-ID: <CY4PR0701MB36348E983AE7156552BE8D89F0D20@CY4PR0701MB3634.namprd07.prod.outlook.com> (raw)
In-Reply-To: <348A99DA5F5B7549AA880327E580B435893478BA@IRSMSX101.ger.corp.intel.com>

Hi Fiona,

So I understand we're moving away from mbufs because of their size limitation (64k-1 per segment), their cache-line overhead, and the fact that they're better suited to networking applications. Given that, I see the benefit of having another structure for input data, but then what is the proposal for IPComp-like applications, where mbuf usage may be the better option? Should we keep support for both (mbuf and this structure) so that apps can use the appropriate data structure depending on their requirements?
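
For example (placeholder type and field names, just to make the question concrete, not a proposal of the actual layout), the op could carry either representation behind a small discriminator:

    #include <stdint.h>
    #include <sys/uio.h>          /* struct iovec */

    struct rte_mbuf;              /* as today, for mbuf-friendly users such as IPComp */

    /* Placeholder types only - illustrating what "support both" could mean */
    enum comp_buf_type {
        COMP_BUF_MBUF,            /* src/dst supplied as (chained) mbufs */
        COMP_BUF_SGL              /* src/dst supplied as an iovec-style list */
    };

    struct comp_sgl {
        struct iovec *iov;        /* array of {address, length} segments */
        uint16_t num_iov;
    };

    struct comp_op_data {
        enum comp_buf_type type;  /* selects which union member is valid */
        union {
            struct rte_mbuf *mbuf;
            struct comp_sgl *sgl;
        } src, dst;
    };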

Further comments on github.

Thanks
Shally

>-----Original Message-----
>From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
>Sent: 12 March 2018 21:31
>To: Ahmed Mansour <ahmed.mansour@nxp.com>; Verma, Shally <Shally.Verma@cavium.com>; dev@dpdk.org
>Cc: De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Athreya, Narayana Prasad <NarayanaPrasad.Athreya@cavium.com>;
>Gupta, Ashish <Ashish.Gupta@cavium.com>; Sahu, Sunila <Sunila.Sahu@cavium.com>; Challa, Mahipal
><Mahipal.Challa@cavium.com>; Jain, Deepak K <deepak.k.jain@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>; Roy
>Pledge <roy.pledge@nxp.com>; Youri Querry <youri.querry_1@nxp.com>; fiona.trahe@gmail.com; Daly, Lee <lee.daly@intel.com>;
>Jozwiak, TomaszX <tomaszx.jozwiak@intel.com>
>Subject: RE: [dpdk-dev] [PATCH] compressdev: implement API - mbuf alternative
>
>Hi Shally, Ahmed, and anyone else interested in compressdev,
>
>I mentioned last week that we've been exploring using something other than mbufs to pass src/dst buffers to compressdev PMDs.
>
>Reasons:
> - mbuf data is limited to 64k-1 in each segment of a chained mbuf. Data for compression
>    can be greater, and it would add cycles to have to break it up into smaller segments.
> - data may originate in mbufs, but is more likely, particularly for storage use-cases, to
>    originate in other data structures.
> - There's a 2 cache-line overhead for every segment in a chain; most of this data
>    is network-related and not needed by compressdev.
>So moving to a custom structure would minimise memory overhead, remove the 64k-1 size
>restriction and give more flexibility if compressdev ever needs any comp-specific meta-data.
>
>We've come up with a compressdev-specific structure using struct iovec from sys/uio.h, which is commonly used by storage
>applications. This would replace the src and dest mbufs in the op.
>I'll not include the code here - Pablo will push that to github shortly and we'd appreciate review comments there.
>https://github.com/pablodelara/dpdk-draft-compressdev
>Just posting on the mailing list to give a heads-up and ensure this reaches a wider audience than may see it on github.
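>
>For anyone who just wants the shape of it without going to github, the idea is roughly
>along these lines (placeholder name and layout only - the real definition is in the draft):
>
>    #include <sys/uio.h>          /* struct iovec: { void *iov_base; size_t iov_len; } */
>    #include <stdint.h>
>
>    /* Placeholder only - see the github draft for the actual structure. */
>    struct comp_op_sgl {
>        struct iovec *iov;        /* array of {virt address, length} segments */
>        uint16_t num_iov;         /* iov_len is a size_t, so a segment is not
>                                   * capped at 64k-1 the way an mbuf segment is */
>    };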
>
>Note: We also considered having no data structures in the op; instead, the application
>would supply a callback which the PMD would use to retrieve meta-data (virt address, iova, length)
>for each subsequent segment as needed. While this is quite flexible and allows the application
>to keep its data in its native structures, it's likely to cost more cycles.
>So we're not proposing this at the moment, but hope to benchmark it later while the API is still experimental.
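>
>Purely to illustrate that alternative (hypothetical name and signature - nothing like this
>exists in the draft), such a callback could look something like:
>
>    #include <stdint.h>
>
>    /* Hypothetical: the PMD calls this to fetch meta-data for the next
>     * segment of src or dst data as it consumes/produces it.
>     * Returns 0 on success, < 0 when no further segment is available. */
>    typedef int (*comp_next_segment_cb_t)(void *app_cookie,
>                                          void **virt_addr,  /* out: virtual address */
>                                          uint64_t *iova,    /* out: IOVA / bus address */
>                                          uint32_t *length); /* out: segment length */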
>
>General feedback on direction is welcome here on the mailing list.
>For feedback on the details of implementation we would appreciate comments on github.
>
>Regards,
>Fiona.

Thread overview: 9+ messages
2018-03-12 16:01 Trahe, Fiona
2018-03-13  8:14 ` Verma, Shally [this message]
2018-03-13 15:52   ` Trahe, Fiona
2018-03-14 12:50     ` Verma, Shally
2018-03-14 18:39       ` Trahe, Fiona
2018-03-14 19:02         ` Ahmed Mansour
2018-03-15  4:11           ` Verma, Shally
2018-03-15  9:48             ` Trahe, Fiona
2018-03-13 11:16 ` Ananyev, Konstantin
