DPDK patches and discussions
From: "Trahe, Fiona" <fiona.trahe@intel.com>
To: "Verma, Shally" <Shally.Verma@cavium.com>
Cc: "Athreya, Narayana Prasad" <NarayanaPrasad.Athreya@cavium.com>,
	"Challa, Mahipal" <Mahipal.Challa@cavium.com>,
	"Gupta, Ashish" <Ashish.Gupta@cavium.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"ahmed.mansour@nxp.com" <ahmed.mansour@nxp.com>,
	"De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>,
	"Jain, Deepak K" <deepak.k.jain@intel.com>,
	 "Trahe, Fiona" <fiona.trahe@intel.com>
Subject: Re: [dpdk-dev] [RFC v4 1/1] lib/compressdev: Adding hash support
Date: Fri, 16 Mar 2018 14:21:11 +0000	[thread overview]
Message-ID: <348A99DA5F5B7549AA880327E580B4358935AD72@IRSMSX101.ger.corp.intel.com> (raw)
In-Reply-To: <CY4PR0701MB3634E014AEC5A00B63E82823F0D70@CY4PR0701MB3634.namprd07.prod.outlook.com>

Hi Shally,

//snip
> >> +	/**< window size range in bytes */
> >> +	int support_dict;
> >> +	/** Indicate algo support for dictionary load */
> >[Fiona] I need more info on this - What else is needed on the API to support it?
> [Shally] We will need to add an API, say, rte_comp_set_dictionary() to load pre-set dictionary.
> If a particular algorithm implementation supports dictionary load, then app can call this API before starting compression.
[Fiona] Ok, so maybe submit that in a patch on github, separate from the hash patch,
along with any needed APIs.
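As a point of discussion, a dictionary-load API could look like the sketch below. This is hypothetical: rte_comp_set_dictionary() is only proposed in this thread and does not exist in DPDK, and the struct name, stream-based signature, and stub body are assumptions for illustration.

```c
/* Hypothetical sketch: rte_comp_set_dictionary() is proposed above but not
 * part of DPDK; struct name and signature are assumptions. */
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

struct rte_comp_dictionary {
	const uint8_t *data; /* pre-set dictionary bytes */
	size_t len;          /* dictionary length in bytes */
};

/*
 * Attach a pre-set dictionary to a stream before the first compress op.
 * Returns 0 on success, -EINVAL on bad arguments. A real PMD would
 * program the dictionary into the device here instead of the stub body.
 */
static int
rte_comp_set_dictionary(void *stream, const struct rte_comp_dictionary *dict)
{
	if (stream == NULL || dict == NULL || dict->data == NULL ||
	    dict->len == 0)
		return -EINVAL;
	return 0;
}
```

The app would call this once per stream, after stream creation and before enqueueing the first op, so the PMD can program the device before any data flows.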
 
> 
> >
> >> +	struct rte_comp_hash_algo_capability *hash_capa;
> >> +	/** pointer to an array of hash capability struct */
> >[Fiona] Does the hash capability depend on the compression algorithm?
> >I'd have thought it's completely independent as it's computed
> >on the uncompressed data?
> >I suppose the same can be said of checksum.
> 
> [Shally] Ok, I missed setting the background here.
> Though hash can in practice be performed standalone on data, we are proposing it as a feature
> performed along with compression, and the same assumption holds for crc/adler. So we proposed
> it as part of the compression algo capability, e.g. an implementation may support deflate+sha1
> but not lzs+sha1.
> Do you want to propose hash/checksum as a standalone capability on the compression API?
> Shouldn't a standalone hash be part of the crypto spec? The same question applies to checksum,
> since an app can use a processor-optimized crc/adler library function if required. So I see
> standalone hash/checksums as duplicated features on the compression API.
> 
> I will issue the next patch on github once this requirement is resolved.
[Fiona] No, I wasn't suggesting a standalone op - just that I thought if a device had the capability
to do hash it could probably do it regardless of which algo is used.
But if you think there's a case where a device has deflate+hash capability but not lzs+sha1,
then the hash capability can be per-algo. Based on the above, can it just be added to the
existing feature-flags bitmask? There's probably no need for the separate hash and algo capability
structs you were proposing, since all capabilities are per-algo.
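Folding hash support into the per-algo feature-flags bitmask could look like the sketch below. The RTE_COMP_FF_HASH_* flag names, their bit positions, and the algo_supports() helper are invented here purely to illustrate the idea, not existing DPDK definitions.

```c
/* Sketch under assumptions: RTE_COMP_FF_HASH_* names and bit positions are
 * hypothetical, showing per-algo hash support as feature-flag bits. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define RTE_COMP_FF_HASH_SHA1   (1ULL << 10) /* hypothetical flag */
#define RTE_COMP_FF_HASH_SHA256 (1ULL << 11) /* hypothetical flag */

enum algo { ALGO_DEFLATE, ALGO_LZS };

struct algo_capa {
	enum algo algo;
	uint64_t feature_flags; /* per-algo features, hashes included */
};

/* Example device: deflate advertises SHA-1, lzs advertises no hash. */
static const struct algo_capa caps[] = {
	{ ALGO_DEFLATE, RTE_COMP_FF_HASH_SHA1 },
	{ ALGO_LZS, 0 },
};

/* Return nonzero if the given algo advertises the given feature flag. */
static int
algo_supports(enum algo a, uint64_t flag)
{
	size_t i;

	for (i = 0; i < sizeof(caps) / sizeof(caps[0]); i++)
		if (caps[i].algo == a)
			return (caps[i].feature_flags & flag) != 0;
	return 0;
}
```

This keeps the capability query a single bitmask test per algo, which matches the deflate+sha1-but-not-lzs+sha1 case above without adding a second capability array.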

> 
> >This all seems a bit awkward.
> >Would something like this be better:
> >
> >enum rte_comp_capability_type {
> >  RTE_COMP_CAPA_TYPE_ALGO,
> >  RTE_COMP_CAPA_TYPE_CHKSUM,
> >  RTE_COMP_CAPA_TYPE_HASH
> >};
> >
> >struct rte_comp_algo_capability {
> >	enum rte_comp_algorithm algo;
> >	/**< Compression Algorithm */
> >	uint64_t comp_feature_flags;
> >	/**< bitmap of flags for compression service features supported with this algo */
> >	struct rte_comp_param_range window_size;
> >	/**< window size range in bytes supported with this algo */
> >	/* int support_dict; */
> >	/**< Indicates algo support for dictionary load */
> >};
> >
> >struct rte_compressdev_capabilities {
> >	enum rte_comp_capability_type capa_type;
> >	union {
> >	    struct rte_comp_algo_capability algo_capa;
> >	    int checksum_capa; /* bitmask of supported checksums */
> >	    int hash_capa; /* bitmask of supported hashes - this could be a struct if more data needed */
> >	};
> >};
> >
> >>
> >>  /** Structure used to capture a capability of a comp device */
> >>  struct rte_compressdev_capabilities {
> >> -	/* TODO */
> >>  	enum rte_comp_algorithm algo;
> >> +	/** Compression Algorithm */
> >> +	struct rte_comp_algo_capability alg_capa;
> >> +	/**< Compression algo capabilities */
> >>  	uint64_t comp_feature_flags;
> >> -	/**< bitmap of flags for compression service features*/
> >> -	struct rte_comp_param_range window_size;
> >> -	/**< window size range in bytes */
> >> +	/**< bitmap of flags for compression service features */
> >>  };
> >> +
//snip
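For reference, the union-based capability layout sketched in the quoted text compiles as below. A named union is used in place of the anonymous one for pre-C11 portability, and the supporting enums/structs are stubbed in; this remains a proposal sketch from the thread, not a merged DPDK API.

```c
/* Compilable rendering of the capability sketch from the thread above;
 * supporting types are minimal stubs, not real DPDK definitions. */
#include <assert.h>
#include <stdint.h>

enum rte_comp_capability_type {
	RTE_COMP_CAPA_TYPE_ALGO,
	RTE_COMP_CAPA_TYPE_CHKSUM,
	RTE_COMP_CAPA_TYPE_HASH,
};

enum rte_comp_algorithm { RTE_COMP_ALGO_DEFLATE, RTE_COMP_ALGO_LZS };

struct rte_comp_param_range {
	uint32_t min; /* window size range in bytes */
	uint32_t max;
};

struct rte_comp_algo_capability {
	enum rte_comp_algorithm algo;
	uint64_t comp_feature_flags; /* per-algo feature bitmap */
	struct rte_comp_param_range window_size;
};

struct rte_compressdev_capabilities {
	enum rte_comp_capability_type capa_type;
	union {
		struct rte_comp_algo_capability algo_capa;
		int checksum_capa; /* bitmask of supported checksums */
		int hash_capa;     /* bitmask of supported hashes */
	} u;
};
```

An app iterating the device's capability array would switch on capa_type and read only the matching union member, so adding a new capability type later would not disturb existing entries.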


Thread overview: 4+ messages
2018-02-01 11:07 Shally Verma
2018-03-14 12:41 ` Trahe, Fiona
2018-03-16  6:22   ` Verma, Shally
2018-03-16 14:21     ` Trahe, Fiona [this message]
