DPDK patches and discussions
From: Ahmed Mansour <ahmed.mansour@nxp.com>
To: "Verma, Shally" <Shally.Verma@cavium.com>,
	"Trahe, Fiona" <fiona.trahe@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>, Akhil Goyal <akhil.goyal@nxp.com>
Cc: "Challa, Mahipal" <Mahipal.Challa@cavium.com>,
	"Athreya, Narayana Prasad" <NarayanaPrasad.Athreya@cavium.com>,
	"De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>,
	"Gupta, Ashish" <Ashish.Gupta@cavium.com>,
	"Sahu, Sunila" <Sunila.Sahu@cavium.com>,
	"Jain, Deepak K" <deepak.k.jain@intel.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>,
	"Roy Pledge" <roy.pledge@nxp.com>,
	Youri Querry <youri.querry_1@nxp.com>
Subject: Re: [dpdk-dev] [RFC v3 1/1] lib: add compressdev API
Date: Mon, 29 Jan 2018 17:16:05 +0000	[thread overview]
Message-ID: <AM0PR0402MB3842388CC0EFD167693854B9E1E50@AM0PR0402MB3842.eurprd04.prod.outlook.com> (raw)
In-Reply-To: <CY4PR0701MB3634ECFB14AFF24D7841887CF0E50@CY4PR0701MB3634.namprd07.prod.outlook.com>

On 1/29/2018 7:26 AM, Verma, Shally wrote:
> Hi
>
>> -----Original Message-----
>> From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
>> Sent: 26 January 2018 00:13
>> To: Verma, Shally <Shally.Verma@cavium.com>; Ahmed Mansour
>> <ahmed.mansour@nxp.com>; dev@dpdk.org; Akhil Goyal
>> <akhil.goyal@nxp.com>
>> Cc: Challa, Mahipal <Mahipal.Challa@cavium.com>; Athreya, Narayana
>> Prasad <NarayanaPrasad.Athreya@cavium.com>; De Lara Guarch, Pablo
>> <pablo.de.lara.guarch@intel.com>; Gupta, Ashish
>> <Ashish.Gupta@cavium.com>; Sahu, Sunila <Sunila.Sahu@cavium.com>;
>> Jain, Deepak K <deepak.k.jain@intel.com>; Hemant Agrawal
>> <hemant.agrawal@nxp.com>; Roy Pledge <roy.pledge@nxp.com>; Youri
>> Querry <youri.querry_1@nxp.com>; Trahe, Fiona <fiona.trahe@intel.com>
>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>>
>> Hi Shally, Ahmed,
>>
>>
>>> -----Original Message-----
>>> From: Verma, Shally [mailto:Shally.Verma@cavium.com]
>>> Sent: Thursday, January 25, 2018 10:25 AM
>>> To: Ahmed Mansour <ahmed.mansour@nxp.com>; Trahe, Fiona
>> <fiona.trahe@intel.com>;
>>> dev@dpdk.org; Akhil Goyal <akhil.goyal@nxp.com>
>>> Cc: Challa, Mahipal <Mahipal.Challa@cavium.com>; Athreya, Narayana
>> Prasad
>>> <NarayanaPrasad.Athreya@cavium.com>; De Lara Guarch, Pablo
>> <pablo.de.lara.guarch@intel.com>;
>>> Gupta, Ashish <Ashish.Gupta@cavium.com>; Sahu, Sunila
>> <Sunila.Sahu@cavium.com>; Jain, Deepak K
>>> <deepak.k.jain@intel.com>; Hemant Agrawal
>> <hemant.agrawal@nxp.com>; Roy Pledge
>>> <roy.pledge@nxp.com>; Youri Querry <youri.querry_1@nxp.com>
>>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ahmed Mansour [mailto:ahmed.mansour@nxp.com]
>>>> Sent: 25 January 2018 01:06
>>>> To: Verma, Shally <Shally.Verma@cavium.com>; Trahe, Fiona
>>>> <fiona.trahe@intel.com>; dev@dpdk.org; Akhil Goyal
>>>> <akhil.goyal@nxp.com>
>>>> Cc: Challa, Mahipal <Mahipal.Challa@cavium.com>; Athreya, Narayana
>>>> Prasad <NarayanaPrasad.Athreya@cavium.com>; De Lara Guarch, Pablo
>>>> <pablo.de.lara.guarch@intel.com>; Gupta, Ashish
>>>> <Ashish.Gupta@cavium.com>; Sahu, Sunila <Sunila.Sahu@cavium.com>;
>>>> Jain, Deepak K <deepak.k.jain@intel.com>; Hemant Agrawal
>>>> <hemant.agrawal@nxp.com>; Roy Pledge <roy.pledge@nxp.com>; Youri
>>>> Querry <youri.querry_1@nxp.com>
>>>> Subject: Re: [RFC v3 1/1] lib: add compressdev API
>>>>
>>>> Hi All,
>>>>
>>>> Please see responses in line.
>>>>
>>>> Thanks,
>>>>
>>>> Ahmed
>>>>
>>>> On 1/23/2018 6:58 AM, Verma, Shally wrote:
>>>>> Hi Fiona
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
>>>>>> Sent: 19 January 2018 17:30
>>>>>> To: Verma, Shally <Shally.Verma@cavium.com>; dev@dpdk.org;
>>>>>> akhil.goyal@nxp.com
>>>>>> Cc: Challa, Mahipal <Mahipal.Challa@cavium.com>; Athreya, Narayana
>>>>>> Prasad <NarayanaPrasad.Athreya@cavium.com>; De Lara Guarch,
>> Pablo
>>>>>> <pablo.de.lara.guarch@intel.com>; Gupta, Ashish
>>>>>> <Ashish.Gupta@cavium.com>; Sahu, Sunila
>> <Sunila.Sahu@cavium.com>;
>>>>>> Jain, Deepak K <deepak.k.jain@intel.com>; Hemant Agrawal
>>>>>> <hemant.agrawal@nxp.com>; Roy Pledge <roy.pledge@nxp.com>;
>> Youri
>>>>>> Querry <youri.querry_1@nxp.com>; Ahmed Mansour
>>>>>> <ahmed.mansour@nxp.com>; Trahe, Fiona <fiona.trahe@intel.com>
>>>>>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>>>>>>
>>>>>> Hi Shally,
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Verma, Shally [mailto:Shally.Verma@cavium.com]
>>>>>>> Sent: Thursday, January 18, 2018 12:54 PM
>>>>>>> To: Trahe, Fiona <fiona.trahe@intel.com>; dev@dpdk.org
>>>>>>> Cc: Challa, Mahipal <Mahipal.Challa@cavium.com>; Athreya,
>> Narayana
>>>>>> Prasad
>>>>>>> <NarayanaPrasad.Athreya@cavium.com>; De Lara Guarch, Pablo
>>>>>> <pablo.de.lara.guarch@intel.com>;
>>>>>>> Gupta, Ashish <Ashish.Gupta@cavium.com>; Sahu, Sunila
>>>>>> <Sunila.Sahu@cavium.com>; Jain, Deepak K
>>>>>>> <deepak.k.jain@intel.com>; Hemant Agrawal
>>>>>> <hemant.agrawal@nxp.com>; Roy Pledge
>>>>>>> <roy.pledge@nxp.com>; Youri Querry <youri.querry_1@nxp.com>;
>>>>>> Ahmed Mansour
>>>>>>> <ahmed.mansour@nxp.com>
>>>>>>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>>>>>>>
>>>>>>> Hi Fiona
>>>>>>>
>>>>>>> While revisiting this, we identified a few questions and additions.
>>>>>>> Please see them inline.
>>>>>>>> -----Original Message-----
>>>>>>>> From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
>>>>>>>> Sent: 15 December 2017 23:19
>>>>>>>> To: dev@dpdk.org; Verma, Shally <Shally.Verma@cavium.com>
>>>>>>>> Cc: Challa, Mahipal <Mahipal.Challa@cavium.com>; Athreya,
>> Narayana
>>>>>>>> Prasad <NarayanaPrasad.Athreya@cavium.com>;
>>>>>>>> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com
>>>>>>>> Subject: [RFC v3 1/1] lib: add compressdev API
>>>>>>>>
>>>>>>>> Signed-off-by: Trahe, Fiona <fiona.trahe@intel.com>
>>>>>>>> ---
>>>>>>> //snip
>>>>>>>
>>>>>>>> +
>>>>>>>> +int
>>>>>>>> +rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t
>>>>>>>> queue_pair_id,
>>>>>>>> +		uint32_t max_inflight_ops, int socket_id)
>>>>>>> [Shally] Is max_inflight_ops different from nb_streams_per_qp in
>>>>>>> struct rte_compressdev_info? I assume they both serve the same
>>>>>>> purpose. If so, it would be better to use a single naming convention
>>>>>>> to avoid confusion.
>>>>>> [Fiona] No, I think they have different purposes.
>>>>>> max_inflight_ops should be used to configure the qp with the number
>>>>>> of ops the application expects to be able to submit to the qp before
>>>>>> it needs to poll for a response. It can be configured differently for
>>>>>> each qp. In the QAT case it dictates the depth of the qp created; it
>>>>>> may have different implications on other PMDs.
>>>>>> nb_sessions_per_qp and nb_streams_per_qp are limitations the device
>>>>>> reports and are the same for all qps on the device. QAT doesn't have
>>>>>> those limitations and so would report 0, however I assumed they may be
>>>>>> necessary for other devices.
>>>>>> This assumption is based on the patch submitted by NXP to cryptodev
>>>>>> in Feb 2017:
>>>>>> http://dpdk.org/ml/archives/dev/2017-March/060740.html
>>>>>> I also assume these are not necessarily the max number of sessions in
>>>>>> ops on the qp at a given time, but the total number attached, i.e. if
>>>>>> the device has this limitation then sessions must be attached to qps,
>>>>>> and presumably reserve some resources. Being attached doesn't imply
>>>>>> there is an op on the qp at that time using that session. So it's not
>>>>>> relating to the inflight op count, but to the number of sessions
>>>>>> attached/detached to the qp.
>>>>>> Including Akhil on the To list, maybe NXP can confirm if these params
>>>>>> are needed.
>>>>> [Shally] Ok. Then let's wait for NXP to confirm this requirement, as
>>>>> the spec currently doesn't have any API to attach a queue pair to a
>>>>> specific session or stream, as cryptodev does.
>>>>> But then how could an application know the limit on max_inflight_ops
>>>>> supported on a qp? It can pass any arbitrary number during qp_setup().
>>>>> Do you believe we need to add a capability field in dev_info to
>>>>> indicate the limit on max_inflight_ops?
>>>>> Thanks
>>>>> Shally
>>>> [Ahmed] @Fiona This looks ok. max_inflight_ops makes sense. I
>>>> understand it as a per-qp push-back mechanism. We do not have a
>>>> physical limit on the number of streams or sessions on a qp in our
>>>> hardware, so we would return 0 here as well.
>>>> @Shally in our PMD implementation we do not attach streams or sessions
>>>> to a particular qp. Regarding max_inflight_ops, I think that limit
>>> [Shally] Ok. We too don't have any such limit defined. So, if these are
>>> redundant fields then they can be removed until a requirement is
>>> identified in the context of compressdev.
>> [Fiona] Ok, so it seems we're all agreed to remove max_nb_sessions_per_qp
>> and max_nb_streams_per_qp from rte_compressdev_info.
>> I think we're also agreed to keep max_inflight_ops on the qp_setup.
> [Shally] Yes, fine by me.
[Ahmed] That works.
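For illustration, a minimal sketch of configuring a qp with an
application-chosen max_inflight_ops, based on the qp_setup prototype quoted
above. The header name, qp id, depth and socket id below are placeholders,
not taken from the RFC:

#include <rte_compressdev.h>   /* header name assumed from the RFC */
#include <rte_lcore.h>

/* Sketch only: set up one qp whose depth is chosen by the application.
 * 512 is an arbitrary example value, ideally no larger than any HW limit. */
static int
setup_comp_qp(uint8_t dev_id)
{
	const uint16_t qp_id = 0;
	const uint32_t max_inflight_ops = 512;

	return rte_compressdev_queue_pair_setup(dev_id, qp_id,
						max_inflight_ops,
						rte_socket_id());
}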
>
>> It's not available in the info and, if I understand you both correctly, we
>> don't need to add it there as a hw limitation or capability.
> [Shally] I'm fine with either way. No preference here currently.
[Ahmed] Yes.
>
>> I'd expect the appl to set it to some value which is probably lower than
>> any hardware limitation. The appl may then perform many enqueue_bursts
>> until the qp is full, and if unable to enqueue a burst it should try
>> dequeuing to free up space on the qp for more enqueue_bursts.
> [Shally] The qp does not necessarily have to be full (depending upon the PMD
> implementation though) to run into this condition, especially when, say, the
> HW limit < the application's max_inflight_ops.
> Thus, I would rephrase it as:
> "the application may enqueue bursts up to the limit set up in qp_setup(), and
> if enqueue_burst() returns a number < the total nb_ops, then wait on dequeue
> to free up space".
[Ahmed] Agreed. The hard limit is left to the implementation.
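To make that pattern concrete, a rough sketch of the enqueue/dequeue loop
described above. The burst function and op struct names are assumptions that
follow the RFC's naming style, and error handling is elided:

/* Needs rte_compressdev.h, as in the earlier sketch. */
static void
submit_all(uint8_t dev_id, uint16_t qp_id,
	   struct rte_comp_op **ops, uint16_t nb_ops,
	   struct rte_comp_op **resp, uint16_t resp_sz)
{
	uint16_t enq = 0;

	while (enq < nb_ops) {
		enq += rte_compressdev_enqueue_burst(dev_id, qp_id,
						     &ops[enq], nb_ops - enq);
		if (enq < nb_ops) {
			/* qp full or device pushed back: drain completed ops
			 * to free space before retrying the remainder. */
			uint16_t n = rte_compressdev_dequeue_burst(dev_id,
						qp_id, resp, resp_sz);
			/* ... process the 'n' completed ops here ... */
			(void)n;
		}
	}
}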
>
>> I think the value it's set to can give the application some influence over
>> latency vs throughput.
>> E.g. if it's set to a very large number then it allows the PMD to stockpile
>> requests, which can result in longer latency, but optimal throughput as it
>> is easier to keep the engines supplied with requests. If set very small,
>> latency may be short, as requests get to engines sooner, but there's a risk
>> of the engines running out of requests if the PMD manages to process
>> everything before the application tops up the qp.
> [Shally] I concur with you.
[Ahmed] Makes sense.
>
>>>
>>>> should be independent of hardware. Not all enqueues must succeed. The
>>>> hardware can push back against the enqueuer dynamically if the resources
>>>> needed to accommodate additional ops are not available yet. This push-back
>>>> happens in software if the user sets a max_inflight_ops that is less than
>>>> the hardware max_inflight_ops. The same return pathway can be exercised
>>>> if the user actually attempts to enqueue more than the max_inflight_ops
>>>> supported by the hardware.
>>> [Shally] Ok. This sounds fine to me. As you mentioned, we can let the
>>> application set up a queue pair with any max_inflight_ops and, during
>>> enqueue_burst(), leave it to the hardware to consume as much as it can,
>>> subject to the limit set in qp_setup().
>>> So this doesn't seem to be a hard requirement for dev_info to expose. The
>>> only knock-on effect I see is that the same testcase can then behave
>>> differently with different PMDs, as each PMD may have a different level of
>>> support for the same max_inflight_ops in their qp_setup().
>
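As a footnote on the software-side limit discussed above, here is one way a
PMD might apply the qp_setup() depth on top of whatever the hardware itself
accepts. This is purely illustrative; the types and helper below are
hypothetical and no particular PMD is implied:

#include <stdint.h>

struct rte_comp_op;

/* Hypothetical PMD-internal state, for illustration only. */
struct pmd_qp {
	uint32_t max_inflight_ops;  /* from rte_compressdev_queue_pair_setup() */
	uint32_t nb_inflight;       /* ops enqueued but not yet dequeued */
	void *ring;                 /* device ring handle */
};

/* Hypothetical device-ring helper; returns how many ops it accepted. */
extern uint16_t hw_ring_enqueue(void *ring, struct rte_comp_op **ops,
				uint16_t nb_ops);

static uint16_t
pmd_enqueue_burst(struct pmd_qp *qp, struct rte_comp_op **ops,
		  uint16_t nb_ops)
{
	/* Software push-back: never exceed the depth the application asked
	 * for at qp setup time. */
	uint32_t room = qp->max_inflight_ops - qp->nb_inflight;
	uint16_t taken;

	if (nb_ops > room)
		nb_ops = (uint16_t)room;

	/* The hardware may accept fewer still; report what was actually
	 * taken so the caller can retry the rest after dequeuing. */
	taken = hw_ring_enqueue(qp->ring, ops, nb_ops);
	qp->nb_inflight += taken;

	return taken;
}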

