From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Trahe, Fiona" <fiona.trahe@intel.com>
To: "Verma, Shally" <Shally.Verma@cavium.com>, Ahmed Mansour
	<ahmed.mansour@nxp.com>, "dev@dpdk.org", Akhil Goyal
Cc: "Challa, Mahipal", "Athreya, Narayana Prasad", "De Lara Guarch,
	Pablo", "Gupta, Ashish", "Sahu, Sunila", "Jain, Deepak K",
	Hemant Agrawal, Roy Pledge, Youri Querry, "Trahe, Fiona"
Date: Thu, 25 Jan 2018 18:43:01 +0000
Message-ID: <348A99DA5F5B7549AA880327E580B435893011B0@IRSMSX101.ger.corp.intel.com>
References: <1511542566-10455-1-git-send-email-fiona.trahe@intel.com>
	<1513360153-15036-1-git-send-email-fiona.trahe@intel.com>
	<348A99DA5F5B7549AA880327E580B435892FCF50@IRSMSX101.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [RFC v3 1/1] lib: add compressdev API
List-Id: DPDK patches and discussions <dev@dpdk.org>

Hi Shally, Ahmed,

> -----Original Message-----
> From: Verma, Shally [mailto:Shally.Verma@cavium.com]
> Sent: Thursday, January 25, 2018 10:25 AM
> To: Ahmed Mansour; Trahe, Fiona; dev@dpdk.org; Akhil Goyal
> Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch, Pablo;
> Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal;
> Roy Pledge; Youri Querry
> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>
> > -----Original Message-----
> > From: Ahmed Mansour [mailto:ahmed.mansour@nxp.com]
> > Sent: 25 January 2018 01:06
> > To: Verma, Shally; Trahe, Fiona; dev@dpdk.org; Akhil Goyal
> > Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch, Pablo;
> > Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal;
> > Roy Pledge; Youri Querry
> > Subject: Re: [RFC v3 1/1] lib: add compressdev API
> >
> > Hi All,
> >
> > Please see responses in line.
> >
> > Thanks,
> >
> > Ahmed
> >
> > On 1/23/2018 6:58 AM, Verma, Shally wrote:
> > > Hi Fiona
> > >
> > >> -----Original Message-----
> > >> From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
> > >> Sent: 19 January 2018 17:30
> > >> To: Verma, Shally; dev@dpdk.org; akhil.goyal@nxp.com
> > >> Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch,
> > >> Pablo; Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal;
> > >> Roy Pledge; Youri Querry; Ahmed Mansour; Trahe, Fiona
> > >> Subject: RE: [RFC v3 1/1] lib: add compressdev API
> > >>
> > >> Hi Shally,
> > >>
> > >>> -----Original Message-----
> > >>> From: Verma, Shally [mailto:Shally.Verma@cavium.com]
> > >>> Sent: Thursday, January 18, 2018 12:54 PM
> > >>> To: Trahe, Fiona; dev@dpdk.org
> > >>> Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch,
> > >>> Pablo; Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal;
> > >>> Roy Pledge; Youri Querry; Ahmed Mansour
> > >>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
> > >>>
> > >>> Hi Fiona
> > >>>
> > >>> While revisiting this, we identified a few questions and additions.
> > >>> Please see them inline.
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
> > >>>> Sent: 15 December 2017 23:19
> > >>>> To: dev@dpdk.org; Verma, Shally
> > >>>> Cc: Challa, Mahipal; Athreya, Narayana Prasad;
> > >>>> pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com
> > >>>> Subject: [RFC v3 1/1] lib: add compressdev API
> > >>>>
> > >>>> Signed-off-by: Trahe, Fiona
> > >>>> ---
> > >>> //snip
> > >>>
> > >>>> +
> > >>>> +int
> > >>>> +rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t
> > >>>> queue_pair_id,
> > >>>> +		uint32_t max_inflight_ops, int socket_id)
> > >>> [Shally] Is max_inflight_ops different from nb_streams_per_qp in
> > >>> struct rte_compressdev_info? I assume they both serve the same
> > >>> purpose. If so, it would be better to use a single naming
> > >>> convention to avoid confusion.
> > >> [Fiona] No, I think they have different purposes.
> > >> max_inflight_ops should be used to configure the qp with the number
> > >> of ops the application expects to be able to submit to the qp before
> > >> it needs to poll for a response. It can be configured differently
> > >> for each qp. In the QAT case it dictates the depth of the qp
> > >> created; it may have different implications on other PMDs.
> > >> nb_sessions_per_qp and nb_streams_per_qp are limitations the device
> > >> reports and are the same for all qps on the device. QAT doesn't have
> > >> those limitations and so would report 0, however I assumed they may
> > >> be necessary for other devices.
> > >> This assumption is based on the patch submitted by NXP to cryptodev
> > >> in Feb 2017
> > >> http://dpdk.org/ml/archives/dev/2017-March/060740.html
> > >> I also assume these are not necessarily the max number of sessions
> > >> in ops on the qp at a given time, but the total number attached,
> > >> i.e. if the device has this limitation then sessions must be
> > >> attached to qps, and presumably reserve some resources. Being
> > >> attached doesn't imply there is an op on the qp at that time using
> > >> that session. So it's not related to the inflight op count, but to
> > >> the number of sessions attached/detached to the qp.
> > >> Including Akhil on the To list, maybe NXP can confirm if these
> > >> params are needed.
> > > [Shally] Ok. Then let's wait for NXP to confirm this requirement, as
> > > currently the spec doesn't have any API to attach a queue pair to a
> > > specific session or stream as cryptodev does.
> > >
> > > But then how could the application know the limit on max_inflight_ops
> > > supported on a qp? As it can pass any random number during qp_setup().
> > > Do you believe we need to add a capability field in dev_info to
> > > indicate the limit on max_inflight_ops?
> > >
> > > Thanks
> > > Shally
> > [Ahmed] @Fiona This looks ok. max_inflight_ops makes sense. I
> > understand it as a push back mechanism per qp. We do not have a
> > physical limit for the number of streams or sessions on a qp in our
> > hardware, so we would return 0 here as well.
> > @Shally in our PMD implementation we do not attach streams or sessions
> > to a particular qp. Regarding max_inflight_ops, I think that limit
>
> [Shally] Ok.
We too don't have any such limit defined. So, if these are redundant
> fields then they can be removed until a requirement is identified in the
> context of compressdev.

[Fiona] Ok, so it seems we're all agreed to remove max_nb_sessions_per_qp
and max_nb_streams_per_qp from rte_compressdev_info.
I think we're also agreed to keep max_inflight_ops on the qp_setup.
It's not available on the info and, if I understand you both correctly, we
don't need to add it there as a hw limitation or capability. I'd expect
the appl to set it to some value which is probably lower than any hardware
limitation. The appl may then perform many enqueue_bursts until the qp is
full, and if unable to enqueue a burst should try dequeuing to free up
space on the qp for more enqueue_bursts.
I think the value it's set to can give the application some influence over
latency vs throughput.
E.g. if it's set to a very large number then it allows the PMD to
stockpile requests, which can result in longer latency, but optimal
throughput, as it's easier to keep the engines supplied with requests. If
set very small, latency may be short, as requests get to the engines
sooner, but there's a risk of the engines running out of requests if the
PMD manages to process everything before the application tops up the qp.

>
>
> > should be independent of hardware. Not all enqueues must succeed. The
> > hardware can push back against the enqueuer dynamically if the
> > resources needed to accommodate additional ops are not available yet.
> > This push back happens in the software if the user sets a
> > max_inflight_ops that is less than the hardware max_inflight_ops. The
> > same return pathway can be exercised if the user actually attempts to
> > enqueue more than the max_inflight_ops supported by the hardware.
>
> [Shally] Ok. This sounds fine to me.
> As you mentioned, we can let the application set up a queue pair with
> any max_inflight_ops and, during enqueue_burst(), leave it to the
> hardware to consume as much as it can, subject to the limit set in
> qp_setup().
> So, this doesn't seem to be a hard requirement on dev_info to expose.
> The only knock-on effect I see is that the same testcase can then behave
> differently with different PMDs, as each PMD may have a different
> support level for the same max_inflight_ops in their qp_setup().
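
For reference, the enqueue-until-full / dequeue-to-free-space flow and the
per-qp push-back semantics discussed above can be sketched as a toy
software model. This is not the compressdev API: struct qp, qp_setup(),
qp_enqueue_burst() and qp_dequeue_burst() are hypothetical stand-ins (the
RFC only defines rte_compressdev_queue_pair_setup() with a
max_inflight_ops argument; burst function names may end up different).
The model only shows the contract under discussion: a burst enqueue may
accept fewer ops than offered once max_inflight_ops is reached, and the
application frees space by dequeuing.

```c
/* Toy model of a compressdev queue pair's push-back behaviour.
 * All names are illustrative, not part of the proposed API. */
#include <stdint.h>

struct qp {
	uint32_t max_inflight_ops; /* depth chosen at qp_setup() time */
	uint32_t inflight;         /* ops submitted but not yet dequeued */
};

/* Models the max_inflight_ops parameter of
 * rte_compressdev_queue_pair_setup(): the application picks the depth. */
static void qp_setup(struct qp *qp, uint32_t max_inflight_ops)
{
	qp->max_inflight_ops = max_inflight_ops;
	qp->inflight = 0;
}

/* Enqueue up to nb_ops; returns how many were accepted. A return value
 * smaller than nb_ops is the push-back: the qp is full, so the caller
 * should dequeue to free space and then retry the remainder. */
static uint16_t qp_enqueue_burst(struct qp *qp, uint16_t nb_ops)
{
	uint32_t room = qp->max_inflight_ops - qp->inflight;
	uint16_t taken = nb_ops < room ? nb_ops : (uint16_t)room;

	qp->inflight += taken;
	return taken;
}

/* Dequeue up to nb_ops completed ops, freeing slots for more enqueues.
 * (Here every inflight op is treated as already completed.) */
static uint16_t qp_dequeue_burst(struct qp *qp, uint16_t nb_ops)
{
	uint16_t done = nb_ops < qp->inflight ? nb_ops : (uint16_t)qp->inflight;

	qp->inflight -= done;
	return done;
}
```

In this model an application loop would enqueue bursts until the return
value falls short of the burst size, then dequeue before retrying, which
is exactly the behaviour described above; a PMD whose hardware depth
exceeds max_inflight_ops would apply the same push-back in software.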