From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Verma, Shally"
To: "Trahe, Fiona"; Ahmed Mansour; "dev@dpdk.org"; Akhil Goyal
CC: "Challa, Mahipal"; "Athreya, Narayana Prasad"; "De Lara Guarch, Pablo"; "Gupta, Ashish"; "Sahu, Sunila"; "Jain, Deepak K"; Hemant Agrawal; Roy Pledge; Youri Querry
Thread-Topic: [RFC v3 1/1] lib: add compressdev API
Date: Mon, 29 Jan 2018 12:26:28 +0000
References: <1511542566-10455-1-git-send-email-fiona.trahe@intel.com> <1513360153-15036-1-git-send-email-fiona.trahe@intel.com> <348A99DA5F5B7549AA880327E580B435892FCF50@IRSMSX101.ger.corp.intel.com> <348A99DA5F5B7549AA880327E580B435893011B0@IRSMSX101.ger.corp.intel.com>
In-Reply-To: <348A99DA5F5B7549AA880327E580B435893011B0@IRSMSX101.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [RFC v3 1/1] lib: add compressdev API
List-Id: DPDK patches and discussions

Hi

> -----Original Message-----
> From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
> Sent: 26 January 2018 00:13
> To: Verma, Shally; Ahmed Mansour; dev@dpdk.org; Akhil Goyal
> Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch, Pablo; Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal; Roy Pledge; Youri Querry; Trahe, Fiona
> Subject: RE: [RFC v3 1/1] lib: add compressdev API
>
> Hi Shally, Ahmed,
>
> > -----Original Message-----
> > From: Verma, Shally [mailto:Shally.Verma@cavium.com]
> > Sent: Thursday, January 25, 2018 10:25 AM
> > To: Ahmed Mansour; Trahe, Fiona; dev@dpdk.org; Akhil Goyal
> > Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch, Pablo; Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal; Roy Pledge; Youri Querry
> > Subject: RE: [RFC v3 1/1] lib: add compressdev API
> >
> > > -----Original Message-----
> > > From: Ahmed Mansour [mailto:ahmed.mansour@nxp.com]
> > > Sent: 25 January 2018 01:06
> > > To: Verma, Shally; Trahe, Fiona; dev@dpdk.org; Akhil Goyal
> > > Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch, Pablo; Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal; Roy Pledge; Youri Querry
> > > Subject: Re: [RFC v3 1/1] lib: add compressdev API
> > >
> > > Hi All,
> > >
> > > Please see responses in line.
> > >
> > > Thanks,
> > >
> > > Ahmed
> > >
> > > On 1/23/2018 6:58 AM, Verma, Shally wrote:
> > > > Hi Fiona
> > > >
> > > >> -----Original Message-----
> > > >> From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
> > > >> Sent: 19 January 2018 17:30
> > > >> To: Verma, Shally; dev@dpdk.org; akhil.goyal@nxp.com
> > > >> Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch, Pablo; Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal; Roy Pledge; Youri Querry; Ahmed Mansour; Trahe, Fiona
> > > >> Subject: RE: [RFC v3 1/1] lib: add compressdev API
> > > >>
> > > >> Hi Shally,
> > > >>
> > > >>> -----Original Message-----
> > > >>> From: Verma, Shally [mailto:Shally.Verma@cavium.com]
> > > >>> Sent: Thursday, January 18, 2018 12:54 PM
> > > >>> To: Trahe, Fiona; dev@dpdk.org
> > > >>> Cc: Challa, Mahipal; Athreya, Narayana Prasad; De Lara Guarch, Pablo; Gupta, Ashish; Sahu, Sunila; Jain, Deepak K; Hemant Agrawal; Roy Pledge; Youri Querry; Ahmed Mansour
> > > >>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
> > > >>>
> > > >>> Hi Fiona
> > > >>>
> > > >>> While revisiting this, we identified a few questions and additions. Please see them inline.
> > > >>>
> > > >>>> -----Original Message-----
> > > >>>> From: Trahe, Fiona [mailto:fiona.trahe@intel.com]
> > > >>>> Sent: 15 December 2017 23:19
> > > >>>> To: dev@dpdk.org; Verma, Shally
> > > >>>> Cc: Challa, Mahipal; Athreya, Narayana Prasad; pablo.de.lara.guarch@intel.com; fiona.trahe@intel.com
> > > >>>> Subject: [RFC v3 1/1] lib: add compressdev API
> > > >>>>
> > > >>>> Signed-off-by: Trahe, Fiona
> > > >>>> ---
> > > >>> //snip
> > > >>>
> > > >>>> +
> > > >>>> +int
> > > >>>> +rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
> > > >>>> +		uint32_t max_inflight_ops, int socket_id)
> > > >>> [Shally] Is max_inflight_ops different from nb_streams_per_qp in struct rte_compressdev_info?
> > > >>> I assume they both serve the same purpose. If yes, then it would be better to use a single naming convention to avoid confusion.
> > > >> [Fiona] No, I think they have different purposes.
> > > >> max_inflight_ops should be used to configure the qp with the number of ops the application expects to be able to submit to the qp before it needs to poll for a response. It can be configured differently for each qp. In the QAT case it dictates the depth of the qp created; it may have different implications on other PMDs.
> > > >> nb_sessions_per_qp and nb_streams_per_qp are limitations the device reports and are the same for all qps on the device. QAT doesn't have those limitations and so would report 0, however I assumed they may be necessary for other devices.
> > > >> This assumption is based on the patch submitted by NXP to cryptodev in Feb 2017:
> > > >> http://dpdk.org/ml/archives/dev/2017-March/060740.html
> > > >> I also assume these are not necessarily the max number of sessions in use by ops on the qp at a given time, but the total number attached, i.e. if the device has this limitation then sessions must be attached to qps, and presumably reserve some resources. Being attached doesn't imply there is an op on the qp at that time using that session. So it's not related to the inflight op count, but to the number of sessions attached/detached to the qp.
> > > >> Including Akhil on the To list, maybe NXP can confirm if these params are needed.
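
(For illustration, a minimal sketch of how an application might size a qp under the signature proposed above; the device id, qp id and depth are placeholder values, and error handling is kept minimal:)

    uint8_t dev_id = 0;              /* placeholder device id */
    uint16_t qp_id = 0;              /* placeholder queue pair id */
    uint32_t max_inflight_ops = 512; /* app's chosen qp depth */

    /* allow up to 512 ops outstanding on qp 0 before the
     * application must poll for responses */
    if (rte_compressdev_queue_pair_setup(dev_id, qp_id,
            max_inflight_ops, rte_socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "qp setup failed\n");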
> > > >> This assumption is based on the patch submitted by NXP to cryptode= v > in > > > Feb > > > >> 2017 > > > >> > > > > https://emea01.safelinks.protection.outlook.com/?url=3Dhttp%3A%2F%2Fdpd > > > k.org%2Fml%2Farchives%2Fdev%2F2017- > > > > March%2F060740.html&data=3D02%7C01%7Cahmed.mansour%40nxp.com%7C > > > > b012d74d7530493b155108d56258955f%7C686ea1d3bc2b4c6fa92cd99c5c30163 > > > > 5%7C0%7C0%7C636523054981379413&sdata=3D2SazlEazMxcBGS7R58CpNrX0G5 > > > OeWx8PLMwf%2FYzqv34%3D&reserved=3D0 > > > >> I also assume these are not necessarily the max number of sessions= in > ops > > > on > > > >> the qp at a given time, but the total number attached, i.e. if the= device > > > has > > > >> this limitation then sessions must be attached to qps, and presuma= bly > > > >> reserve some resources. Being attached doesn't imply there is an o= p > on > > > the > > > >> qp at that time using that session. So it's not to relating to the= inflight > op > > > >> count, but to the number of sessions attached/detached to the qp. > > > >> Including Akhil on the To list, maybe NXP can confirm if these par= ams > are > > > >> needed. > > > > [Shally] Ok. Then let's wait for NXP to confirm on this requirement= as > > > currently spec doesn't have any API to attach > > > queue_pair_to_specific_session_or_stream as cryptodev. > > > > > > > > But then how application could know limit on max_inflight_ops > supported > > > on a qp? As it can pass any random number during qp_setup(). > > > > Do you believe we need to add a capability field in dev_info to ind= icate > limit > > > on max_inflight_ops? > > > > > > > > Thanks > > > > Shally > > > [Ahmed] @Fiona This looks ok. max_inflight_ops makes sense. I > understand > > > it as a push back mechanism per qp. We do not have physical limit for > > > number of streams or sessions on a qp in our hardware, so we would > > > return 0 here as well. > > > @Shally in our PMD implementation we do not attach streams or session= s > > > to a particular qp. Regarding max_inflight_ops. I think that limit > > > > [Shally] Ok. We too don't have any such limit defined. So, if these are > redundant fields then can be > > removed until requirement is identified in context of compressdev. > [Fiona] Ok, so it seems we're all agreed to remove max_nb_sessions_per_qp > and > max_nb_streams_per_qp from rte_compressdev_info. > I think we're also agreed to keep max_inflight_ops on the qp_setup. [Shally] yes, by me. > It's not available on the info and if I understand you both correctly we = don't > need to add it there as a hw limitation or capability.=20 [Shally] I'm fine with either ways. No preferences here currently. > I'd expect the appl to set it to > some value which is probably lower than any hardware limitation. The appl > may then > perform many enqueue_bursts until the qp is full and if unable to enqueue= a > burst > should try dequeueing to free up space on the qp for more enqueue_bursts. [Shally] qp not necessarily has to be full (depending upon PMD implementati= on though) to run into this condition, especially when, say, Hw limit < app= lication max_inflight_ops.=20 Thus, would rephrase it as: "application may enqueue bursts up to limit setup in qp_setup and if enqueu= e_burst() returns with number < total nb_ops , then wait on dequeue to free= -up space". > I think the value it's set to can give the application some influence ove= r > latency vs throughput. > E.g. 
> I think the value it's set to can give the application some influence over latency vs throughput.
> E.g. if it's set to a very large number then it allows the PMD to stockpile requests, which can result in longer latency but optimal throughput, as it's easier to keep the engines supplied with requests. If set very small, latency may be short, as requests get to the engines sooner, but there's a risk of the engines running out of requests if the PMD manages to process everything before the application tops up the qp.

[Shally] I concur with you.

> > > should be independent of hardware. Not all enqueues must succeed. The hardware can push back against the enqueuer dynamically if the resources needed to accommodate additional ops are not available yet. This push-back happens in software if the user sets a max_inflight_ops that is less than the hardware max_inflight_ops. The same return pathway can be exercised if the user actually attempts to enqueue more than the max_inflight_ops supported by the hardware.
> >
> > [Shally] Ok. This sounds fine to me. As you mentioned, we can let an application set up a queue pair with any max_inflight_ops and, during enqueue_burst(), leave it to the hardware to consume as much as it can, subject to the limit set in qp_setup().
> > So, this doesn't seem to be a hard requirement for dev_info to expose. The only knock-on effect I see is that the same testcase can then behave differently with different PMDs, as each PMD may have a different support level for the same max_inflight_ops in their qp_setup().
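
(To illustrate that knock-on effect: a PMD is free to interpret the requested value against its own hardware, e.g. a hypothetical driver might clamp it to its ring depth. This is invented sketch code with made-up names, not taken from any existing PMD:)

    /* hypothetical PMD-internal qp setup: honour the app's request
     * only up to the device's ring depth */
    static int
    pmd_qp_setup(struct pmd_qp *qp, uint32_t max_inflight_ops)
    {
        const uint32_t hw_ring_depth = 1024; /* hypothetical hw limit */

        qp->depth = RTE_MIN(max_inflight_ops, hw_ring_depth);
        /* this PMD's enqueue_burst() would then accept at most
         * qp->depth outstanding ops, so the same max_inflight_ops
         * can yield a different effective limit on another PMD */
        return 0;
    }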