From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Trahe, Fiona"
To: Ahmed Mansour, "Verma, Shally", "dev@dpdk.org"
CC: "Athreya, Narayana Prasad", "Gupta, Ashish", "Sahu, Sunila", "De Lara Guarch, Pablo", "Challa, Mahipal", "Jain, Deepak K", Hemant Agrawal, Roy Pledge, Youri Querry
Thread-Topic: [RFC v2] doc compression API for DPDK
Date: Fri, 16 Feb 2018 11:11:57 +0000
Message-ID: <348A99DA5F5B7549AA880327E580B4358932119C@IRSMSX101.ger.corp.intel.com>
References: <348A99DA5F5B7549AA880327E580B435892F589D@IRSMSX101.ger.corp.intel.com> <348A99DA5F5B7549AA880327E580B43589315232@IRSMSX101.ger.corp.intel.com> <348A99DA5F5B7549AA880327E580B43589315AF3@IRSMSX101.ger.corp.intel.com> <348A99DA5F5B7549AA880327E580B4358931F4E3@IRSMSX101.ger.corp.intel.com>
Subject: Re: [dpdk-dev] [RFC v2] doc compression API for DPDK
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Ahmed Mansour [mailto:ahmed.mansour@nxp.com]
> Sent: Thursday, February 15, 2018 7:51 PM
> To: Trahe, Fiona; Verma, Shally; dev@dpdk.org
> Cc: Athreya, Narayana Prasad; Gupta, Ashish; Sahu, Sunila; De Lara Guarch, Pablo; Challa, Mahipal; Jain, Deepak K; Hemant Agrawal; Roy Pledge; Youri Querry
> Subject: Re: [RFC v2] doc compression API for DPDK
>
> /// snip ///
> >>>>>
> >>>>>>>> [Fiona] I propose if the BFINAL bit is detected before end of input
> >>>>>>>> the decompression should stop. In this case consumed will be < src.length,
> >>>>>>>> produced will be < dst buffer size. Do we need an extra STATUS response?
> >>>>>>>> STATUS_BFINAL_DETECTED ?
> >>>>>>> [Shally] @fiona, I assume you mean here the decompressor stops after processing the final block, right?
> >>>>>> [Fiona] Yes.
> >>>>>>
> >>>>>> And if yes,
> >>>>>>> and if it can process that final block successfully/unsuccessfully, then status could simply be
> >>>>>>> SUCCESS/FAILED.
> >>>>>>> I don't see the need of a specific return code for this use case. Just to share, in the past we have
> >>>>>>> practically run into such cases with the boost lib, and the decompressor has simply worked this way.
> >>>>>> [Fiona] I'm ok with this.
> >>>>>>
> >>>>>>>> The only thing I don't like about this is it can impact performance, as normally
> >>>>>>>> we can just look for STATUS == SUCCESS. Anything else should be an exception.
> >>>>>>>> Now the application would have to check for SUCCESS || BFINAL_DETECTED every time.
> >>>>>>>> Do you have a suggestion on how we should handle this?
> >>>>>>>>
> >>>>> [Ahmed] This makes sense. So in all cases the PMD should assume that it
> >>>>> should stop as soon as a BFINAL is observed.
> >>>>>
> >>>>> A question. What happens in stateful vs stateless modes when
> >>>>> decompressing an op that encompasses multiple BFINALs? I assume the
> >>>>> caller in that case will use the consumed = x bytes to find out how far
> >>>>> into the input the end of the first stream is, and start from the next
> >>>>> byte. Is this correct?
> >>>> [Shally] As per my understanding, each op can be tied to only one stream, as we have only one
> >>>> stream pointer per op, and one stream can have only one BFINAL (as a stream is one complete
> >>>> compressed data), but it looks like you're suggesting a case where one op
> >>>> can carry multiple independent streams, and thus multiple BFINALs?! Such as below, where the op
> >>>> points to more than one stream:
> >>>>
> >>>> --------------------------------------------
> >>>> op --> |stream1|stream2| |stream3|
> >>>> --------------------------------------------
> >>>>
> >>>> Could you confirm if I understand your question correctly?
> >>> [Ahmed] Correct. We found that in some storage applications the user
> >>> does not know where exactly the BFINAL is. They rely on zlib software
> >>> today. zlib.net software halts at the first BFINAL.
> >>> Users put multiple
> >>> streams in one op and rely on zlib to stop and inform them of the end
> >>> location of the first stream.
> >> [Shally] Then this is practically a possible case on the decompressor, and the decompressor doesn't regard the flush
> >> flag. So in that case, I expect the PMD to internally reset itself (say, in the case of zlib, going through a cycle
> >> of deflateEnd and deflateInit, or deflateReset) and return with status = SUCCESS with updated produced
> >> and consumed. Now in such a case, if the previous stream also has some footer followed by the start of the next
> >> stream, then I am not sure how the PMD / lib can support that case. Have you practically run such a
> >> use-case on zlib? If yes, how does such an application handle it in your experience?
> >> I can imagine for such input zlib would return with Z_FLUSH_END to the user after the 1st BFINAL is processed.
> >> Then the application does a deflateReset() or Init-End() cycle before starting with the next. But if it starts
> >> with input that doesn't have a valid zlib header, then likely it will throw an error.
> >>
> > [Fiona] The consumed and produced tell the application how much data was processed up to
> > the end of the first deflate block encountered with a bfinal set.
> > If there is data, e.g. a footer, after the block with bfinal, then I think it must be the responsibility of
> > the application to know this; the PMD can't have any responsibility for it.
> > The next op sent to the PMD must start with a valid deflate block.
> [Ahmed] Agreed. This is exactly what I expected. In our case we support
> gzip and zlib header/footer processing, but that does not fundamentally
> change the setup. The user may have other metadata after the footer
> which the PMD is not responsible for. The PMD should stop processing
> depending on the mode. In raw DEFLATE, it should stop immediately. In
> other modes it should stop after the footer. We also have a mode in our
> PMD to simply continue decompression.
> In that case there cannot be a
> header/footer between streams in raw DEFLATE. That mode could be enabled
> in the future, perhaps at the session level, with a session parameter at
> setup time. We call it "member continue". In this mode the PMD plows
> through as much of the op as possible. If it hits incorrectly set up data,
> then it returns what it did decompress successfully, plus the error code
> from decompressing the data afterwards.
[Fiona] Yes, these would be interesting capabilities which could be
added to the API in future releases.

> >
> >
> >>>> Thanks
> >>>> Shally
> >>>>
> >
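For illustration, the behaviour discussed in this thread (the decompressor halting at the first BFINAL, reporting consumed bytes, and the caller restarting from the next byte to recover subsequent streams) can be reproduced with Python's stdlib zlib. This is only a sketch against zlib, not the proposed DPDK compressdev API; the helper names below are invented for the example:

```python
import zlib

def raw_deflate(payload):
    # Produce one raw DEFLATE stream (wbits=-15: no zlib header/footer).
    c = zlib.compressobj(wbits=-15)
    return c.compress(payload) + c.flush()

def split_concatenated_streams(data):
    """Decompress back-to-back raw DEFLATE streams one at a time.

    Mirrors the semantics debated above: the decompressor halts at each
    BFINAL block; the bytes past that point (zlib's unused_data, i.e.
    everything after "consumed") are fed back in as the next stream.
    """
    out = []
    remaining = data
    while remaining:
        d = zlib.decompressobj(wbits=-15)
        chunk = d.decompress(remaining)
        if not d.eof:
            # Truncated trailing stream: stop rather than guess.
            break
        out.append(chunk)
        remaining = d.unused_data  # data after the first BFINAL block
    return out

if __name__ == "__main__":
    blob = raw_deflate(b"stream one") + raw_deflate(b"stream two")
    print(split_concatenated_streams(blob))  # [b'stream one', b'stream two']
```

Here `unused_data` plays the role of `src.length - consumed`; a DPDK application would instead resubmit an op whose source buffer starts at the consumed offset.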