From: Akhil Goyal
To: "Ananyev, Konstantin", "dev@dpdk.org", "De Lara Guarch, Pablo", "Thomas Monjalon", "Zhang, Roy Fan", "Doherty, Declan"
CC: Anoob Joseph
Date: Fri, 11 Oct 2019 13:23:41 +0000
Subject: Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API

Hi Konstantin,

>
> Hi Akhil,
>
..[snip]
> > > > > > OK let us assume that you have a separate structure. But I have a few queries:
> > > > > > 1. how can multiple drivers use the same session?
> > > > >
> > > > > As a short answer: they can't.
> > > > > It is pretty much the same approach as with rte_security - each device needs to
> > > > > create/init its own session.
> > > > > So the upper layer would need to maintain its own array (or so) for such a case.
> > > > > Though the question is why would you like to have the same session over
> > > > > multiple SW backed devices?
> > > > > As it would be anyway just a synchronous function call that will be executed
> > > > > on the same cpu.
> > > >
> > > > I may have a single FAT tunnel which may be distributed over multiple
> > > > cores, and each core is affined to a different SW device.
> > >
> > > If it is pure SW, then we don't need multiple devices for such a scenario.
> > > Device in that case is a pure abstraction that we can skip.
> >
> > Yes agreed, but that liberty is given to the application: whether it needs multiple
> > devices with a single queue or a single device with multiple queues.
> > I think that independence should not be broken in this new API.
> > >
> > > > So a single session may be accessed by multiple devices.
> > > >
> > > > One more example would be that, depending on packet sizes, I may switch between
> > > > HW/SW PMDs with the same session.
> > >
> > > Sure, but then we'll have multiple sessions.
> >
> > No, the session will be the same and it will have multiple private data, one for each
> > of the PMDs.
> >
> > > BTW, we have the same thing now - these private session pointers are just stored
> > > inside the same rte_crypto_sym_session.
> > > And if the user wants to support this model, he would also need to store a
> > > <dev_id, queue_id> pair for each HW device anyway.
> >
> > Yes agreed, but how is that happening in your new struct? You cannot
> > support that.
>
> User can store all this info in his own struct.
> That's exactly what we have right now.
> Let's say ipsec-secgw has to store for each IPsec SA:
> a pointer to the crypto session and/or a pointer to the security session,
> plus (for lookaside devices) cdev_id_qp that allows it to extract
> dev_id + queue_id information.
> As I understand that works for now, as each ipsec_sa uses only one
> dev+queue. Though if someone would like to use multiple devices/queues
> for the same SA - he would need to have an array of these pairs.
> So even right now rte_cryptodev_sym_session is not self-consistent and
> requires extra information to be maintained by the user.

Why are you increasing the complexity for the user application?
The new APIs and structs should be such that the stack needs only minimal changes,
so that the stack stays portable across multiple vendors.
You should try to hide as much complexity as possible in the driver or lib and give
the user simple APIs.

Having the same session for multiple devices was added by Intel only for some use cases.
And we had split that session create API into 2. Now if those are not useful, shall we
move back to the single API? I think @Doherty, Declan and @De Lara Guarch, Pablo can
comment on this.
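For reference, the kind of per-SA bookkeeping described above - a session pointer plus
the dev_id/queue_id it was created on, or an array of such pairs when one SA is spread
over several devices/queues - might look roughly like the sketch below. The struct and
field names are illustrative only, not the actual ipsec-secgw definitions.

#include <stdint.h>
#include <rte_cryptodev.h>
#include <rte_security.h>

/* Hypothetical per-SA state kept by the application. */
struct app_dev_qp {
        uint8_t  dev_id;   /* crypto device the session was created on */
        uint16_t qp_id;    /* queue pair used on that device */
};

struct app_ipsec_sa {
        struct rte_cryptodev_sym_session *crypto_ses; /* lookaside crypto session */
        struct rte_security_session *security_ses;    /* and/or a security session */
        struct app_dev_qp qp[4]; /* dev+queue pairs this SA may be enqueued to */
        uint8_t nb_qp;           /* number of valid entries in qp[] */
};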
> > > > > > 2. Can somebody use the scheduler pmd for scheduling the different types of
> > > > > > payloads for the same session?
> > > > >
> > > > > In theory yes.
> > > > > Though for that the scheduler pmd should have inside its
> > > > > rte_crypto_cpu_sym_session an array of pointers to
> > > > > the underlying devices' sessions.
> > > > >
> > > > > >
> > > > > > With your proposal the APIs would be very specific to your use case only.
> > > > >
> > > > > Yes, in some way.
> > > > > I consider that API specific for SW backed crypto PMDs.
> > > > > I can hardly see how any 'real HW' PMDs (lksd-none, lksd-proto) will benefit
> > > > > from it.
> > > > > Current crypto-op API is very much HW oriented.
> > > > > Which is ok, that's what it was intended for, but I think we also need one that
> > > > > would be designed with a SW backed implementation in mind.
> > > >
> > > > We may re-use your API for HW PMDs as well which do not have the requirement of
> > > > crypto-op/mbuf etc.
> > > > The return type of your new process API may have a status which says 'processed'
> > > > or can say 'enqueued'. So if it is 'enqueued', we may have a new API for raw
> > > > bufs dequeue as well.
> > > >
> > > > This requirement can be for any hardware PMDs like QAT as well.
> > >
> > > I don't think it is a good idea to extend this API for async (lookaside) devices.
> > > You'll need to:
> > > - provide dev_id and queue_id for each process (enqueue) and dequeue operation.
> > > - provide IOVA for all buffers passed to that function (data buffers, digest, IV, aad).
> > > - on dequeue, provide some way to associate dequeued data and digest buffers with
> > >   the crypto-session that was used (and probably with the mbuf).
> > > So most likely we'll end up with just another version of our current crypto-op
> > > structure.
> > > If you'd like to get rid of the mbuf dependency within the current crypto-op API,
> > > that is understandable,
> > > but I don't think we should have the same API for both sync (CPU) and async
> > > (lookaside) cases.
> > > It doesn't seem feasible at all and voids the whole purpose of that patch.
> >
> > At this moment we are not much concerned about the dequeue API and about the
> > HW PMD support. It is just that the new API should be generic enough to be used in
> > some future scenarios as well. I am just highlighting the possible use cases which
> > can be there in future.
>
> Sorry, but I strongly disagree with such an approach.
> We should stop adding/modifying APIs 'just in case' and because 'it might be
> useful for some future HW'.
> Inside DPDK we already have too many dev level APIs without any
> implementations.
> That's quite bad practice and very disorienting for end-users.
> I think to justify API additions/changes we need at least one proper
> implementation for it,
> or at least some strong evidence that people are really committed to support it
> in the nearest future.
> BTW, that's what the TB agreed on, nearly a year ago.
>
> This new API (if we'll go ahead with it of course) would stay experimental for
> some time anyway
> to make sure we don't miss anything needed (I think for about a one-year time-frame).
> So if you guys *really* want to extend it to support _async_ devices too -
> I am open to modifications/additions here.
> Though personally I think such an addition would over-complicate things and we'll
> end up with another reincarnation of the current crypto-op.
> We actually discussed it internally, and decided to drop that idea because of that.
> Again, my opinion - for lookaside devices it might be better to try to optimize
> the current crypto-op path (remove the mbuf requirement, probably add the ability
> to group by session on enqueue/dequeue, etc.).

I agree that the new API is experimental and can be modified later, so no issues there,
but we can keep some things in mind while defining the APIs. These were some comments
from my side; if they impact the current scenario, you can drop them. We will take care
of them later.

>
> >
> > What is the issue that you face in making a dev-op for this new API? Do you see any
> > performance impact with that?
>
> There are two main things:
> 1. user would need to maintain and provide for each process() call
> dev_id+queue_id.
> That means extra (and totally unnecessary for SW) overhead.

You are using a crypto device for performing the processing; you must use dev_id to
identify which SW device it is. This is how the DPDK framework works.

> 2. yes I would expect some perf overhead too - it would be an extra call or branch.
> Again, as it would be a data dependency - most likely the cpu wouldn't be able to
> pipeline it efficiently:
>
> rte_crypto_sym_process(uint8_t dev_id, uint16_t qp_id, rte_crypto_sym_session *sess, ...)
> {
>         struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
>         return (*dev->process)(sess->data[dev->driver_id], ...);
> }
>
> driver_specific_process(driver_specific_sym_session *sess)
> {
>         return sess->process(sess, ...);
> }
>
> I didn't make any exact measurements but I am sure it would be slower than just:
> session_udata->process(session->udata->sess, ...);
> Again it would be much more noticeable on low end cpus.
> See here, for example:
> http://mails.dpdk.org/archives/dev/2019-September/144350.html
> Jerin claims a 1.5-3% drop for introducing an extra call via hiding eth_dev contents -
> I suppose we would have something similar here.
> I do realize that in the majority of cases crypto is more expensive than RX/TX, but
> still.
>
> If it would be a really unavoidable tradeoff (support an already existing API, or so)
> I wouldn't mind, but I don't see any real need for it right now.

Calling session_udata->process(session->udata->sess, ...); from the application, with
the application having to maintain the process() API of each PMD in its memory, will
make the application not portable to other vendors.

What we are doing here is defining another way to create sessions for the same stuff
that is already done. This makes applications non-portable and confusing for the
application writer.

I would say you should do some profiling first. As you also mentioned, the crypto
workload is more cycle consuming, so it will not impact this case.

>
> > > >
> > > > That is why a dev-op would be a better option.
> > > >
> > > > > > When you would add more functionality to this sync API/struct, it will end up
> > > > > > being the same API/struct.
> > > > > >
> > > > > > Let us see how close/far we are from the existing APIs when the actual
> > > > > > implementation is done.
> > > > > > >
> > > > > > > > I am not sure if that would be needed.
> > > > > > > > It would be internal to the driver that if synchronous processing is
> > > > > > > > supported (from the feature flag) and
> > > > > > > > the relevant fields in the xform (the newly added ones which are packed as
> > > > > > > > per your suggestions) are set,
> > > > > > > > it will create that type of session.
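A rough sketch of that driver-internal decision is shown below, for illustration only:
the feature flag, xform field, struct and function names are hypothetical placeholders,
not existing DPDK symbols. At session create time the driver checks its own capability
flag plus the xform contents and, if both indicate synchronous processing, installs a
per-session CPU process function.

#include <stdint.h>

#define DRV_FF_CPU_CRYPTO (1ULL << 0) /* "device supports synchronous CPU crypto" */

struct drv_xform {
        int cpu_crypto;               /* set when the newly added xform fields request sync */
        /* ... algorithm, keys, IV length, etc. ... */
};

struct drv_sym_session {
        /* per-session function pointer, filled in at session create time */
        int (*process)(struct drv_sym_session *sess, void *buf, uint32_t len);
        /* ... keys, algorithm state ... */
};

static int
drv_cpu_process(struct drv_sym_session *sess, void *buf, uint32_t len)
{
        (void)sess; (void)buf; (void)len;
        return 0;                     /* a real driver would run the cipher/auth here */
}

static int
drv_sym_session_configure(uint64_t dev_feature_flags,
                const struct drv_xform *xform, struct drv_sym_session *sess)
{
        /* enable the sync path only when both the device and the xform ask for it */
        if ((dev_feature_flags & DRV_FF_CPU_CRYPTO) && xform->cpu_crypto)
                sess->process = drv_cpu_process;
        else
                sess->process = NULL; /* plain lookaside enqueue/dequeue session */
        return 0;
}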
> > > > > > > > >
> > > > > > > > > + * Main points:
> > > > > > > > > + * - Current crypto-dev API is reasonably mature and it is desirable
> > > > > > > > > + *   to keep it unchanged (API/ABI stability). From other side, this
> > > > > > > > > + *   new sync API is a new one and probably would require extra changes.
> > > > > > > > > + *   Having it as a new one allows to mark it as experimental, without
> > > > > > > > > + *   affecting the existing one.
> > > > > > > > > + * - Fully opaque cpu_sym_session structure gives more flexibility
> > > > > > > > > + *   to the PMD writers and again allows to avoid ABI breakages in future.
> > > > > > > > > + * - process() function per set of xforms
> > > > > > > > > + *   allows to expose different process() functions for different
> > > > > > > > > + *   xform combinations. PMD writer can decide whether he wants to
> > > > > > > > > + *   push all supported algorithms into one process() function,
> > > > > > > > > + *   or spread it across several ones.
> > > > > > > > > + *   I.e. more flexibility for the PMD writer.
> > > > > > > >
> > > > > > > > Which process function should be chosen is internal to the PMD; how would
> > > > > > > > that info be visible to the application or the library? These will get
> > > > > > > > stored in the session private data. It would be up to the PMD writer to
> > > > > > > > store the per-session process function in the session private data.
> > > > > > > >
> > > > > > > > Process function would be a dev op just like enq/deq operations and it
> > > > > > > > should call the respective process API stored in the session private data.
> > > > > > >
> > > > > > > That model (via dev-ops) is possible, but has several drawbacks from my
> > > > > > > perspective:
> > > > > > >
> > > > > > > 1. It means we'll need to pass dev_id as a parameter to the process()
> > > > > > > function.
> > > > > > > Though in fact dev_id is not relevant information for us here
> > > > > > > (all we need is a pointer to the session and a pointer to the function
> > > > > > > to call)
> > > > > > > and I tried to avoid using it in data-path functions for that API.
> > > > > >
> > > > > > You have a single vdev, but someone may have multiple vdevs for each thread,
> > > > > > or may have the same dev with multiple queues for each core.
> > > > >
> > > > > That's fine. As I said above it is a SW backed implementation.
> > > > > Each session has to be a separate entity that contains all necessary
> > > > > information (keys, alg/mode info, etc.) to process input buffers.
> > > > > Plus we need the actual function pointer to call.
> > > > > I just don't see what for we need a dev_id in that situation.
> > > >
> > > > To iterate the session private data in the session.
> > > >
> > > > > Again, here we don't need to care about queues and their pinning to cores.
> > > > > If, let's say, someone would like to process buffers from the same IPsec SA
> > > > > on 2 different cores in parallel, he can just create 2 sessions for the same
> > > > > xform, give one to thread #1 and the second to thread #2.
> > > > > After that both threads are free to call process(this_thread_ses, ...) at will.
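A minimal sketch of that per-thread-session usage model follows; the cpu_sym_* names
are illustrative stand-ins for the proposed API, not existing DPDK symbols. Each worker
owns one session (created from the same xform) and calls the synchronous process
function directly, with no dev_id or queue involved.

#include <stdint.h>
#include <pthread.h>

/* Hypothetical stand-ins for the proposed opaque session and its
 * synchronous process entry point - illustrative only, not DPDK API. */
struct cpu_sym_session { int placeholder; };

static uint16_t
cpu_sym_process(struct cpu_sym_session *ses, void *bufs[], uint16_t nb_bufs)
{
        (void)ses; (void)bufs;
        return nb_bufs;               /* a real PMD would cipher the buffers here */
}

/* Each worker thread gets its own session, created from the same xform. */
static void *
worker(void *arg)
{
        struct cpu_sym_session *ses = arg;
        void *bufs[32] = { NULL };

        cpu_sym_process(ses, bufs, 32);   /* direct call, no dev_id/queue_id */
        return NULL;
}

int
main(void)
{
        static struct cpu_sym_session s1, s2;   /* one session per worker */
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, &s1);
        pthread_create(&t2, NULL, worker, &s2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}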
> > > >
> > > > Say you have a 16-core device to handle 100G of traffic on a single tunnel.
> > > > Will we make 16 sessions with the same parameters?
> > >
> > > Absolutely the same question we can ask for the current crypto-op API.
> > > You have a lookaside crypto-dev with 16 HW queues, each queue is serviced by a
> > > different CPU.
> > > For the same SA, do you need a separate session per queue, or is it ok to reuse
> > > the current one?
> > > AFAIK, right now this is a grey area not clearly defined.
> > > For the crypto-devs I am aware of - the user can reuse the same session (as the
> > > PMD uses it read-only).
> > > But again, right now I think it is not clearly defined and is implementation
> > > specific.
> >
> > User can use the same session, that is what I am also insisting, but it may have
> > separate session private data. The cryptodev session create API provides that
> > functionality and we can leverage that.
>
> rte_cryptodev_sym_session.sess_data[] is indexed by driver_id, which means we can't
> use the same rte_cryptodev_sym_session to hold sessions for both sync and async mode
> for the same device. Of course we can add a hard requirement that any driver that
> wants to support process() has to create sessions that can handle both process and
> enqueue/dequeue,
> but then again, why create such overhead?
>
> BTW, to be honest, I don't consider the current rte_cryptodev_sym_session
> construct for multiple device_ids:
> __extension__ struct {
>         void *data;
>         uint16_t refcnt;
> } sess_data[0];
> /**< Driver specific session material, variable size */
>

Yes, I also feel the same. I was also not in favor of this when it was introduced.
Please go ahead and remove this. I have no issues with that.

> as an advantage.
> It looks too error prone to me:
> 1. Simultaneous session initialization/de-initialization for devices with the same
> driver_id is not possible.
> 2. It assumes that all device drivers will be loaded before we start to create
> session pools.
>
> Right now it seems ok, as no-one requires such functionality, but I don't know
> how it will be in future.
> For me the rte_security session model, where for each security context the user
> has to create a new session,
> looks much more robust.

Agreed

>
> >
> > BTW, I can see a v2 to this RFC which is still based on the security library.
>
> Yes, v2 was concentrated on fixing found issues, some code restructuring,
> i.e. - changes that would be needed anyway whatever API approach we'll choose.
>
> > When do you plan
> > to submit the patches for crypto based APIs? We have an RC1 merge deadline for
> > this patchset on 21st Oct.
>
> We'd like to start working on it ASAP, but it seems we still have a major
> disagreement
> about how this crypto-dev API should look.
> Which makes me think - should we return to our original proposal via
> rte_security?
> It still looks to me like a clean and straightforward way to enable this new API,
> and probably wouldn't cause that much controversy.
> What do you think?

I cannot spend more time discussing this until the RC1 date. I have some other stuff
pending. You can send the patches early next week with the approach that I mentioned,
or else we can discuss this post-RC1 (which would mean deferring to 20.02).
But moving back to security is not acceptable to me. The code should be put where it
is intended and not where it is easy to put. You are not doing any rte_security stuff.

Regards,
Akhil