From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Kundapura, Ganapati"
To: "Gujjar, Abhinandan S", "Jayatheerthan, Jay", "jerinjacobk@gmail.com", "dev@dpdk.org"
Subject: RE: [PATCH v4 1/2] eventdev/crypto_adapter: move crypto ops to circular buffer
Date: Thu, 10 Feb 2022 17:46:03 +0000
References: <20220111103631.3664763-1-ganapati.kundapura@intel.com> <20220207105036.3468246-1-ganapati.kundapura@intel.com>
List-Id: DPDK patches and discussions

Hi Abhi,

> -----Original Message-----
> From: Gujjar, Abhinandan S
> Sent: 10 February 2022 20:37
> To: Kundapura, Ganapati; Jayatheerthan, Jay; jerinjacobk@gmail.com; dev@dpdk.org
> Subject: RE: [PATCH v4 1/2] eventdev/crypto_adapter: move crypto ops to circular buffer
>
> Hi Ganapati,
>
> It looks good to me. Some minor comments inline.
>
> > -----Original Message-----
> > From: Kundapura, Ganapati
> > Sent: Monday, February 7, 2022 4:21 PM
> > To: Jayatheerthan, Jay; jerinjacobk@gmail.com; Gujjar, Abhinandan S; dev@dpdk.org
> > Subject: [PATCH v4 1/2] eventdev/crypto_adapter: move crypto ops to
> > circular buffer
> >
> > Move crypto ops to circular buffer to retain crypto ops when
> > cryptodev/eventdev are temporarily full
> >
> > Signed-off-by: Ganapati Kundapura
> >
> > ---
> > v4:
> > * Retain the non-enqueued crypto ops in circular buffer to
> >   process later and stop the dequeue from eventdev till
> >   all the crypto ops are enqueued to cryptodev
> >
> >   Check space in circular buffer and stop dequeue from
> >   eventdev if some ops failed to flush to cdev
> >   and no space for another batch is available in circular buffer
> >
> >   Enable dequeue from eventdev after all the ops are flushed
> >
> > v3:
> > * update eca_ops_buffer_flush() to flush out all the crypto
> >   ops out of circular buffer.
> > * remove freeing of failed crypto ops from eca_ops_enqueue_burst()
> >   and add to circular buffer for later processing.
> >
> > v2:
> > * reset crypto adapter next cdev id before dequeueing from the
> >   next cdev
> > ---
> >
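To make the v4 notes above easier to follow, here is a rough, self-contained sketch of the intended enqueue-side backpressure flow. Everything in it (adapter_sim, enq_run_sim, fake_cdev_enqueue, the BATCH value) is made up for illustration only; the real logic is in the patch below.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BATCH 32    /* stand-in for the adapter's BATCH_SIZE */

    /* Simplified stand-in for the adapter state touched by this patch. */
    struct adapter_sim {
        bool stop_enq_to_cryptodev;    /* backpressure flag */
        uint16_t buffered;             /* ops held in the circular buffer */
        uint16_t buf_size;             /* circular buffer capacity */
    };

    /* Pretend the cryptodev accepts at most 'room' ops per call. */
    static uint16_t fake_cdev_enqueue(uint16_t n, uint16_t room)
    {
        return n < room ? n : room;
    }

    static void enq_run_sim(struct adapter_sim *a, uint16_t cdev_room)
    {
        uint16_t flushed;

        if (a->stop_enq_to_cryptodev) {
            /* Retry the ops retained in the circular buffer first. */
            flushed = fake_cdev_enqueue(a->buffered, cdev_room);
            a->buffered -= flushed;
            if (a->buffered != 0)
                return;    /* still backpressured: skip eventdev dequeue */
            a->stop_enq_to_cryptodev = false;    /* fully flushed: resume */
        }

        /* Normal path: dequeue a batch of events, buffer the crypto ops,
         * then try to flush them to the cryptodev.
         */
        a->buffered += BATCH;
        flushed = fake_cdev_enqueue(a->buffered, cdev_room);
        a->buffered -= flushed;

        /* Partial flush and no room left for another batch: stop dequeuing. */
        if (a->buffered != 0 && (a->buf_size - a->buffered) < BATCH)
            a->stop_enq_to_cryptodev = true;
    }

    int main(void)
    {
        struct adapter_sim a = { false, 0, 2 * BATCH };
        int i;

        for (i = 0; i < 4; i++) {
            enq_run_sim(&a, 8);    /* a congested cryptodev */
            printf("iter %d: buffered=%u stopped=%d\n",
                   i, (unsigned)a.buffered, (int)a.stop_enq_to_cryptodev);
        }
        return 0;
    }

Running it shows the flag being raised once the buffer cannot take another batch and dequeue being skipped until the retained ops drain.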
> > diff --git a/lib/eventdev/rte_event_crypto_adapter.c
> > b/lib/eventdev/rte_event_crypto_adapter.c
> > index d840803..0faac36 100644
> > --- a/lib/eventdev/rte_event_crypto_adapter.c
> > +++ b/lib/eventdev/rte_event_crypto_adapter.c
> > @@ -25,11 +25,27 @@
> >  #define CRYPTO_ADAPTER_MEM_NAME_LEN 32
> >  #define CRYPTO_ADAPTER_MAX_EV_ENQ_RETRIES 100
> >
> > +#define CRYPTO_ADAPTER_OPS_BUFFER_SZ (BATCH_SIZE + BATCH_SIZE)
> > +#define CRYPTO_ADAPTER_BUFFER_SZ 1024
> > +
> >  /* Flush an instance's enqueue buffers every CRYPTO_ENQ_FLUSH_THRESHOLD
> >   * iterations of eca_crypto_adapter_enq_run()
> >   */
> >  #define CRYPTO_ENQ_FLUSH_THRESHOLD 1024
> >
> > +struct crypto_ops_circular_buffer {
> > +    /* index of head element in circular buffer */
> > +    uint16_t head;
> > +    /* index of tail element in circular buffer */
> > +    uint16_t tail;
> > +    /* number of elements in buffer */
> > +    uint16_t count;
> > +    /* size of circular buffer */
> > +    uint16_t size;
> > +    /* Pointer to hold rte_crypto_ops for batching */
> > +    struct rte_crypto_op **op_buffer;
> > +} __rte_cache_aligned;
> > +
> >  struct event_crypto_adapter {
> >      /* Event device identifier */
> >      uint8_t eventdev_id;
> > @@ -37,6 +53,10 @@ struct event_crypto_adapter {
> >      uint8_t event_port_id;
> >      /* Store event device's implicit release capability */
> >      uint8_t implicit_release_disabled;
> > +    /* Flag to indicate backpressure at cryptodev
> > +     * Stop further dequeuing events from eventdev
> > +     */
> > +    bool stop_enq_to_cryptodev;
> >      /* Max crypto ops processed in any service function invocation */
> >      uint32_t max_nb;
> >      /* Lock to serialize config updates with service function */
> > @@ -47,6 +67,8 @@ struct event_crypto_adapter {
> >      struct crypto_device_info *cdevs;
> >      /* Loop counter to flush crypto ops */
> >      uint16_t transmit_loop_count;
> > +    /* Circular buffer for batching crypto ops to eventdev */
> > +    struct crypto_ops_circular_buffer ebuf;
> >      /* Per instance stats structure */
> >      struct rte_event_crypto_adapter_stats crypto_stats;
> >      /* Configuration callback for rte_service configuration */
> > @@ -93,10 +115,8 @@ struct crypto_device_info {
> >  struct crypto_queue_pair_info {
> >      /* Set to indicate queue pair is enabled */
> >      bool qp_enabled;
> > -    /* Pointer to hold rte_crypto_ops for batching */
> > -    struct rte_crypto_op **op_buffer;
> > -    /* No of crypto ops accumulated */
> > -    uint8_t len;
> > +    /* Circular buffer for batching crypto ops to cdev */
> > +    struct crypto_ops_circular_buffer cbuf;
> >  } __rte_cache_aligned;
> >
> >  static struct event_crypto_adapter **event_crypto_adapter;
> > @@ -141,6 +161,84 @@ eca_init(void)
> >      return 0;
> >  }
> >
> > +static inline bool
> > +eca_circular_buffer_batch_ready(struct crypto_ops_circular_buffer *bufp)
> > +{
> > +    return bufp->count >= BATCH_SIZE;
> > +}
> > +
> > +static inline bool
> > +eca_circular_buffer_space_for_batch(struct crypto_ops_circular_buffer *bufp)
> > +{
> > +    return (bufp->size - bufp->count) >= BATCH_SIZE;
> > +}
> > +
> > +static inline void
> > +eca_circular_buffer_free(struct crypto_ops_circular_buffer *bufp)
> > +{
> > +    rte_free(bufp->op_buffer);
> > +}
> > +
> > +static inline int
> > +eca_circular_buffer_init(const char *name,
> > +             struct crypto_ops_circular_buffer *bufp,
> > +             uint16_t sz)
> > +{
> > +    bufp->op_buffer = rte_zmalloc(name,
> > +                      sizeof(struct rte_crypto_op *) * sz,
> > +                      0);
> > +    if (bufp->op_buffer == NULL)
> > +        return -ENOMEM;
> > +
> > +    bufp->size = sz;
> > +    return 0;
> > +}
> > +
> > +static inline int
> > +eca_circular_buffer_add(struct crypto_ops_circular_buffer *bufp,
> > +            struct rte_crypto_op *op)
> > +{
> > +    uint16_t *tailp = &bufp->tail;
> > +
> > +    bufp->op_buffer[*tailp] = op;
> > +    /* circular buffer, go round */
> > +    *tailp = (*tailp + 1) % bufp->size;
> > +    bufp->count++;
> > +
> > +    return 0;
> > +}
> > +
> > +static inline int
> > +eca_circular_buffer_flush_to_cdev(struct crypto_ops_circular_buffer *bufp,
> > +                  uint8_t cdev_id, uint16_t qp_id,
> > +                  uint16_t *nb_ops_flushed)
> > +{
> > +    uint16_t n = 0;
> > +    uint16_t *headp = &bufp->head;
> > +    uint16_t *tailp = &bufp->tail;
> > +    struct rte_crypto_op **ops = bufp->op_buffer;
> > +
> > +    if (*tailp > *headp)
> > +        n = *tailp - *headp;
> > +    else if (*tailp < *headp)
> > +        n = bufp->size - *headp;
> > +    else {
> > +        *nb_ops_flushed = 0;
> > +        return 0; /* buffer empty */
> > +    }
> > +
> > +    *nb_ops_flushed = rte_cryptodev_enqueue_burst(cdev_id, qp_id,
> > +                              &ops[*headp], n);
> > +    bufp->count -= *nb_ops_flushed;
> > +    if (!bufp->count) {
> > +        *headp = 0;
> > +        *tailp = 0;
> > +    } else
> > +        *headp = (*headp + *nb_ops_flushed) % bufp->size;
> > +
> > +    return *nb_ops_flushed == n ? 0 : -1;
> > +}
> > +
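A small aside on the head/tail/count arithmetic just above: it can be exercised in isolation with a standalone toy like the one below. All names here (cb, cb_add, cb_flush, CB_SIZE) and the fixed "accepted" count are mine, purely for illustration; the adapter's real buffer is the code above.

    #include <stdint.h>
    #include <stdio.h>

    #define CB_SIZE 8    /* kept small so the wrap-around is easy to see */

    struct cb {
        uint16_t head;    /* next element to flush */
        uint16_t tail;    /* next free slot */
        uint16_t count;   /* elements currently buffered */
        int slot[CB_SIZE];
    };

    static void cb_add(struct cb *b, int v)
    {
        b->slot[b->tail] = v;
        b->tail = (b->tail + 1) % CB_SIZE;    /* wrap around */
        b->count++;
    }

    /* Flush up to 'accepted' elements starting at head, mimicking a device
     * that takes only part of the burst; as in the patch, only the
     * contiguous region starting at head is offered per call.
     */
    static void cb_flush(struct cb *b, uint16_t accepted)
    {
        uint16_t n;

        if (b->tail > b->head)
            n = b->tail - b->head;
        else if (b->tail < b->head)
            n = CB_SIZE - b->head;
        else
            return;    /* empty */

        if (accepted > n)
            accepted = n;
        b->count -= accepted;
        if (b->count == 0)
            b->head = b->tail = 0;    /* reset so the next burst is contiguous */
        else
            b->head = (b->head + accepted) % CB_SIZE;
    }

    int main(void)
    {
        struct cb b = { 0, 0, 0, { 0 } };
        int i;

        for (i = 0; i < 6; i++)
            cb_add(&b, i);
        cb_flush(&b, 4);    /* partial flush: 2 elements remain */
        printf("head=%u tail=%u count=%u\n",
               (unsigned)b.head, (unsigned)b.tail, (unsigned)b.count);
        return 0;
    }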
> >  static inline struct event_crypto_adapter *
> >  eca_id_to_adapter(uint8_t id) {
> > @@ -237,10 +335,19 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id,
> >          return -ENOMEM;
> >      }
> >
> > +    if (eca_circular_buffer_init("eca_edev_circular_buffer",
> > +                     &adapter->ebuf,
> > +                     CRYPTO_ADAPTER_BUFFER_SZ)) {
> > +        RTE_EDEV_LOG_ERR("Failed to get memory for eventdev buffer");
> > +        rte_free(adapter);
> > +        return -ENOMEM;
> > +    }
> > +
> >      ret = rte_event_dev_info_get(dev_id, &dev_info);
> >      if (ret < 0) {
> >          RTE_EDEV_LOG_ERR("Failed to get info for eventdev %d: %s!",
> >                   dev_id, dev_info.driver_name);
> > +        eca_circular_buffer_free(&adapter->ebuf);
> >          rte_free(adapter);
> >          return ret;
> >      }
> > @@ -259,6 +366,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id,
> >                      socket_id);
> >      if (adapter->cdevs == NULL) {
> >          RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices\n");
> > +        eca_circular_buffer_free(&adapter->ebuf);
> >          rte_free(adapter);
> >          return -ENOMEM;
> >      }
> > @@ -337,10 +445,10 @@ eca_enq_to_cryptodev(struct event_crypto_adapter *adapter, struct rte_event *ev,
> >      struct crypto_queue_pair_info *qp_info = NULL;
> >      struct rte_crypto_op *crypto_op;
> >      unsigned int i, n;
> > -    uint16_t qp_id, len, ret;
> > +    uint16_t qp_id, nb_enqueued = 0;
> >      uint8_t cdev_id;
> > +    int ret;
> >
> > -    len = 0;
> >      ret = 0;
> >      n = 0;
> >      stats->event_deq_count += cnt;
> > @@ -366,9 +474,7 @@ eca_enq_to_cryptodev(struct event_crypto_adapter *adapter, struct rte_event *ev,
> >                  rte_crypto_op_free(crypto_op);
> >                  continue;
> >              }
> > -            len = qp_info->len;
> > -            qp_info->op_buffer[len] = crypto_op;
> > -            len++;
> > +            eca_circular_buffer_add(&qp_info->cbuf, crypto_op);
> >          } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
> >                  crypto_op->private_data_offset) {
> >              m_data = (union rte_event_crypto_metadata *)
> > @@ -382,87 +488,91 @@ eca_enq_to_cryptodev(struct event_crypto_adapter *adapter, struct rte_event *ev,
> >                  rte_crypto_op_free(crypto_op);
> >                  continue;
> >              }
> > -            len = qp_info->len;
> > -            qp_info->op_buffer[len] = crypto_op;
> > -            len++;
> > +            eca_circular_buffer_add(&qp_info->cbuf, crypto_op);
> >          } else {
> >              rte_pktmbuf_free(crypto_op->sym->m_src);
> >              rte_crypto_op_free(crypto_op);
> >              continue;
> >          }
> >
> > -        if (len == BATCH_SIZE) {
> > -            struct rte_crypto_op **op_buffer = qp_info->op_buffer;
> > -            ret = rte_cryptodev_enqueue_burst(cdev_id,
> > -                              qp_id,
> > -                              op_buffer,
> > -                              BATCH_SIZE);
> > -
> > -            stats->crypto_enq_count += ret;
> > -
> > -            while (ret < len) {
> > -                struct rte_crypto_op *op;
> > -                op = op_buffer[ret++];
> > -                stats->crypto_enq_fail++;
> > -                rte_pktmbuf_free(op->sym->m_src);
> > -                rte_crypto_op_free(op);
> > -            }
> > -
> > -            len = 0;
> > +        if (eca_circular_buffer_batch_ready(&qp_info->cbuf)) {
> > +            ret = eca_circular_buffer_flush_to_cdev(&qp_info->cbuf,
> > +                                cdev_id,
> > +                                qp_id,
> > +                                &nb_enqueued);
> > +            /**
> > +             * If some crypto ops failed to flush to cdev and
> > +             * space for another batch is not available, stop
> > +             * dequeue from eventdev momentarily
> > +             */
> > +            if (unlikely(ret < 0 &&
> > +                !eca_circular_buffer_space_for_batch(&qp_info->cbuf)))
> > +                adapter->stop_enq_to_cryptodev = true;
> >          }
> >
> > -        if (qp_info)
> > -            qp_info->len = len;
> > -        n += ret;
> > +        stats->crypto_enq_count += nb_enqueued;
> > +        n += nb_enqueued;
> >      }
> >
> >      return n;
> >  }
> >
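One note on the sizing, since it matters for the backpressure condition used above: the per-queue-pair buffer is two batches deep (CRYPTO_ADAPTER_OPS_BUFFER_SZ = BATCH_SIZE + BATCH_SIZE), so a batch that could not be fully flushed can be retained while there is still room to accumulate one more. The tiny check below only illustrates that relationship; the BATCH_SIZE value of 32 is a placeholder, not necessarily the adapter's actual value.

    #include <assert.h>
    #include <stdint.h>

    #define BATCH_SIZE 32    /* placeholder value for illustration */
    #define OPS_BUFFER_SZ (BATCH_SIZE + BATCH_SIZE)    /* two batches deep */

    static int batch_ready(uint16_t count)
    {
        return count >= BATCH_SIZE;
    }

    static int space_for_batch(uint16_t count)
    {
        return (OPS_BUFFER_SZ - count) >= BATCH_SIZE;
    }

    int main(void)
    {
        /* One batch buffered and none of it flushed yet: it is ready to
         * be retried, and there is still room for a second batch, so
         * dequeue from the eventdev can continue.
         */
        assert(batch_ready(BATCH_SIZE));
        assert(space_for_batch(BATCH_SIZE));

        /* A second batch accumulated while the first still cannot be
         * flushed: no room left, which is where the adapter above sets
         * stop_enq_to_cryptodev.
         */
        assert(!space_for_batch(2 * BATCH_SIZE));
        return 0;
    }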
> >  static unsigned int
> > -eca_crypto_enq_flush(struct event_crypto_adapter *adapter)
> > +eca_crypto_cdev_flush(struct event_crypto_adapter *adapter,
> > +              uint8_t cdev_id, uint16_t *nb_ops_flushed)
> >  {
> > -    struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
> >      struct crypto_device_info *curr_dev;
> >      struct crypto_queue_pair_info *curr_queue;
> > -    struct rte_crypto_op **op_buffer;
> >      struct rte_cryptodev *dev;
> > -    uint8_t cdev_id;
> > +    uint16_t nb = 0, nb_enqueued = 0;
> >      uint16_t qp;
> > -    uint16_t ret;
> > -    uint16_t num_cdev = rte_cryptodev_count();
> >
> > -    ret = 0;
> > -    for (cdev_id = 0; cdev_id < num_cdev; cdev_id++) {
> > -        curr_dev = &adapter->cdevs[cdev_id];
> > -        dev = curr_dev->dev;
> > -        if (dev == NULL)
> > -            continue;
> > -        for (qp = 0; qp < dev->data->nb_queue_pairs; qp++) {
> > +    curr_dev = &adapter->cdevs[cdev_id];
> > +    if (unlikely(curr_dev == NULL))
> > +        return 0;
> >
> > -            curr_queue = &curr_dev->qpairs[qp];
> > -            if (!curr_queue->qp_enabled)
> > -                continue;
> > +    dev = rte_cryptodev_pmd_get_dev(cdev_id);
> > +    for (qp = 0; qp < dev->data->nb_queue_pairs; qp++) {
> >
> > -            op_buffer = curr_queue->op_buffer;
> > -            ret = rte_cryptodev_enqueue_burst(cdev_id,
> > -                              qp,
> > -                              op_buffer,
> > -                              curr_queue->len);
> > -            stats->crypto_enq_count += ret;
> > -
> > -            while (ret < curr_queue->len) {
> > -                struct rte_crypto_op *op;
> > -                op = op_buffer[ret++];
> > -                stats->crypto_enq_fail++;
> > -                rte_pktmbuf_free(op->sym->m_src);
> > -                rte_crypto_op_free(op);
> > -            }
> > -            curr_queue->len = 0;
> > -        }
> > +        curr_queue = &curr_dev->qpairs[qp];
> > +        if (unlikely(curr_queue == NULL || !curr_queue->qp_enabled))
> > +            continue;
> > +
> > +        eca_circular_buffer_flush_to_cdev(&curr_queue->cbuf,
> > +                          cdev_id,
> > +                          qp,
> > +                          &nb_enqueued);
> > +        *nb_ops_flushed += curr_queue->cbuf.count;
> > +        nb += nb_enqueued;
> >      }
> >
> > -    return ret;
> > +    return nb;
> > +}
> > +
> > +static unsigned int
> > +eca_crypto_enq_flush(struct event_crypto_adapter *adapter) {
> > +    struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
> > +    uint8_t cdev_id;
> > +    uint16_t nb_enqueued = 0;
> > +    uint16_t nb_ops_flushed = 0;
> > +    uint16_t num_cdev = rte_cryptodev_count();
> > +
> > +    for (cdev_id = 0; cdev_id < num_cdev; cdev_id++)
> > +        nb_enqueued += eca_crypto_cdev_flush(adapter,
> > +                             cdev_id,
> > +                             &nb_ops_flushed);
> > +    /**
> > +     * Enable dequeue from eventdev if all ops from circular
> > +     * buffer flushed to cdev
> > +     */
> > +    if (!nb_ops_flushed)
> > +        adapter->stop_enq_to_cryptodev = false;
> > +
> > +    stats->crypto_enq_count += nb_enqueued;
> > +
> > +    return nb_enqueued;
> >  }
> >
> >  static int
> > @@ -480,6 +590,13 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
> >      if (adapter->mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
> >          return 0;
> >
> > +    if (adapter->stop_enq_to_cryptodev) {
> Add unlikely here
Added in v5
> > +        nb_enqueued += eca_crypto_enq_flush(adapter);
> > +
> > +        if (adapter->stop_enq_to_cryptodev)
> Add unlikely here
Added in v5
> > +            goto skip_event_dequeue_burst;
> > +    }
> > +
> >      for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
> >          stats->event_poll_count++;
> >          n = rte_event_dequeue_burst(event_dev_id,
> > @@ -491,6 +608,8 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
> >          nb_enqueued += eca_enq_to_cryptodev(adapter, ev, n);
> >      }
> >
> > +skip_event_dequeue_burst:
> > +
> >      if ((++adapter->transmit_loop_count &
> >          (CRYPTO_ENQ_FLUSH_THRESHOLD - 1)) == 0) {
> >          nb_enqueued += eca_crypto_enq_flush(adapter);
> > @@ -499,9 +618,9 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
> >      return nb_enqueued;
> >  }
> >
> > -static inline void
> > +static inline uint16_t
> >  eca_ops_enqueue_burst(struct event_crypto_adapter *adapter,
> > -          struct rte_crypto_op **ops, uint16_t num)
> > +              struct rte_crypto_op **ops, uint16_t num)
> >  {
> >      struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
> >      union rte_event_crypto_metadata *m_data = NULL;
> > @@ -518,6 +637,8 @@ eca_ops_enqueue_burst(struct event_crypto_adapter *adapter,
> >      num = RTE_MIN(num, BATCH_SIZE);
> >      for (i = 0; i < num; i++) {
> >          struct rte_event *ev = &events[nb_ev++];
> > +
> > +        m_data = NULL;
> >          if (ops[i]->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> >              m_data = rte_cryptodev_sym_session_get_user_data(
> >                      ops[i]->sym->session);
> > @@ -548,21 +669,56 @@ eca_ops_enqueue_burst(struct event_crypto_adapter *adapter,
> >                  event_port_id,
> >                  &events[nb_enqueued],
> >                  nb_ev - nb_enqueued);
> > +
> >      } while (retry++ < CRYPTO_ADAPTER_MAX_EV_ENQ_RETRIES &&
> >           nb_enqueued < nb_ev);
> >
> > -    /* Free mbufs and rte_crypto_ops for failed events */
> > -    for (i = nb_enqueued; i < nb_ev; i++) {
> > -        struct rte_crypto_op *op = events[i].event_ptr;
> > -        rte_pktmbuf_free(op->sym->m_src);
> > -        rte_crypto_op_free(op);
> > -    }
> > -
> >      stats->event_enq_fail_count += nb_ev - nb_enqueued;
> >      stats->event_enq_count += nb_enqueued;
> >      stats->event_enq_retry_count += retry - 1;
> > +
> > +    return nb_enqueued;
> > +}
> > +
> > +static int
> > +eca_circular_buffer_flush_to_evdev(struct event_crypto_adapter *adapter,
> > +                   struct crypto_ops_circular_buffer *bufp) {
> > +    uint16_t n = 0, nb_ops_flushed;
> > +    uint16_t *headp = &bufp->head;
> > +    uint16_t *tailp = &bufp->tail;
> > +    struct rte_crypto_op **ops = bufp->op_buffer;
> > +
> > +    if (*tailp > *headp)
> > +        n = *tailp - *headp;
> > +    else if (*tailp < *headp)
> > +        n = bufp->size - *headp;
> > +    else
> > +        return 0; /* buffer empty */
> > +
> > +    nb_ops_flushed = eca_ops_enqueue_burst(adapter, ops, n);
> > +    bufp->count -= nb_ops_flushed;
> > +    if (!bufp->count) {
> > +        *headp = 0;
> > +        *tailp = 0;
> > +        return 0; /* buffer empty */
> > +    }
> > +
> > +    *headp = (*headp + nb_ops_flushed) % bufp->size;
> > +    return 1;
> >  }
> >
> > +
> > +static void
> > +eca_ops_buffer_flush(struct event_crypto_adapter *adapter) {
> > +    if (adapter->ebuf.count == 0)
> Add likely here
Added in v5
> > +        return;
> > +
> > +    while (eca_circular_buffer_flush_to_evdev(adapter,
> > +                          &adapter->ebuf))
> > +        ;
> > +}
> >  static inline unsigned int
> >  eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
> >                 unsigned int max_deq)
> > @@ -571,7 +727,7 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
> >      struct crypto_device_info *curr_dev;
> >      struct crypto_queue_pair_info *curr_queue;
> >      struct rte_crypto_op *ops[BATCH_SIZE];
> > -    uint16_t n, nb_deq;
> > +    uint16_t n, nb_deq, nb_enqueued, i;
> >      struct rte_cryptodev *dev;
> >      uint8_t cdev_id;
> >      uint16_t qp, dev_qps;
> > @@ -579,16 +735,20 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
> >      uint16_t num_cdev = rte_cryptodev_count();
> >
> >      nb_deq = 0;
> > +    eca_ops_buffer_flush(adapter);
> > +
> >      do {
> > -        uint16_t queues = 0;
> >          done = true;
> >
> >          for (cdev_id = adapter->next_cdev_id;
> >               cdev_id < num_cdev; cdev_id++) {
> > +            uint16_t queues = 0;
> > +
> >              curr_dev = &adapter->cdevs[cdev_id];
> >              dev = curr_dev->dev;
> >              if (dev == NULL)
> Add unlikely here
Added in v5
> >                  continue;
> > +
> >              dev_qps = dev->data->nb_queue_pairs;
> >
> >              for (qp = curr_dev->next_queue_pair_id;
> > @@ -596,7 +756,8 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
> >                   queues++) {
> >
> >                  curr_queue = &curr_dev->qpairs[qp];
> > -                if (!curr_queue->qp_enabled)
> > +                if (curr_queue == NULL ||
> > +                    !curr_queue->qp_enabled)
> Add unlikely here
Added in v5
> >                      continue;
> >
> >                  n = rte_cryptodev_dequeue_burst(cdev_id, qp,
> > @@ -605,11 +766,27 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
> >                      continue;
> >
> >                  done = false;
> > +                nb_enqueued = 0;
> > +
> >                  stats->crypto_deq_count += n;
> > -                eca_ops_enqueue_burst(adapter, ops, n);
> > +
> > +                if (unlikely(!adapter->ebuf.count))
> > +                    nb_enqueued = eca_ops_enqueue_burst(
> > +                            adapter, ops, n);
> > +
> > +                if (nb_enqueued == n)
> Add likely here
Added in v5
> > +                    goto check;
> > +
> > +                /* Failed to enqueue events case */
> > +                for (i = nb_enqueued; i < n; i++)
> > +                    eca_circular_buffer_add(
> > +                        &adapter->ebuf,
> > +                        ops[nb_enqueued]);
> > +
> > +check:
> >                  nb_deq += n;
> >
> > -                if (nb_deq > max_deq) {
> > +                if (nb_deq >= max_deq) {
> >                      if ((qp + 1) == dev_qps) {
> >                          adapter->next_cdev_id =
> >                              (cdev_id + 1)
> > @@ -622,6 +799,7 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
> >                  }
> >              }
> >          }
> > +        adapter->next_cdev_id = 0;
> >      } while (done == false);
> >      return nb_deq;
> >  }
> > @@ -751,11 +929,12 @@ eca_add_queue_pair(struct event_crypto_adapter *adapter, uint8_t cdev_id,
> >          return -ENOMEM;
> >
> >      qpairs = dev_info->qpairs;
> > -    qpairs->op_buffer = rte_zmalloc_socket(adapter->mem_name,
> > -                    BATCH_SIZE *
> > -                    sizeof(struct rte_crypto_op *),
> > -                    0, adapter->socket_id);
> > -    if (!qpairs->op_buffer) {
> > +
> > +    if (eca_circular_buffer_init("eca_cdev_circular_buffer",
> > +                     &qpairs->cbuf,
> > +                     CRYPTO_ADAPTER_OPS_BUFFER_SZ)) {
> > +        RTE_EDEV_LOG_ERR("Failed to get memory for cryptodev "
> > +                 "buffer");
> >          rte_free(qpairs);
> >          return -ENOMEM;
> >      }
> > --
> > 2.6.4
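On the likely()/unlikely() comments answered above with "Added in v5": for anyone following the thread, the annotation being requested is the usual branch-prediction hint, roughly as in the standalone snippet below. The macros are defined locally via __builtin_expect so the snippet compiles on its own; DPDK code would normally use the equivalent likely()/unlikely() from rte_branch_prediction.h. The function and data here are made up purely for illustration.

    #include <stddef.h>
    #include <stdio.h>

    /* Local stand-ins so this snippet is self-contained; DPDK provides the
     * equivalent macros in rte_branch_prediction.h.
     */
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    static int sum_ring(const int *ring, size_t n)
    {
        size_t i;
        int acc = 0;

        /* The error path is not expected on the fast path, so hint it as
         * unlikely; the compiler keeps the hot path as the fall-through.
         */
        if (unlikely(ring == NULL || n == 0))
            return -1;

        for (i = 0; i < n; i++)
            acc += ring[i];
        return acc;
    }

    int main(void)
    {
        int ring[4] = { 1, 2, 3, 4 };

        printf("%d\n", sum_ring(ring, 4));    /* hot path */
        printf("%d\n", sum_ring(NULL, 0));    /* unlikely error path */
        return 0;
    }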