From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Gujjar, Abhinandan S"
To: "Kundapura, Ganapati", "Jayatheerthan, Jay", "jerinjacobk@gmail.com",
 "dev@dpdk.org"
Subject: RE: [PATCH v5 1/2] eventdev/crypto_adapter: move crypto ops to
 circular buffer
Date: Fri, 11 Feb 2022 04:43:13 +0000
References: <20220207105036.3468246-1-ganapati.kundapura@intel.com>
 <20220210174117.3715562-1-ganapati.kundapura@intel.com>
In-Reply-To: <20220210174117.3715562-1-ganapati.kundapura@intel.com>
List-Id: DPDK patches and discussions

Acked-by: Abhinandan Gujjar

> -----Original Message-----
> From: Kundapura, Ganapati
> Sent: Thursday, February 10, 2022 11:11 PM
> To: Jayatheerthan, Jay; jerinjacobk@gmail.com; Gujjar, Abhinandan S;
> dev@dpdk.org
> Subject: [PATCH v5 1/2] eventdev/crypto_adapter: move crypto ops to
> circular buffer
>
> Move crypto ops to a circular buffer to retain them when the
> cryptodev/eventdev are temporarily full.
>
> Signed-off-by: Ganapati Kundapura
>
> ---
> v5:
> * Add branch prediction to if conditions.
>
> v4:
> * Retain the non-enqueued crypto ops in the circular buffer to
>   process later, and stop dequeuing from the eventdev until all
>   the crypto ops are enqueued to the cryptodev.
>
>   Check space in the circular buffer and stop dequeuing from the
>   eventdev if some ops failed to flush to the cdev and no space
>   for another batch is available in the circular buffer.
>
>   Enable dequeue from the eventdev after all the ops are flushed.
>
> v3:
> * Update eca_ops_buffer_flush() to flush out all the crypto ops
>   from the circular buffer.
> * Remove freeing of failed crypto ops from eca_ops_enqueue_burst()
>   and add them to the circular buffer for later processing.
>
> v2:
> * Reset crypto adapter next cdev id before dequeuing from the
>   next cdev.
> ---
>
> diff --git a/lib/eventdev/rte_event_crypto_adapter.c
> b/lib/eventdev/rte_event_crypto_adapter.c
> index d840803..0b484f3 100644
> --- a/lib/eventdev/rte_event_crypto_adapter.c
> +++ b/lib/eventdev/rte_event_crypto_adapter.c
> @@ -25,11 +25,27 @@
>  #define CRYPTO_ADAPTER_MEM_NAME_LEN 32
>  #define CRYPTO_ADAPTER_MAX_EV_ENQ_RETRIES 100
>
> +#define CRYPTO_ADAPTER_OPS_BUFFER_SZ (BATCH_SIZE + BATCH_SIZE)
> +#define CRYPTO_ADAPTER_BUFFER_SZ 1024
> +
>  /* Flush an instance's enqueue buffers every CRYPTO_ENQ_FLUSH_THRESHOLD
>   * iterations of eca_crypto_adapter_enq_run()
>   */
>  #define CRYPTO_ENQ_FLUSH_THRESHOLD 1024
>
> +struct crypto_ops_circular_buffer {
> +	/* index of head element in circular buffer */
> +	uint16_t head;
> +	/* index of tail element in circular buffer */
> +	uint16_t tail;
> +	/* number of elements in buffer */
> +	uint16_t count;
> +	/* size of circular buffer */
> +	uint16_t size;
> +	/* Pointer to hold rte_crypto_ops for batching */
> +	struct rte_crypto_op **op_buffer;
> +} __rte_cache_aligned;
> +
>  struct event_crypto_adapter {
>  	/* Event device identifier */
>  	uint8_t eventdev_id;
> @@ -37,6 +53,10 @@ struct event_crypto_adapter {
>  	uint8_t event_port_id;
>  	/* Store event device's implicit release capability */
>  	uint8_t implicit_release_disabled;
> +	/* Flag to indicate backpressure at cryptodev
> +	 * Stop further dequeuing events from eventdev
> +	 */
> +	bool stop_enq_to_cryptodev;
>  	/* Max crypto ops processed in any service function invocation */
>  	uint32_t max_nb;
>  	/* Lock to serialize config updates with service function */
> @@ -47,6 +67,8 @@ struct event_crypto_adapter {
>  	struct crypto_device_info *cdevs;
>  	/* Loop counter to flush crypto ops */
>  	uint16_t transmit_loop_count;
> +	/* Circular buffer for batching crypto ops to eventdev */
> +	struct crypto_ops_circular_buffer ebuf;
>  	/* Per instance stats structure */
>  	struct rte_event_crypto_adapter_stats crypto_stats;
>  	/* Configuration callback for rte_service configuration */
> @@ -93,10 +115,8 @@ struct crypto_device_info {
>  struct crypto_queue_pair_info {
>  	/* Set to indicate queue pair is enabled */
>  	bool qp_enabled;
> -	/* Pointer to hold rte_crypto_ops for batching */
> -	struct rte_crypto_op **op_buffer;
> -	/* No of crypto ops accumulated */
> -	uint8_t len;
> +	/* Circular buffer for batching crypto ops to cdev */
> +	struct crypto_ops_circular_buffer cbuf;
>  } __rte_cache_aligned;
>
>  static struct event_crypto_adapter **event_crypto_adapter;
> @@ -141,6 +161,84 @@ eca_init(void)
>  	return 0;
>  }
>
> +static inline bool
> +eca_circular_buffer_batch_ready(struct crypto_ops_circular_buffer *bufp)
> +{
> +	return bufp->count >= BATCH_SIZE;
> +}
> +
> +static inline bool
> +eca_circular_buffer_space_for_batch(struct crypto_ops_circular_buffer *bufp)
> +{
> +	return (bufp->size - bufp->count) >= BATCH_SIZE;
> +}
> +
> +static inline void
> +eca_circular_buffer_free(struct crypto_ops_circular_buffer *bufp)
> +{
> +	rte_free(bufp->op_buffer);
> +}
> +
> +static inline int
> +eca_circular_buffer_init(const char *name,
> +			 struct crypto_ops_circular_buffer *bufp,
> +			 uint16_t sz)
> +{
> +	bufp->op_buffer = rte_zmalloc(name,
> +				      sizeof(struct rte_crypto_op *) * sz,
> +				      0);
> +	if (bufp->op_buffer == NULL)
> +		return -ENOMEM;
> +
> +	bufp->size = sz;
> +	return 0;
> +}
> +
> +static inline int
> +eca_circular_buffer_add(struct crypto_ops_circular_buffer *bufp,
> +			struct rte_crypto_op *op)
> +{
> +	uint16_t *tailp = &bufp->tail;
> +
> +	bufp->op_buffer[*tailp] = op;
> +	/* circular buffer, go round */
> +	*tailp = (*tailp + 1) % bufp->size;
> +	bufp->count++;
> +
> +	return 0;
> +}
> +
> +static inline int
> +eca_circular_buffer_flush_to_cdev(struct crypto_ops_circular_buffer *bufp,
> +				  uint8_t cdev_id, uint16_t qp_id,
> +				  uint16_t *nb_ops_flushed)
> +{
> +	uint16_t n = 0;
> +	uint16_t *headp = &bufp->head;
> +	uint16_t *tailp = &bufp->tail;
> +	struct rte_crypto_op **ops = bufp->op_buffer;
> +
> +	if (*tailp > *headp)
> +		n = *tailp - *headp;
> +	else if (*tailp < *headp)
> +		n = bufp->size - *headp;
> +	else {
> +		*nb_ops_flushed = 0;
> +		return 0;  /* buffer empty */
> +	}
> +
> +	*nb_ops_flushed = rte_cryptodev_enqueue_burst(cdev_id, qp_id,
> +						      &ops[*headp], n);
> +	bufp->count -= *nb_ops_flushed;
> +	if (!bufp->count) {
> +		*headp = 0;
> +		*tailp = 0;
> +	} else
> +		*headp = (*headp + *nb_ops_flushed) % bufp->size;
> +
> +	return *nb_ops_flushed == n ? 0 : -1;
> +}
> +
>  static inline struct event_crypto_adapter *
>  eca_id_to_adapter(uint8_t id)
>  {
> @@ -237,10 +335,19 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id,
>  		return -ENOMEM;
>  	}
>
> +	if (eca_circular_buffer_init("eca_edev_circular_buffer",
> +				     &adapter->ebuf,
> +				     CRYPTO_ADAPTER_BUFFER_SZ)) {
> +		RTE_EDEV_LOG_ERR("Failed to get memory for eventdev buffer");
> +		rte_free(adapter);
> +		return -ENOMEM;
> +	}
> +
>  	ret = rte_event_dev_info_get(dev_id, &dev_info);
>  	if (ret < 0) {
>  		RTE_EDEV_LOG_ERR("Failed to get info for eventdev %d: %s!",
>  				 dev_id, dev_info.driver_name);
> +		eca_circular_buffer_free(&adapter->ebuf);
>  		rte_free(adapter);
>  		return ret;
>  	}
> @@ -259,6 +366,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id,
>  					socket_id);
>  	if (adapter->cdevs == NULL) {
>  		RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices\n");
> +		eca_circular_buffer_free(&adapter->ebuf);
>  		rte_free(adapter);
>  		return -ENOMEM;
>  	}
> @@ -337,10 +445,10 @@ eca_enq_to_cryptodev(struct event_crypto_adapter *adapter, struct rte_event *ev,
>  	struct crypto_queue_pair_info *qp_info = NULL;
>  	struct rte_crypto_op *crypto_op;
>  	unsigned int i, n;
> -	uint16_t qp_id, len, ret;
> +	uint16_t qp_id, nb_enqueued = 0;
>  	uint8_t cdev_id;
> +	int ret;
>
> -	len = 0;
>  	ret = 0;
>  	n = 0;
>  	stats->event_deq_count += cnt;
> @@ -366,9 +474,7 @@ eca_enq_to_cryptodev(struct event_crypto_adapter *adapter, struct rte_event *ev,
>  				rte_crypto_op_free(crypto_op);
>  				continue;
>  			}
> -			len = qp_info->len;
> -			qp_info->op_buffer[len] = crypto_op;
> -			len++;
> +			eca_circular_buffer_add(&qp_info->cbuf, crypto_op);
>  		} else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
>  				crypto_op->private_data_offset) {
>  			m_data = (union rte_event_crypto_metadata *)
> @@ -382,87 +488,91 @@ eca_enq_to_cryptodev(struct event_crypto_adapter *adapter, struct rte_event *ev,
>  				rte_crypto_op_free(crypto_op);
>  				continue;
>  			}
> -			len = qp_info->len;
> -			qp_info->op_buffer[len] = crypto_op;
> -			len++;
> +			eca_circular_buffer_add(&qp_info->cbuf, crypto_op);
>  		} else {
>  			rte_pktmbuf_free(crypto_op->sym->m_src);
>  			rte_crypto_op_free(crypto_op);
>  			continue;
>  		}
>
> -		if (len == BATCH_SIZE) {
> -			struct rte_crypto_op **op_buffer = qp_info->op_buffer;
> -			ret = rte_cryptodev_enqueue_burst(cdev_id,
> -							  qp_id,
> -							  op_buffer,
> -							  BATCH_SIZE);
> -
> -			stats->crypto_enq_count += ret;
> -
> -			while (ret < len) {
> -				struct rte_crypto_op *op;
> -				op = op_buffer[ret++];
> -				stats->crypto_enq_fail++;
> -				rte_pktmbuf_free(op->sym->m_src);
> -				rte_crypto_op_free(op);
> -			}
> -
> -			len = 0;
> +		if (eca_circular_buffer_batch_ready(&qp_info->cbuf)) {
> +			ret = eca_circular_buffer_flush_to_cdev(&qp_info->cbuf,
> +								cdev_id,
> +								qp_id,
> +								&nb_enqueued);
> +			/**
> +			 * If some crypto ops failed to flush to cdev and
> +			 * space for another batch is not available, stop
> +			 * dequeue from eventdev momentarily
> +			 */
> +			if (unlikely(ret < 0 &&
> +				!eca_circular_buffer_space_for_batch(
> +							&qp_info->cbuf)))
> +				adapter->stop_enq_to_cryptodev = true;
>  		}
>
> -		if (qp_info)
> -			qp_info->len = len;
> -		n += ret;
> +		stats->crypto_enq_count += nb_enqueued;
> +		n += nb_enqueued;
>  	}
>
>  	return n;
>  }
>
>  static unsigned int
> -eca_crypto_enq_flush(struct event_crypto_adapter *adapter)
> +eca_crypto_cdev_flush(struct event_crypto_adapter *adapter,
> +		      uint8_t cdev_id, uint16_t *nb_ops_flushed)
>  {
> -	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
>  	struct crypto_device_info *curr_dev;
>  	struct crypto_queue_pair_info *curr_queue;
> -	struct rte_crypto_op **op_buffer;
>  	struct rte_cryptodev *dev;
> -	uint8_t cdev_id;
> +	uint16_t nb = 0, nb_enqueued = 0;
>  	uint16_t qp;
> -	uint16_t ret;
> -	uint16_t num_cdev = rte_cryptodev_count();
>
> -	ret = 0;
> -	for (cdev_id = 0; cdev_id < num_cdev; cdev_id++) {
> -		curr_dev = &adapter->cdevs[cdev_id];
> -		dev = curr_dev->dev;
> -		if (dev == NULL)
> -			continue;
> -		for (qp = 0; qp < dev->data->nb_queue_pairs; qp++) {
> +	curr_dev = &adapter->cdevs[cdev_id];
> +	if (unlikely(curr_dev == NULL))
> +		return 0;
>
> -			curr_queue = &curr_dev->qpairs[qp];
> -			if (!curr_queue->qp_enabled)
> -				continue;
> +	dev = rte_cryptodev_pmd_get_dev(cdev_id);
> +	for (qp = 0; qp < dev->data->nb_queue_pairs; qp++) {
>
> -			op_buffer = curr_queue->op_buffer;
> -			ret = rte_cryptodev_enqueue_burst(cdev_id,
> -							  qp,
> -							  op_buffer,
> -							  curr_queue->len);
> -			stats->crypto_enq_count += ret;
> -
> -			while (ret < curr_queue->len) {
> -				struct rte_crypto_op *op;
> -				op = op_buffer[ret++];
> -				stats->crypto_enq_fail++;
> -				rte_pktmbuf_free(op->sym->m_src);
> -				rte_crypto_op_free(op);
> -			}
> -			curr_queue->len = 0;
> -		}
> +		curr_queue = &curr_dev->qpairs[qp];
> +		if (unlikely(curr_queue == NULL || !curr_queue->qp_enabled))
> +			continue;
> +
> +		eca_circular_buffer_flush_to_cdev(&curr_queue->cbuf,
> +						  cdev_id,
> +						  qp,
> +						  &nb_enqueued);
> +		*nb_ops_flushed += curr_queue->cbuf.count;
> +		nb += nb_enqueued;
>  	}
>
> -	return ret;
> +	return nb;
> +}
> +
> +static unsigned int
> +eca_crypto_enq_flush(struct event_crypto_adapter *adapter)
> +{
> +	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
> +	uint8_t cdev_id;
> +	uint16_t nb_enqueued = 0;
> +	uint16_t nb_ops_flushed = 0;
> +	uint16_t num_cdev = rte_cryptodev_count();
> +
> +	for (cdev_id = 0; cdev_id < num_cdev; cdev_id++)
> +		nb_enqueued += eca_crypto_cdev_flush(adapter,
> +						     cdev_id,
> +						     &nb_ops_flushed);
> +	/**
> +	 * Enable dequeue from eventdev if all ops from circular
> +	 * buffer flushed to cdev
> +	 */
> +	if (!nb_ops_flushed)
> +		adapter->stop_enq_to_cryptodev = false;
> +
> +	stats->crypto_enq_count += nb_enqueued;
> +
> +	return nb_enqueued;
>  }
>
>  static int
> @@ -480,6 +590,13 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
>  	if (adapter->mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
>  		return 0;
>
> +	if (unlikely(adapter->stop_enq_to_cryptodev)) {
> +		nb_enqueued += eca_crypto_enq_flush(adapter);
> +
> +		if (unlikely(adapter->stop_enq_to_cryptodev))
> +			goto skip_event_dequeue_burst;
> +	}
> +
>  	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
>  		stats->event_poll_count++;
>  		n = rte_event_dequeue_burst(event_dev_id,
> @@ -491,6 +608,8 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
>  		nb_enqueued += eca_enq_to_cryptodev(adapter, ev, n);
>  	}
>
> +skip_event_dequeue_burst:
> +
>  	if ((++adapter->transmit_loop_count &
>  	     (CRYPTO_ENQ_FLUSH_THRESHOLD - 1)) == 0) {
>  		nb_enqueued += eca_crypto_enq_flush(adapter);
> @@ -499,9 +618,9 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
>  	return nb_enqueued;
>  }
>
> -static inline void
> +static inline uint16_t
>  eca_ops_enqueue_burst(struct event_crypto_adapter *adapter,
> -		  struct rte_crypto_op **ops, uint16_t num)
> +		      struct rte_crypto_op **ops, uint16_t num)
>  {
>  	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
>  	union rte_event_crypto_metadata *m_data = NULL;
> @@ -518,6 +637,8 @@ eca_ops_enqueue_burst(struct event_crypto_adapter *adapter,
>  	num = RTE_MIN(num, BATCH_SIZE);
>  	for (i = 0; i < num; i++) {
>  		struct rte_event *ev = &events[nb_ev++];
> +
> +		m_data = NULL;
>  		if (ops[i]->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
>  			m_data = rte_cryptodev_sym_session_get_user_data(
>  					ops[i]->sym->session);
> @@ -548,21 +669,56 @@ eca_ops_enqueue_burst(struct event_crypto_adapter *adapter,
>  						  event_port_id,
>  						  &events[nb_enqueued],
>  						  nb_ev - nb_enqueued);
> +
>  	} while (retry++ < CRYPTO_ADAPTER_MAX_EV_ENQ_RETRIES &&
>  		 nb_enqueued < nb_ev);
>
> -	/* Free mbufs and rte_crypto_ops for failed events */
> -	for (i = nb_enqueued; i < nb_ev; i++) {
> -		struct rte_crypto_op *op = events[i].event_ptr;
> -		rte_pktmbuf_free(op->sym->m_src);
> -		rte_crypto_op_free(op);
> -	}
> -
>  	stats->event_enq_fail_count += nb_ev - nb_enqueued;
>  	stats->event_enq_count += nb_enqueued;
>  	stats->event_enq_retry_count += retry - 1;
> +
> +	return nb_enqueued;
> +}
> +
> +static int
> +eca_circular_buffer_flush_to_evdev(struct event_crypto_adapter *adapter,
> +				   struct crypto_ops_circular_buffer *bufp)
> +{
> +	uint16_t n = 0, nb_ops_flushed;
> +	uint16_t *headp = &bufp->head;
> +	uint16_t *tailp = &bufp->tail;
> +	struct rte_crypto_op **ops = bufp->op_buffer;
> +
> +	if (*tailp > *headp)
> +		n = *tailp - *headp;
> +	else if (*tailp < *headp)
> +		n = bufp->size - *headp;
> +	else
> +		return 0;  /* buffer empty */
> +
> +	nb_ops_flushed = eca_ops_enqueue_burst(adapter, ops, n);
> +	bufp->count -= nb_ops_flushed;
> +	if (!bufp->count) {
> +		*headp = 0;
> +		*tailp = 0;
> +		return 0;  /* buffer empty */
> +	}
> +
> +	*headp = (*headp + nb_ops_flushed) % bufp->size;
> +	return 1;
> +}
> +
> +static void
> +eca_ops_buffer_flush(struct event_crypto_adapter *adapter)
> +{
> +	if (likely(adapter->ebuf.count == 0))
> +		return;
> +
> +	while (eca_circular_buffer_flush_to_evdev(adapter,
> +						  &adapter->ebuf))
> +		;
> +}
> +
>  static inline unsigned int
>  eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
>  			   unsigned int max_deq)
> @@ -571,7 +727,7 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
>  	struct crypto_device_info *curr_dev;
>  	struct crypto_queue_pair_info *curr_queue;
>  	struct rte_crypto_op *ops[BATCH_SIZE];
> -	uint16_t n, nb_deq;
> +	uint16_t n, nb_deq, nb_enqueued, i;
>  	struct rte_cryptodev *dev;
>  	uint8_t cdev_id;
>  	uint16_t qp, dev_qps;
> @@ -579,16 +735,20 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
>  	uint16_t num_cdev = rte_cryptodev_count();
>
>  	nb_deq = 0;
> +	eca_ops_buffer_flush(adapter);
> +
>  	do {
> -		uint16_t queues = 0;
>  		done = true;
>
>  		for (cdev_id = adapter->next_cdev_id;
>  		     cdev_id < num_cdev; cdev_id++) {
> +			uint16_t queues = 0;
> +
>  			curr_dev = &adapter->cdevs[cdev_id];
>  			dev = curr_dev->dev;
> -			if (dev == NULL)
> +			if (unlikely(dev == NULL))
>  				continue;
> +
>  			dev_qps = dev->data->nb_queue_pairs;
>
>  			for (qp = curr_dev->next_queue_pair_id;
> @@ -596,7 +756,8 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
>  			     queues++) {
>
>  				curr_queue = &curr_dev->qpairs[qp];
> -				if (!curr_queue->qp_enabled)
> +				if (unlikely(curr_queue == NULL ||
> +				    !curr_queue->qp_enabled))
>  					continue;
>
>  				n = rte_cryptodev_dequeue_burst(cdev_id, qp,
> @@ -605,11 +766,27 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
>  					continue;
>
>  				done = false;
> +				nb_enqueued = 0;
> +
>  				stats->crypto_deq_count += n;
> -				eca_ops_enqueue_burst(adapter, ops, n);
> +
> +				if (unlikely(!adapter->ebuf.count))
> +					nb_enqueued = eca_ops_enqueue_burst(
> +							adapter, ops, n);
> +
> +				if (likely(nb_enqueued == n))
> +					goto check;
> +
> +				/* Failed to enqueue events case */
> +				for (i = nb_enqueued; i < n; i++)
> +					eca_circular_buffer_add(
> +						&adapter->ebuf,
> +						ops[nb_enqueued]);
> +
> +check:
>  				nb_deq += n;
>
> -				if (nb_deq > max_deq) {
> +				if (nb_deq >= max_deq) {
>  					if ((qp + 1) == dev_qps) {
>  						adapter->next_cdev_id =
>  							(cdev_id + 1)
> @@ -622,6 +799,7 @@ eca_crypto_adapter_deq_run(struct event_crypto_adapter *adapter,
>  				}
>  			}
>  		}
> +		adapter->next_cdev_id = 0;
>  	} while (done == false);
>  	return nb_deq;
>  }
> @@ -751,11 +929,12 @@ eca_add_queue_pair(struct event_crypto_adapter *adapter, uint8_t cdev_id,
>  		return -ENOMEM;
>
>  	qpairs = dev_info->qpairs;
> -	qpairs->op_buffer = rte_zmalloc_socket(adapter->mem_name,
> -					BATCH_SIZE *
> -					sizeof(struct rte_crypto_op *),
> -					0, adapter->socket_id);
> -	if (!qpairs->op_buffer) {
> +
> +	if (eca_circular_buffer_init("eca_cdev_circular_buffer",
> +				     &qpairs->cbuf,
> +				     CRYPTO_ADAPTER_OPS_BUFFER_SZ)) {
> +		RTE_EDEV_LOG_ERR("Failed to get memory for cryptodev "
> +				 "buffer");
> +		rte_free(qpairs);
> +		return -ENOMEM;
> +	}
> --
> 2.6.4