From mboxrd@z Thu Jan 1 00:00:00 1970
From: Akhil Goyal
To: Suanming Mou, Matan Azrad
CC: dev@dpdk.org
Subject: RE: [EXTERNAL] [PATCH v2 1/2] crypto/mlx5: optimize AES-GCM IPsec operation
Date: Fri, 14 Jun 2024 06:49:21 +0000
References: <20240530072413.1602343-1-suanmingm@nvidia.com>
 <20240614003031.2006131-1-suanmingm@nvidia.com>
 <20240614003031.2006131-2-suanmingm@nvidia.com>
In-Reply-To: <20240614003031.2006131-2-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

> To optimize AES-GCM IPsec operation within crypto/mlx5,
> the DPDK API typically supplies AES_GCM AAD/Payload/Digest
> in separate locations, potentially disrupting their
> contiguous layout. In cases where the memory layout fails
> to meet hardware (HW) requirements, an UMR WQE is initiated
> ahead of the GCM's GGA WQE to establish a continuous
> AAD/Payload/Digest virtual memory space for the HW MMU.
>
> For IPsec scenarios, where the memory layout consistently
> adheres to the fixed order of AAD/IV/Payload/Digest,
> directly shrinking memory for AAD proves more efficient
> than preparing a UMR WQE. To address this, a new devarg
> "crypto_mode" with mode "ipsec_opt" is introduced in the
> commit, offering an optimization hint specifically for
> IPsec cases. When enabled, the PMD copies AAD directly
> before Payload in the enqueue_burst function instead of
> employing the UMR WQE.
> Subsequently, in the dequeue_burst
> function, the overridden IV before Payload is restored
> from the GGA WQE. It's crucial for users to avoid utilizing
> the input mbuf data during processing.

This seems very specific to mlx5 and is not in line with the expectations
of the cryptodev APIs. It seems you are asking user applications to be
altered to accommodate this in order to support IPsec.
The cryptodev APIs are for generic crypto processing of data as defined in
rte_crypto_op. With your proposed changes, it seems the behavior of the
crypto APIs will be different in the case of mlx5, which I believe is not
correct.
Is it not possible for you to use the rte_security IPsec path?
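Just to make sure I understand the contract being introduced: the
application has to submit ops whose AAD, IV, payload and digest are
already contiguous inside a single-segment mbuf, roughly as below. This
is only a sketch of my reading of the commit message and the doc update;
the helper name and parameters are illustrative and not part of the patch.

#include <rte_crypto.h>
#include <rte_mbuf.h>

/*
 * Assumed ipsec_opt layout inside one single-segment mbuf:
 *
 *   aad.data -> | AAD (ESP SPI + SN) | ESP IV (0..16 B) | payload | digest |
 *                                                        ^ aead.data.offset
 */
static void
fill_ipsec_opt_op(struct rte_crypto_op *op, struct rte_mbuf *m,
		  uint32_t esp_off, uint32_t aad_len, uint32_t iv_len,
		  uint32_t pay_len)
{
	struct rte_crypto_sym_op *sym = op->sym;

	sym->m_src = m;
	/* AAD is read from the ESP header inside the packet itself. */
	sym->aead.aad.data = rte_pktmbuf_mtod_offset(m, uint8_t *, esp_off);
	sym->aead.aad.phys_addr = rte_pktmbuf_iova_offset(m, esp_off);
	/* Payload immediately follows AAD plus the packet IV. */
	sym->aead.data.offset = esp_off + aad_len + iv_len;
	sym->aead.data.length = pay_len;
	/* Digest immediately follows the payload. */
	sym->aead.digest.data = rte_pktmbuf_mtod_offset(m, uint8_t *,
			sym->aead.data.offset + pay_len);
	sym->aead.digest.phys_addr = rte_pktmbuf_iova_offset(m,
			sym->aead.data.offset + pay_len);
}

If that is the expectation, it is effectively a layout requirement that
generic cryptodev applications do not have today.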
>
> Signed-off-by: Suanming Mou
> Acked-by: Matan Azrad
> ---
>  doc/guides/cryptodevs/mlx5.rst         |  20 +++
>  doc/guides/rel_notes/release_24_07.rst |   4 +
>  drivers/crypto/mlx5/mlx5_crypto.c      |  24 ++-
>  drivers/crypto/mlx5/mlx5_crypto.h      |  19 +++
>  drivers/crypto/mlx5/mlx5_crypto_gcm.c  | 220 +++++++++++++++++++++++--
>  5 files changed, 265 insertions(+), 22 deletions(-)
>
> diff --git a/doc/guides/cryptodevs/mlx5.rst b/doc/guides/cryptodevs/mlx5.rst
> index 8c05759ae7..320f57bb02 100644
> --- a/doc/guides/cryptodevs/mlx5.rst
> +++ b/doc/guides/cryptodevs/mlx5.rst
> @@ -185,6 +185,25 @@ for an additional list of options shared with other mlx5
>  drivers.
>
>    Maximum number of mbuf chain segments(src or dest), default value is 8.
>
> +- ``crypto_mode`` parameter [string]
> +
> +  Only valid in AES-GCM mode. Will be ignored in AES-XTS mode.
> +
> +  - ``full_capable``
> +    Use UMR WQE for inputs not as contiguous AAD/Payload/Digest.
> +
> +  - ``ipsec_opt``
> +    Do software AAD shrink for inputs as contiguous AAD/IV/Payload/Digest.
> +    The PMD relies on the IPsec layout, expecting the memory to align with
> +    AAD/IV/Payload/Digest in a contiguous manner, all within a single mbuf
> +    for any given OP.
> +    The PMD extracts the ESP.IV bytes from the input memory and binds the
> +    AAD (ESP SPI and SN) to the payload during enqueue OP. It then restores
> +    the original memory layout in the decrypt OP.
> +    ESP.IV size supported range is [0,16] bytes.
> +
> +  Set to ``full_capable`` by default.
> +
>
>  Supported NICs
>  --------------
> @@ -205,6 +224,7 @@ Limitations
>    values.
>  - AES-GCM is supported only on BlueField-3.
>  - AES-GCM supports only key import plaintext mode.
> +- AES-GCM ``ipsec_opt`` mode does not support multi-segment mode.
>
>
>  Prerequisites
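A usage example in the guide would help here. If I read the devargs
handling correctly, enabling the new mode would look something like the
line below (PCI address and the existing class=crypto/algo devargs are my
assumption; please correct me if the syntax differs):

    -a <pci_bdf>,class=crypto,algo=1,crypto_mode=ipsec_opt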
> diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
> index 9e06dcab17..0a5d8f771c 100644
> --- a/doc/guides/rel_notes/release_24_07.rst
> +++ b/doc/guides/rel_notes/release_24_07.rst
> @@ -64,6 +64,10 @@ New Features
>
>    Added a new crypto driver for AMD Pensando hardware accelerators.
>
> +* **Updated NVIDIA mlx5 crypto driver.**
> +
> +  * Added AES-GCM IPsec operation optimization.
> +
>
>  Removed Items
>  -------------
> diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
> index 26bd4087da..d49a375dcb 100644
> --- a/drivers/crypto/mlx5/mlx5_crypto.c
> +++ b/drivers/crypto/mlx5/mlx5_crypto.c
> @@ -25,10 +25,6 @@
>
>  #define MLX5_CRYPTO_FEATURE_FLAGS(wrapped_mode) \
>  	(RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_HW_ACCELERATED | \
> -	 RTE_CRYPTODEV_FF_IN_PLACE_SGL | RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | \
> -	 RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT | \
> -	 RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT | \
> -	 RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT | \
>  	 (wrapped_mode ? RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY : 0) | \
>  	 RTE_CRYPTODEV_FF_CIPHER_MULTIPLE_DATA_UNITS)
>
> @@ -60,6 +56,14 @@ mlx5_crypto_dev_infos_get(struct rte_cryptodev *dev,
>  		dev_info->driver_id = mlx5_crypto_driver_id;
>  		dev_info->feature_flags =
>  			MLX5_CRYPTO_FEATURE_FLAGS(priv->is_wrapped_mode);
> +		if (!mlx5_crypto_is_ipsec_opt(priv))
> +			dev_info->feature_flags |=
> +				RTE_CRYPTODEV_FF_IN_PLACE_SGL |
> +				RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
> +				RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
> +				RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
> +				RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT;
> +
>  		dev_info->capabilities = priv->caps;
>  		dev_info->max_nb_queue_pairs = MLX5_CRYPTO_MAX_QPS;
>  		if (priv->caps->sym.xform_type == RTE_CRYPTO_SYM_XFORM_AEAD) {
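So with ipsec_opt the SGL and OOP feature flags are simply not advertised.
I assume applications are expected to detect that at runtime before
deciding how to lay out their mbufs, along these lines (illustrative only,
not from the patch):

#include <rte_cryptodev.h>

/* Check whether the device still advertises in-place SGL support;
 * with crypto_mode=ipsec_opt this would return 0 per this patch. */
static int
dev_supports_in_place_sgl(uint8_t dev_id)
{
	struct rte_cryptodev_info info;

	rte_cryptodev_info_get(dev_id, &info);
	return (info.feature_flags & RTE_CRYPTODEV_FF_IN_PLACE_SGL) != 0;
}

Is that the intended way for applications to discover the restriction?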
> @@ -249,6 +253,16 @@ mlx5_crypto_args_check_handler(const char *key, const char *val, void *opaque)
>  		fclose(file);
>  		devarg_prms->login_devarg = true;
>  		return 0;
> +	} else if (strcmp(key, "crypto_mode") == 0) {
> +		if (strcmp(val, "full_capable") == 0) {
> +			devarg_prms->crypto_mode = MLX5_CRYPTO_FULL_CAPABLE;
> +		} else if (strcmp(val, "ipsec_opt") == 0) {
> +			devarg_prms->crypto_mode = MLX5_CRYPTO_IPSEC_OPT;
> +		} else {
> +			DRV_LOG(ERR, "Invalid crypto mode: %s", val);
> +			rte_errno = EINVAL;
> +			return -rte_errno;
> +		}
>  	}
>  	errno = 0;
>  	tmp = strtoul(val, NULL, 0);
> @@ -294,6 +308,7 @@ mlx5_crypto_parse_devargs(struct mlx5_kvargs_ctrl *mkvlist,
>  		"max_segs_num",
>  		"wcs_file",
>  		"algo",
> +		"crypto_mode",
>  		NULL,
>  	};
>
> @@ -379,6 +394,7 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
>  	priv->crypto_dev = crypto_dev;
>  	priv->is_wrapped_mode = wrapped_mode;
>  	priv->max_segs_num = devarg_prms.max_segs_num;
> +	priv->crypto_mode = devarg_prms.crypto_mode;
>  	/* Init and override AES-GCM configuration. */
>  	if (devarg_prms.is_aes_gcm) {
>  		ret = mlx5_crypto_gcm_init(priv);
> diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
> index 5432484f80..547bb490e2 100644
> --- a/drivers/crypto/mlx5/mlx5_crypto.h
> +++ b/drivers/crypto/mlx5/mlx5_crypto.h
> @@ -25,6 +25,16 @@
>  			    MLX5_WSEG_SIZE)
>  #define MLX5_CRYPTO_GCM_MAX_AAD 64
>  #define MLX5_CRYPTO_GCM_MAX_DIGEST 16
> +#define MLX5_CRYPTO_GCM_IPSEC_IV_SIZE 16
> +
> +enum mlx5_crypto_mode {
> +	MLX5_CRYPTO_FULL_CAPABLE,
> +	MLX5_CRYPTO_IPSEC_OPT,
> +};
> +
> +struct mlx5_crypto_ipsec_mem {
> +	uint8_t mem[MLX5_CRYPTO_GCM_IPSEC_IV_SIZE];
> +} __rte_packed;
>
>  struct mlx5_crypto_priv {
>  	TAILQ_ENTRY(mlx5_crypto_priv) next;
> @@ -45,6 +55,7 @@ struct mlx5_crypto_priv {
>  	uint16_t umr_wqe_stride;
>  	uint16_t max_rdmar_ds;
>  	uint32_t is_wrapped_mode:1;
> +	enum mlx5_crypto_mode crypto_mode;
>  };
>
>  struct mlx5_crypto_qp {
> @@ -57,6 +68,7 @@ struct mlx5_crypto_qp {
>  	struct mlx5_devx_obj **mkey; /* WQE's indirect mekys. */
>  	struct mlx5_klm *klm_array;
>  	union mlx5_gga_crypto_opaque *opaque_addr;
> +	struct mlx5_crypto_ipsec_mem *ipsec_mem;
>  	struct mlx5_mr_ctrl mr_ctrl;
>  	struct mlx5_pmd_mr mr;
>  	/* Crypto QP. */
> @@ -93,6 +105,7 @@ struct mlx5_crypto_devarg_params {
>  	uint64_t keytag;
>  	uint32_t max_segs_num;
>  	uint32_t is_aes_gcm:1;
> +	enum mlx5_crypto_mode crypto_mode;
>  };
>
>  struct mlx5_crypto_session {
> @@ -139,6 +152,12 @@ struct mlx5_crypto_dek_ctx {
>  	struct mlx5_crypto_priv *priv;
>  };
>
> +static __rte_always_inline bool
> +mlx5_crypto_is_ipsec_opt(struct mlx5_crypto_priv *priv)
> +{
> +	return priv->crypto_mode == MLX5_CRYPTO_IPSEC_OPT;
> +}
> +
>  typedef void *(*mlx5_crypto_mkey_update_t)(struct mlx5_crypto_priv *priv,
>  					   struct mlx5_crypto_qp *qp,
>  					   uint32_t idx);
> diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
> index fc6ade6711..189e798d1d 100644
> --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c
> +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
> @@ -181,6 +181,7 @@ mlx5_crypto_sym_gcm_session_configure(struct rte_cryptodev *dev,
>  		DRV_LOG(ERR, "Only AES-GCM algorithm is supported.");
>  		return -ENOTSUP;
>  	}
> +
>  	if (aead->op == RTE_CRYPTO_AEAD_OP_ENCRYPT)
>  		op_type = MLX5_CRYPTO_OP_TYPE_ENCRYPTION;
>  	else
> @@ -235,6 +236,7 @@ mlx5_crypto_gcm_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
>  	}
>  	mlx5_crypto_indirect_mkeys_release(qp, qp->entries_n);
>  	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
> +	rte_free(qp->ipsec_mem);
>  	rte_free(qp);
>  	dev->data->queue_pairs[qp_id] = NULL;
>  	return 0;
> @@ -321,13 +323,16 @@ mlx5_crypto_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	uint32_t log_ops_n = rte_log2_u32(qp_conf->nb_descriptors);
>  	uint32_t entries = RTE_BIT32(log_ops_n);
>  	uint32_t alloc_size = sizeof(*qp);
> +	uint32_t extra_obj_size = 0;
>  	size_t mr_size, opaq_size;
>  	void *mr_buf;
>  	int ret;
>
> +	if (!mlx5_crypto_is_ipsec_opt(priv))
> +		extra_obj_size = sizeof(struct mlx5_devx_obj *);
>  	alloc_size = RTE_ALIGN(alloc_size, RTE_CACHE_LINE_SIZE);
>  	alloc_size += (sizeof(struct rte_crypto_op *) +
> -		       sizeof(struct mlx5_devx_obj *)) * entries;
> +		       extra_obj_size) * entries;
>  	qp = rte_zmalloc_socket(__func__, alloc_size, RTE_CACHE_LINE_SIZE,
>  				socket_id);
>  	if (qp == NULL) {
> @@ -370,7 +375,7 @@ mlx5_crypto_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	 * Triple the CQ size as UMR QP which contains UMR and SEND_EN WQE
>  	 * will share this CQ .
>  	 */
> -	qp->cq_entries_n = rte_align32pow2(entries * 3);
> +	qp->cq_entries_n = rte_align32pow2(entries * (mlx5_crypto_is_ipsec_opt(priv) ? 1 : 3));
>  	ret = mlx5_devx_cq_create(priv->cdev->ctx, &qp->cq_obj,
>  				  rte_log2_u32(qp->cq_entries_n),
>  				  &cq_attr, socket_id);
> @@ -384,7 +389,7 @@ mlx5_crypto_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	qp_attr.num_of_send_wqbbs = entries;
>  	qp_attr.mmo = attr->crypto_mmo.crypto_mmo_qp;
>  	/* Set MMO QP as follower as the input data may depend on UMR. */
> -	qp_attr.cd_slave_send = 1;
> +	qp_attr.cd_slave_send = !mlx5_crypto_is_ipsec_opt(priv);
>  	ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->qp_obj,
>  				  qp_attr.num_of_send_wqbbs * MLX5_WQE_SIZE,
>  				  &qp_attr, socket_id);
> @@ -397,18 +402,28 @@ mlx5_crypto_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	if (ret)
>  		goto err;
>  	qp->ops = (struct rte_crypto_op **)(qp + 1);
> -	qp->mkey = (struct mlx5_devx_obj **)(qp->ops + entries);
> -	if (mlx5_crypto_gcm_umr_qp_setup(dev, qp, socket_id)) {
> -		DRV_LOG(ERR, "Failed to setup UMR QP.");
> -		goto err;
> -	}
> -	DRV_LOG(INFO, "QP %u: SQN=0x%X CQN=0x%X entries num = %u",
> -		(uint32_t)qp_id, qp->qp_obj.qp->id, qp->cq_obj.cq->id, entries);
> -	if (mlx5_crypto_indirect_mkeys_prepare(priv, qp, &mkey_attr,
> -					       mlx5_crypto_gcm_mkey_klm_update)) {
> -		DRV_LOG(ERR, "Cannot allocate indirect memory regions.");
> -		rte_errno = ENOMEM;
> -		goto err;
> +	if (!mlx5_crypto_is_ipsec_opt(priv)) {
> +		qp->mkey = (struct mlx5_devx_obj **)(qp->ops + entries);
> +		if (mlx5_crypto_gcm_umr_qp_setup(dev, qp, socket_id)) {
> +			DRV_LOG(ERR, "Failed to setup UMR QP.");
> +			goto err;
> +		}
> +		DRV_LOG(INFO, "QP %u: SQN=0x%X CQN=0x%X entries num = %u",
> +			(uint32_t)qp_id, qp->qp_obj.qp->id, qp->cq_obj.cq->id, entries);
> +		if (mlx5_crypto_indirect_mkeys_prepare(priv, qp, &mkey_attr,
> +						       mlx5_crypto_gcm_mkey_klm_update)) {
> +			DRV_LOG(ERR, "Cannot allocate indirect memory regions.");
> +			rte_errno = ENOMEM;
> +			goto err;
> +		}
> +	} else {
> +		extra_obj_size = sizeof(struct mlx5_crypto_ipsec_mem) * entries;
> +		qp->ipsec_mem = rte_calloc(__func__, (size_t)1, extra_obj_size,
> +					   RTE_CACHE_LINE_SIZE);
> +		if (!qp->ipsec_mem) {
> +			DRV_LOG(ERR, "Failed to allocate ipsec_mem.");
> +			goto err;
> +		}
>  	}
>  	dev->data->queue_pairs[qp_id] = qp;
>  	return 0;
> @@ -974,6 +989,168 @@ mlx5_crypto_gcm_dequeue_burst(void *queue_pair,
>  	return op_num;
>  }
>
> +static uint16_t
> +mlx5_crypto_gcm_ipsec_enqueue_burst(void *queue_pair,
> +				    struct rte_crypto_op **ops,
> +				    uint16_t nb_ops)
> +{
> +	struct mlx5_crypto_qp *qp = queue_pair;
> +	struct mlx5_crypto_session *sess;
> +	struct mlx5_crypto_priv *priv = qp->priv;
> +	struct mlx5_crypto_gcm_data gcm_data;
> +	struct rte_crypto_op *op;
> +	struct rte_mbuf *m_src;
> +	uint16_t mask = qp->entries_n - 1;
> +	uint16_t remain = qp->entries_n - (qp->pi - qp->qp_ci);
> +	uint32_t idx;
> +	uint32_t pkt_iv_len;
> +	uint8_t *payload;
> +
> +	if (remain < nb_ops)
> +		nb_ops = remain;
> +	else
> +		remain = nb_ops;
> +	if (unlikely(remain == 0))
> +		return 0;
> +	do {
> +		op = *ops++;
> +		sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session);
> +		idx = qp->pi & mask;
> +		m_src = op->sym->m_src;
> +		MLX5_ASSERT(m_src->nb_segs == 1);
> +		payload = rte_pktmbuf_mtod_offset(m_src, void *, op->sym->aead.data.offset);
> +		gcm_data.src_addr = RTE_PTR_SUB(payload, sess->aad_len);
> +		/*
> +		 * IPsec IV between payload and AAD should be equal or less than
> +		 * MLX5_CRYPTO_GCM_IPSEC_IV_SIZE.
> +		 */
> +		pkt_iv_len = RTE_PTR_DIFF(payload,
> +				RTE_PTR_ADD(op->sym->aead.aad.data, sess->aad_len));
> +		MLX5_ASSERT(pkt_iv_len <= MLX5_CRYPTO_GCM_IPSEC_IV_SIZE);
> +		gcm_data.src_bytes = op->sym->aead.data.length + sess->aad_len;
> +		gcm_data.src_mkey = mlx5_mr_mb2mr(&qp->mr_ctrl, op->sym->m_src);
> +		/* OOP mode is not supported. */
> +		MLX5_ASSERT(!op->sym->m_dst || op->sym->m_dst == m_src);
> +		gcm_data.dst_addr = gcm_data.src_addr;
> +		gcm_data.dst_mkey = gcm_data.src_mkey;
> +		gcm_data.dst_bytes = gcm_data.src_bytes;
> +		/* Digest should follow payload. */
> +		MLX5_ASSERT(RTE_PTR_ADD
> +			(gcm_data.src_addr, sess->aad_len + op->sym->aead.data.length) ==
> +			op->sym->aead.digest.data);
> +		if (sess->op_type == MLX5_CRYPTO_OP_TYPE_ENCRYPTION)
> +			gcm_data.dst_bytes += sess->tag_len;
> +		else
> +			gcm_data.src_bytes += sess->tag_len;
> +		mlx5_crypto_gcm_wqe_set(qp, op, idx, &gcm_data);
> +		/*
> +		 * All the data such as IV have been copied above,
> +		 * shrink AAD before payload. First backup the mem,
> +		 * then do shrink.
> +		 */
> +		rte_memcpy(&qp->ipsec_mem[idx],
> +			   RTE_PTR_SUB(payload, MLX5_CRYPTO_GCM_IPSEC_IV_SIZE),
> +			   MLX5_CRYPTO_GCM_IPSEC_IV_SIZE);
> +		/* If no memory overlap, do copy directly, otherwise memmove. */
> +		if (likely(pkt_iv_len >= sess->aad_len))
> +			rte_memcpy(gcm_data.src_addr, op->sym->aead.aad.data, sess->aad_len);
> +		else
> +			memmove(gcm_data.src_addr, op->sym->aead.aad.data, sess->aad_len);
> +		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
> +		qp->ops[idx] = op;
> +		qp->pi++;
> +	} while (--remain);
> +	qp->stats.enqueued_count += nb_ops;
> +	/* Update the last GGA cseg with COMP. */
> +	((struct mlx5_wqe_cseg *)qp->wqe)->flags =
> +		RTE_BE32(MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET);
> +	mlx5_doorbell_ring(&priv->uar.bf_db, *(volatile uint64_t *)qp->wqe,
> +			   qp->pi, &qp->qp_obj.db_rec[MLX5_SND_DBR],
> +			   !priv->uar.dbnc);
> +	return nb_ops;
> +}
> +
> +static __rte_always_inline void
> +mlx5_crypto_gcm_restore_ipsec_mem(struct mlx5_crypto_qp *qp,
> +				  uint16_t orci,
> +				  uint16_t rci,
> +				  uint16_t op_mask)
> +{
> +	uint32_t idx;
> +	struct mlx5_crypto_session *sess;
> +	struct rte_crypto_op *op;
> +	struct rte_mbuf *m_src;
> +	uint8_t *payload;
> +
> +	while (orci != rci) {
> +		idx = orci & op_mask;
> +		op = qp->ops[idx];
> +		sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session);
> +		m_src = op->sym->m_src;
> +		payload = rte_pktmbuf_mtod_offset(m_src, void *,
> +						  op->sym->aead.data.offset);
> +		/* Restore the IPsec memory. */
> +		if (unlikely(sess->aad_len > MLX5_CRYPTO_GCM_IPSEC_IV_SIZE))
> +			memmove(op->sym->aead.aad.data,
> +				RTE_PTR_SUB(payload, sess->aad_len), sess->aad_len);
> +		rte_memcpy(RTE_PTR_SUB(payload, MLX5_CRYPTO_GCM_IPSEC_IV_SIZE),
> +			   &qp->ipsec_mem[idx], MLX5_CRYPTO_GCM_IPSEC_IV_SIZE);
> +		orci++;
> +	}
> +}
> +
> +static uint16_t
> +mlx5_crypto_gcm_ipsec_dequeue_burst(void *queue_pair,
> +				    struct rte_crypto_op **ops,
> +				    uint16_t nb_ops)
> +{
> +	struct mlx5_crypto_qp *qp = queue_pair;
> +	volatile struct mlx5_cqe *restrict cqe;
> +	const unsigned int cq_size = qp->cq_entries_n;
> +	const unsigned int mask = cq_size - 1;
> +	const unsigned int op_mask = qp->entries_n - 1;
> +	uint32_t idx;
> +	uint32_t next_idx = qp->cq_ci & mask;
> +	uint16_t reported_ci = qp->reported_ci;
> +	uint16_t qp_ci = qp->qp_ci;
> +	const uint16_t max = RTE_MIN((uint16_t)(qp->pi - reported_ci), nb_ops);
> +	uint16_t op_num = 0;
> +	int ret;
> +
> +	if (unlikely(max == 0))
> +		return 0;
> +	while (qp_ci - reported_ci < max) {
> +		idx = next_idx;
> +		next_idx = (qp->cq_ci + 1) & mask;
> +		cqe = &qp->cq_obj.cqes[idx];
> +		ret = check_cqe(cqe, cq_size, qp->cq_ci);
> +		if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
> +			if (unlikely(ret != MLX5_CQE_STATUS_HW_OWN))
> +				mlx5_crypto_gcm_cqe_err_handle(qp,
> +						qp->ops[reported_ci & op_mask]);
> +			break;
> +		}
> +		qp_ci = rte_be_to_cpu_16(cqe->wqe_counter) + 1;
> +		qp->cq_ci++;
> +	}
> +	/* If wqe_counter changed, means CQE handled. */
> +	if (likely(qp->qp_ci != qp_ci)) {
> +		qp->qp_ci = qp_ci;
> +		rte_io_wmb();
> +		qp->cq_obj.db_rec[0] = rte_cpu_to_be_32(qp->cq_ci);
> +	}
> +	/* If reported_ci is not same with qp_ci, means op retrieved. */
> +	if (qp_ci != reported_ci) {
> +		op_num = RTE_MIN((uint16_t)(qp_ci - reported_ci), max);
> +		reported_ci += op_num;
> +		mlx5_crypto_gcm_restore_ipsec_mem(qp, qp->reported_ci, reported_ci, op_mask);
> +		mlx5_crypto_gcm_fill_op(qp, ops, qp->reported_ci, reported_ci, op_mask);
> +		qp->stats.dequeued_count += op_num;
> +		qp->reported_ci = reported_ci;
> +	}
> +	return op_num;
> +}
> +
>  int
>  mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv)
>  {
> @@ -987,9 +1164,16 @@ mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv)
>  	mlx5_os_set_reg_mr_cb(&priv->reg_mr_cb, &priv->dereg_mr_cb);
>  	dev_ops->queue_pair_setup = mlx5_crypto_gcm_qp_setup;
>  	dev_ops->queue_pair_release = mlx5_crypto_gcm_qp_release;
> -	crypto_dev->dequeue_burst = mlx5_crypto_gcm_dequeue_burst;
> -	crypto_dev->enqueue_burst = mlx5_crypto_gcm_enqueue_burst;
> -	priv->max_klm_num = RTE_ALIGN((priv->max_segs_num + 1) * 2 + 1, MLX5_UMR_KLM_NUM_ALIGN);
> +	if (mlx5_crypto_is_ipsec_opt(priv)) {
> +		crypto_dev->dequeue_burst = mlx5_crypto_gcm_ipsec_dequeue_burst;
> +		crypto_dev->enqueue_burst = mlx5_crypto_gcm_ipsec_enqueue_burst;
> +		priv->max_klm_num = 0;
> +	} else {
> +		crypto_dev->dequeue_burst = mlx5_crypto_gcm_dequeue_burst;
> +		crypto_dev->enqueue_burst = mlx5_crypto_gcm_enqueue_burst;
> +		priv->max_klm_num = RTE_ALIGN((priv->max_segs_num + 1) * 2 + 1,
> +					      MLX5_UMR_KLM_NUM_ALIGN);
> +	}
>  	/* Generate GCM capability. */
>  	ret = mlx5_crypto_generate_gcm_cap(&cdev->config.hca_attr.crypto_mmo,
>  					   mlx5_crypto_gcm_caps);
> --
> 2.34.1
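One more note on the in-flight constraint mentioned in the commit log
(users must not touch the input mbuf data while ops are in flight). From
the application side I understand it amounts to the following, using the
standard burst calls (sketch of my understanding only):

#include <rte_cryptodev.h>

/* Between enqueue and dequeue the PMD has shrunk the AAD over the ESP IV
 * bytes inside m_src, so the packet must not be read until the op is
 * dequeued, at which point the PMD restores the original bytes. */
static void
process_gcm_ipsec_burst(uint8_t dev_id, uint16_t qp_id,
			struct rte_crypto_op **ops, uint16_t nb)
{
	uint16_t enq = rte_cryptodev_enqueue_burst(dev_id, qp_id, ops, nb);
	uint16_t deq = 0;

	/* Do not touch ops[i]->sym->m_src contents here. */
	while (deq < enq)
		deq += rte_cryptodev_dequeue_burst(dev_id, qp_id,
						   ops + deq, enq - deq);
	/* The original AAD/IV bytes are back in place from here on. */
}

That is a strong contract to put on the generic cryptodev path, which is
why the rte_security IPsec path still looks like the more natural fit to me.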