* [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom's FlexSparc devices
@ 2020-08-11 14:58 Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
` (8 more replies)
0 siblings, 9 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Hi,
This patchset contains support for crypto offload on Broadcom's
Stingray/Stingray2 SoCs, which have a FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used throughout the patchset.
The patchset progressively adds the major modules as below.
a) Detection of the platform device based on known registered platforms, and attachment via VFIO.
b) Creation of the Cryptodevice.
c) Addition of session handling.
d) Addition of the Cryptodevice to the Cryptodev test framework.
The patchset has been tested on the above-mentioned SoCs.
Regards,
Vikas
Vikas Gupta (8):
crypto/bcmfs: add BCMFS driver
crypto/bcmfs: add vfio support
crypto/bcmfs: add apis for queue pair management
crypto/bcmfs: add hw queue pair operations
crypto/bcmfs: create a symmetric cryptodev
crypto/bcmfs: add session handling and capabilities
crypto/bcmfs: add crypto h/w module
crypto/bcmfs: add crypto pmd into cryptodev test
MAINTAINERS | 7 +
app/test/test_cryptodev.c | 261 +++++
app/test/test_cryptodev.h | 1 +
config/common_base | 5 +
doc/guides/cryptodevs/bcmfs.rst | 72 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 +
doc/guides/cryptodevs/index.rst | 1 +
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 331 ++++++
drivers/crypto/bcmfs/bcmfs_device.h | 76 ++
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 38 +
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +
drivers/crypto/bcmfs/bcmfs_qp.c | 383 +++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 142 +++
drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 186 ++++
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 426 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 62 ++
drivers/crypto/bcmfs/bcmfs_sym_session.c | 426 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 99 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 94 ++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 742 +++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 ++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 46 +
drivers/crypto/bcmfs/meson.build | 20 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 3 +-
mk/rte.app.mk | 1 +
36 files changed, 6588 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 0 1/8] crypto/bcmfs: add BCMFS driver
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom's FlexSparc devices Vikas Gupta
@ 2020-08-11 14:58 ` Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 2/8] crypto/bcmfs: add vfio support Vikas Gupta
` (7 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Add the Broadcom FlexSparc (FS) device creation driver, which registers
as a vdev and creates a device. Add APIs for logging, supporting
documentation and a MAINTAINERS entry.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
MAINTAINERS | 7 +
config/common_base | 5 +
doc/guides/cryptodevs/bcmfs.rst | 26 ++
doc/guides/cryptodevs/index.rst | 1 +
drivers/crypto/bcmfs/Makefile | 27 ++
drivers/crypto/bcmfs/bcmfs_device.c | 256 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_device.h | 40 +++
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +++
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +++
drivers/crypto/bcmfs/meson.build | 10 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 3 +-
mk/rte.app.mk | 1 +
13 files changed, 450 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 drivers/crypto/bcmfs/Makefile
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 3cd402b34..7c2d7ff1b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1099,6 +1099,13 @@ F: drivers/crypto/zuc/
F: doc/guides/cryptodevs/zuc.rst
F: doc/guides/cryptodevs/features/zuc.ini
+Broadcom FlexSparc
+M: Vikas Gupta <vikas.gupta@broadcom.com>
+M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
+M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+F: drivers/crypto/bcmfs/
+F: doc/guides/cryptodevs/bcmfs.rst
+F: doc/guides/cryptodevs/features/bcmfs.ini
Compression Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index f7a8824f5..21daadcdd 100644
--- a/config/common_base
+++ b/config/common_base
@@ -705,6 +705,11 @@ CONFIG_RTE_LIBRTE_PMD_MVSAM_CRYPTO=n
#
CONFIG_RTE_LIBRTE_PMD_NITROX=y
+#
+# Compile PMD for Broadcom crypto device
+#
+CONFIG_RTE_LIBRTE_PMD_BCMFS=y
+
#
# Compile generic security library
#
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
new file mode 100644
index 000000000..752ce028a
--- /dev/null
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -0,0 +1,26 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2020 Broadcom
+
+Broadcom FlexSparc Crypto Poll Mode Driver
+==========================================
+
+The FlexSparc crypto poll mode driver provides support for offloading
+cryptographic operations to Broadcom SoCs with a FlexSparc4/FlexSparc5 unit.
+Detailed information about the SoCs can be found at:
+
+* https://www.broadcom.com/
+
+Installation
+------------
+
+For compiling the Broadcom FlexSparc crypto PMD, ensure the
+CONFIG_RTE_LIBRTE_PMD_BCMFS option is set to ``y`` in the config/common_base file.
+
+* ``CONFIG_RTE_LIBRTE_PMD_BCMFS=y``
+
+Initialization
+--------------
+The BCMFS crypto PMD depends upon the devices present in the path
+/sys/bus/platform/devices/fs<version>/<dev_name> on the platform.
+Each cryptodev PMD instance can be attached to the nodes present
+in the mentioned path.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a67ed5a28..5d7e028bd 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -29,3 +29,4 @@ Crypto Device Drivers
qat
virtio
zuc
+ bcmfs
diff --git a/drivers/crypto/bcmfs/Makefile b/drivers/crypto/bcmfs/Makefile
new file mode 100644
index 000000000..781ee6efa
--- /dev/null
+++ b/drivers/crypto/bcmfs/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Broadcom
+# All rights reserved.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_bcmfs.a
+
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/bcmfs
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-y += bcmfs_logs.c
+SRCS-y += bcmfs_device.c
+
+LDLIBS += -lrte_eal -lrte_bus_vdev
+
+EXPORT_MAP := rte_pmd_bcmfs_version.map
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
new file mode 100644
index 000000000..47c776de6
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <dirent.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+
+struct bcmfs_device_attr {
+ const char name[BCMFS_MAX_PATH_LEN];
+ const char suffix[BCMFS_DEV_NAME_LEN];
+ const enum bcmfs_device_type type;
+ const uint32_t offset;
+ const uint32_t version;
+};
+
+/* BCMFS supported devices */
+static struct bcmfs_device_attr dev_table[] = {
+ {
+ .name = "fs4",
+ .suffix = "crypto_mbox",
+ .type = BCMFS_SYM_FS4,
+ .offset = 0,
+ .version = 0x76303031
+ },
+ {
+ .name = "fs5",
+ .suffix = "mbox",
+ .type = BCMFS_SYM_FS5,
+ .offset = 0,
+ .version = 0x76303032
+ },
+ {
+ /* sentinel */
+ }
+};
+
+TAILQ_HEAD(fsdev_list, bcmfs_device);
+static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
+
+static struct bcmfs_device *
+fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
+ char *dirpath,
+ char *devname,
+ enum bcmfs_device_type dev_type __rte_unused)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = calloc(1, sizeof(*fsdev));
+ if (!fsdev)
+ return NULL;
+
+ if (strlen(dirpath) >= sizeof(fsdev->dirname)) {
+ BCMFS_LOG(ERR, "dir path name is too long");
+ goto cleanup;
+ }
+
+ if (strlen(devname) >= sizeof(fsdev->name)) {
+ BCMFS_LOG(ERR, "devname is too long");
+ goto cleanup;
+ }
+
+ strcpy(fsdev->dirname, dirpath);
+ strcpy(fsdev->name, devname);
+
+ fsdev->vdev = vdev;
+
+ TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
+
+ return fsdev;
+
+cleanup:
+ free(fsdev);
+
+ return NULL;
+}
+
+static struct bcmfs_device *
+find_fsdev(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ TAILQ_FOREACH(fsdev, &fsdev_list, next)
+ if (fsdev->vdev == vdev)
+ return fsdev;
+
+ return NULL;
+}
+
+static void
+fsdev_release(struct bcmfs_device *fsdev)
+{
+ if (fsdev == NULL)
+ return;
+
+ TAILQ_REMOVE(&fsdev_list, fsdev, next);
+ free(fsdev);
+}
+
+static int
+cmprator(const void *a, const void *b)
+{
+ const unsigned int x = *(const unsigned int *)a;
+ const unsigned int y = *(const unsigned int *)b;
+
+ /* avoid wraparound from direct unsigned subtraction */
+ return (x > y) - (x < y);
+}
+
+static int
+fsdev_find_all_devs(const char *path, const char *search,
+ uint32_t *devs)
+{
+ DIR *dir;
+ struct dirent *entry;
+ int count = 0;
+ char addr[BCMFS_MAX_NODES][BCMFS_MAX_PATH_LEN];
+ int i;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return 0;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (strstr(entry->d_name, search)) {
+ strlcpy(addr[count], entry->d_name,
+ BCMFS_MAX_PATH_LEN);
+ count++;
+ }
+ }
+
+ closedir(dir);
+
+ for (i = 0 ; i < count; i++)
+ devs[i] = (uint32_t)strtoul(addr[i], NULL, 16);
+ /* sort the devices based on IO addresses */
+ qsort(devs, count, sizeof(uint32_t), cmprator);
+
+ return count;
+}
+
+static bool
+fsdev_find_sub_dir(char *path, const char *search, char *output)
+{
+ DIR *dir;
+ struct dirent *entry;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return false;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (!strcmp(entry->d_name, search)) {
+ strlcpy(output, entry->d_name, BCMFS_MAX_PATH_LEN);
+ closedir(dir);
+ return true;
+ }
+ }
+
+ closedir(dir);
+
+ return false;
+}
+
+
+static int
+bcmfs_vdev_probe(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+ char top_dirpath[BCMFS_MAX_PATH_LEN];
+ char sub_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirname[BCMFS_MAX_PATH_LEN];
+ uint32_t fsdev_dev[BCMFS_MAX_NODES];
+ enum bcmfs_device_type dtype;
+ int i = 0;
+ int dev_idx;
+ int count = 0;
+ bool found = false;
+
+ sprintf(top_dirpath, "%s", SYSFS_BCM_PLTFORM_DEVICES);
+ while (strlen(dev_table[i].name)) {
+ found = fsdev_find_sub_dir(top_dirpath,
+ dev_table[i].name,
+ sub_dirpath);
+ if (found)
+ break;
+ i++;
+ }
+ if (!found) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ dev_idx = i;
+ dtype = dev_table[i].type;
+
+ snprintf(out_dirpath, sizeof(out_dirpath), "%s/%s",
+ top_dirpath, sub_dirpath);
+ count = fsdev_find_all_devs(out_dirpath,
+ dev_table[dev_idx].suffix,
+ fsdev_dev);
+ if (!count) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ i = 0;
+ while (count) {
+ /* format the device name present in the path */
+ snprintf(out_dirname, sizeof(out_dirname), "%x.%s",
+ fsdev_dev[i], dev_table[dev_idx].suffix);
+ fsdev = fsdev_allocate_one_dev(vdev, out_dirpath,
+ out_dirname, dtype);
+ if (!fsdev) {
+ count--;
+ i++;
+ continue;
+ }
+ break;
+ }
+ if (fsdev == NULL) {
+ BCMFS_LOG(ERR, "All supported devs busy");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int
+bcmfs_vdev_remove(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = find_fsdev(vdev);
+ if (fsdev == NULL)
+ return -ENODEV;
+
+ fsdev_release(fsdev);
+ return 0;
+}
+
+/* Register with vdev */
+static struct rte_vdev_driver rte_bcmfs_pmd = {
+ .probe = bcmfs_vdev_probe,
+ .remove = bcmfs_vdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(bcmfs_pmd,
+ rte_bcmfs_pmd);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
new file mode 100644
index 000000000..4b0c6d3ca
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_H_
+#define _BCMFS_DEV_H_
+
+#include <sys/queue.h>
+
+#include <rte_bus_vdev.h>
+
+#include "bcmfs_logs.h"
+
+/* max number of dev nodes */
+#define BCMFS_MAX_NODES 4
+#define BCMFS_MAX_PATH_LEN 512
+#define BCMFS_DEV_NAME_LEN 64
+
+/* Path for BCM-Platform device directory */
+#define SYSFS_BCM_PLTFORM_DEVICES "/sys/bus/platform/devices"
+
+/* Supported devices */
+enum bcmfs_device_type {
+ BCMFS_SYM_FS4,
+ BCMFS_SYM_FS5,
+ BCMFS_UNKNOWN
+};
+
+struct bcmfs_device {
+ TAILQ_ENTRY(bcmfs_device) next;
+ /* Directory path for vfio */
+ char dirname[BCMFS_MAX_PATH_LEN];
+ /* BCMFS device name */
+ char name[BCMFS_DEV_NAME_LEN];
+ /* Parent vdev */
+ struct rte_vdev_device *vdev;
+};
+
+#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.c b/drivers/crypto/bcmfs/bcmfs_logs.c
new file mode 100644
index 000000000..86f4ff3b5
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_hexdump.h>
+
+#include "bcmfs_logs.h"
+
+int bcmfs_conf_logtype;
+int bcmfs_dp_logtype;
+
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *title,
+ const void *buf, unsigned int len)
+{
+ if (level > rte_log_get_global_level())
+ return 0;
+ if (level > (uint32_t)(rte_log_get_level(logtype)))
+ return 0;
+
+ rte_hexdump(rte_log_get_stream(), title, buf, len);
+ return 0;
+}
+
+RTE_INIT(bcmfs_device_init_log)
+{
+ /* Configuration and general logs */
+ bcmfs_conf_logtype = rte_log_register("pmd.bcmfs_config");
+ if (bcmfs_conf_logtype >= 0)
+ rte_log_set_level(bcmfs_conf_logtype, RTE_LOG_NOTICE);
+
+ /* data-path logs */
+ bcmfs_dp_logtype = rte_log_register("pmd.bcmfs_fp");
+ if (bcmfs_dp_logtype >= 0)
+ rte_log_set_level(bcmfs_dp_logtype, RTE_LOG_NOTICE);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.h b/drivers/crypto/bcmfs/bcmfs_logs.h
new file mode 100644
index 000000000..c03a49b75
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_LOGS_H_
+#define _BCMFS_LOGS_H_
+
+#include <rte_log.h>
+
+extern int bcmfs_conf_logtype;
+extern int bcmfs_dp_logtype;
+
+#define BCMFS_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_conf_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_dp_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_HEXDUMP_LOG(level, title, buf, len) \
+ bcmfs_hexdump_log(RTE_LOG_ ## level, bcmfs_dp_logtype, title, buf, len)
+
+/**
+ * bcmfs_hexdump_log - Dump out memory in a special hex dump format.
+ *
+ * The message will be sent to the stream used by the rte_log infrastructure.
+ */
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *heading,
+ const void *buf, unsigned int len);
+
+#endif /* _BCMFS_LOGS_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
new file mode 100644
index 000000000..a4bdd8ee5
--- /dev/null
+++ b/drivers/crypto/bcmfs/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Broadcom
+# All rights reserved.
+#
+
+deps += ['eal', 'bus_vdev']
+sources = files(
+ 'bcmfs_logs.c',
+ 'bcmfs_device.c'
+ )
diff --git a/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
new file mode 100644
index 000000000..f9f17e4f6
--- /dev/null
+++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
@@ -0,0 +1,3 @@
+DPDK_20.0 {
+ local: *;
+};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index a2423507a..8e06d0533 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -23,7 +23,8 @@ drivers = ['aesni_gcm',
'scheduler',
'snow3g',
'virtio',
- 'zuc']
+ 'zuc',
+ 'bcmfs']
std_deps = ['cryptodev'] # cryptodev pulls in all other needed deps
config_flag_fmt = 'RTE_LIBRTE_@0@_PMD'
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 0ce8cf541..5e268f8c0 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -308,6 +308,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_SECURITY),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CAAM_JR) += -lrte_pmd_caam_jr
endif # CONFIG_RTE_LIBRTE_SECURITY
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += -lrte_pmd_virtio_crypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BCMFS) += -lrte_pmd_bcmfs
endif # CONFIG_RTE_LIBRTE_CRYPTODEV
ifeq ($(CONFIG_RTE_LIBRTE_COMPRESSDEV),y)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 0 2/8] crypto/bcmfs: add vfio support
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom's FlexSparc devices Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
@ 2020-08-11 14:58 ` Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
` (6 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Add VFIO support for the device.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
drivers/crypto/bcmfs/Makefile | 1 +
drivers/crypto/bcmfs/bcmfs_device.c | 5 ++
drivers/crypto/bcmfs/bcmfs_device.h | 6 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 94 +++++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 ++++++
drivers/crypto/bcmfs/meson.build | 3 +-
6 files changed, 125 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
diff --git a/drivers/crypto/bcmfs/Makefile b/drivers/crypto/bcmfs/Makefile
index 781ee6efa..5f691f7ba 100644
--- a/drivers/crypto/bcmfs/Makefile
+++ b/drivers/crypto/bcmfs/Makefile
@@ -19,6 +19,7 @@ CFLAGS += -DALLOW_EXPERIMENTAL_API
#
SRCS-y += bcmfs_logs.c
SRCS-y += bcmfs_device.c
+SRCS-y += bcmfs_vfio.c
LDLIBS += -lrte_eal -lrte_bus_vdev
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 47c776de6..3b5cc9e98 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -11,6 +11,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -71,6 +72,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
fsdev->vdev = vdev;
+ /* attach to VFIO */
+ if (bcmfs_attach_vfio(fsdev))
+ goto cleanup;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 4b0c6d3ca..5232bdea5 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -35,6 +35,12 @@ struct bcmfs_device {
char name[BCMFS_DEV_NAME_LEN];
/* Parent vdev */
struct rte_vdev_device *vdev;
+ /* vfio handle */
+ int vfio_dev_fd;
+ /* mapped address */
+ uint8_t *mmap_addr;
+ /* mapped size */
+ uint32_t mmap_size;
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
new file mode 100644
index 000000000..9138f96eb
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <errno.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+
+#include <rte_vfio.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
+
+static int
+vfio_map_dev_obj(const char *path, const char *dev_obj,
+ uint32_t *size, void **addr, int *dev_fd)
+{
+ int32_t ret;
+ struct vfio_group_status status = { .argsz = sizeof(status) };
+
+ struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
+ struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
+
+ ret = rte_vfio_setup_device(path, dev_obj, dev_fd, &d_info);
+ if (ret) {
+ BCMFS_LOG(ERR, "VFIO Setting for device failed");
+ return ret;
+ }
+
+ /* get the device region info */
+ ret = ioctl(*dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "Error in VFIO getting REGION_INFO");
+ goto map_failed;
+ }
+
+ *addr = mmap(NULL, reg_info.size,
+ PROT_WRITE | PROT_READ, MAP_SHARED,
+ *dev_fd, reg_info.offset);
+ if (*addr == MAP_FAILED) {
+ BCMFS_LOG(ERR, "Error mapping region (errno = %d)", errno);
+ ret = errno;
+ goto map_failed;
+ }
+ *size = reg_info.size;
+
+ return 0;
+
+map_failed:
+ rte_vfio_release_device(path, dev_obj, *dev_fd);
+
+ return ret;
+}
+
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+ int vfio_dev_fd;
+ void *v_addr = NULL;
+ uint32_t size = 0;
+
+ ret = vfio_map_dev_obj(dev->dirname, dev->name,
+ &size, &v_addr, &vfio_dev_fd);
+ if (ret)
+ return -1;
+
+ dev->mmap_size = size;
+ dev->mmap_addr = v_addr;
+ dev->vfio_dev_fd = vfio_dev_fd;
+
+ return 0;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+
+ if (dev == NULL)
+ return;
+
+ /* unmap the addr */
+ munmap(dev->mmap_addr, dev->mmap_size);
+ /* release the device */
+ ret = rte_vfio_release_device(dev->dirname, dev->name,
+ dev->vfio_dev_fd);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "cannot release device");
+ return;
+ }
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h
new file mode 100644
index 000000000..d0fdf6483
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_VFIO_H_
+#define _BCMFS_VFIO_H_
+
+/* Attach the bcmfs device to vfio */
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev);
+
+/* Release the bcmfs device from vfio */
+void
+bcmfs_release_vfio(struct bcmfs_device *dev);
+
+#endif /* _BCMFS_VFIO_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index a4bdd8ee5..fd39eba20 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -6,5 +6,6 @@
deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
- 'bcmfs_device.c'
+ 'bcmfs_device.c',
+ 'bcmfs_vfio.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 0 3/8] crypto/bcmfs: add apis for queue pair management
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom's FlexSparc devices Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 2/8] crypto/bcmfs: add vfio support Vikas Gupta
@ 2020-08-11 14:58 ` Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
` (5 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Add queue pair management APIs which will be used by the crypto device to
manage h/w queues. A bcmfs device structure owns multiple queue pairs
based on the mapped address range allocated to it.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
drivers/crypto/bcmfs/Makefile | 28 ---
drivers/crypto/bcmfs/bcmfs_device.c | 4 +
drivers/crypto/bcmfs/bcmfs_device.h | 5 +
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 38 +++
drivers/crypto/bcmfs/bcmfs_qp.c | 345 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 122 ++++++++++
drivers/crypto/bcmfs/meson.build | 3 +-
7 files changed, 516 insertions(+), 29 deletions(-)
delete mode 100644 drivers/crypto/bcmfs/Makefile
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
diff --git a/drivers/crypto/bcmfs/Makefile b/drivers/crypto/bcmfs/Makefile
deleted file mode 100644
index 5f691f7ba..000000000
--- a/drivers/crypto/bcmfs/Makefile
+++ /dev/null
@@ -1,28 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2020 Broadcom
-# All rights reserved.
-#
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-#
-# library name
-#
-LIB = librte_pmd_bcmfs.a
-
-CFLAGS += $(WERROR_FLAGS)
-CFLAGS += -I$(RTE_SDK)/drivers/crypto/bcmfs
-CFLAGS += -DALLOW_EXPERIMENTAL_API
-
-#
-# all source are stored in SRCS-y
-#
-SRCS-y += bcmfs_logs.c
-SRCS-y += bcmfs_device.c
-SRCS-y += bcmfs_vfio.c
-
-LDLIBS += -lrte_eal -lrte_bus_vdev
-
-EXPORT_MAP := rte_pmd_bcmfs_version.map
-
-include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 3b5cc9e98..b475c2933 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -11,6 +11,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
@@ -76,6 +77,9 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
if (bcmfs_attach_vfio(fsdev))
goto cleanup;
+ /* Maximum number of QPs supported */
+ fsdev->max_hw_qps = fsdev->mmap_size / BCMFS_HW_QUEUE_IO_ADDR_LEN;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 5232bdea5..e03ce5b5b 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -11,6 +11,7 @@
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
/* max number of dev nodes */
#define BCMFS_MAX_NODES 4
@@ -41,6 +42,10 @@ struct bcmfs_device {
uint8_t *mmap_addr;
/* mapped size */
uint32_t mmap_size;
+ /* max number of h/w queue pairs detected */
+ uint16_t max_hw_qps;
+ /* current qpairs in use */
+ struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
new file mode 100644
index 000000000..ecb0c09ba
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_DEFS_H_
+#define _BCMFS_RM_DEFS_H_
+
+#include <rte_atomic.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* 32-bit MMIO register write */
+#define FS_MMIO_WRITE32(value, addr) rte_write32_relaxed((value), (addr))
+
+/* 32-bit MMIO register read */
+#define FS_MMIO_READ32(addr) rte_read32_relaxed((addr))
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#define FS_RING_REGS_SIZE 0x10000
+#define FS_RING_DESC_SIZE 8
+#define FS_RING_BD_ALIGN_ORDER 12
+#define FS_RING_BD_DESC_PER_REQ 32
+#define FS_RING_CMPL_ALIGN_ORDER 13
+#define FS_RING_CMPL_SIZE (1024 * FS_RING_DESC_SIZE)
+#define FS_RING_MAX_REQ_COUNT 1024
+#define FS_RING_PAGE_SHFT 12
+#define FS_RING_PAGE_SIZE BIT(FS_RING_PAGE_SHFT)
+
+/* Minimum and maximum number of requests supported */
+#define FS_RM_MAX_REQS 1024
+#define FS_RM_MIN_REQS 32
+
+#endif /* _BCMFS_RM_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
new file mode 100644
index 000000000..864e7bb74
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include <rte_atomic.h>
+#include <rte_bitmap.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_hw_defs.h"
+
+/* TX or submission queue name */
+static const char *txq_name = "tx";
+/* Completion or receive queue name */
+static const char *cmplq_name = "cmpl";
+
+/* Helper function */
+static int
+bcmfs_qp_check_queue_alignment(uint64_t phys_addr,
+ uint32_t align)
+{
+ if (((align - 1) & phys_addr) != 0)
+ return -EINVAL;
+ return 0;
+}
+
+static void
+bcmfs_queue_delete(struct bcmfs_queue *queue,
+ uint16_t queue_pair_id)
+{
+ const struct rte_memzone *mz;
+ int status = 0;
+
+ if (queue == NULL) {
+ BCMFS_LOG(DEBUG, "Invalid queue");
+ return;
+ }
+ BCMFS_LOG(DEBUG, "Free ring %d type %d, memzone: %s",
+ queue_pair_id, queue->q_type, queue->memz_name);
+
+ mz = rte_memzone_lookup(queue->memz_name);
+ if (mz != NULL) {
+ /* Write an unused pattern to the queue memory. */
+ memset(queue->base_addr, 0x9B, queue->queue_size);
+ status = rte_memzone_free(mz);
+ if (status != 0)
+ BCMFS_LOG(ERR, "Error %d on freeing queue %s",
+ status, queue->memz_name);
+ } else {
+ BCMFS_LOG(DEBUG, "queue %s doesn't exist",
+ queue->memz_name);
+ }
+}
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+ int socket_id, unsigned int align)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(queue_name);
+ if (mz != NULL) {
+ if (((size_t)queue_size <= mz->len) &&
+ (socket_id == SOCKET_ID_ANY ||
+ socket_id == mz->socket_id)) {
+ BCMFS_LOG(DEBUG, "re-use memzone already "
+ "allocated for %s", queue_name);
+ return mz;
+ }
+
+ BCMFS_LOG(ERR, "Incompatible memzone already "
+ "allocated %s, size %u, socket %d. "
+ "Requested size %u, socket %d",
+ queue_name, (uint32_t)mz->len,
+ mz->socket_id, queue_size, socket_id);
+ return NULL;
+ }
+
+ BCMFS_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %d",
+ queue_name, queue_size, socket_id);
+ return rte_memzone_reserve_aligned(queue_name, queue_size,
+ socket_id, RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+static int
+bcmfs_queue_create(struct bcmfs_queue *queue,
+ struct bcmfs_qp_config *qp_conf,
+ uint16_t queue_pair_id,
+ enum bcmfs_queue_type qtype)
+{
+ const struct rte_memzone *qp_mz;
+ char q_name[16];
+ unsigned int align;
+ uint32_t queue_size_bytes;
+ int ret;
+
+ if (qtype == BCMFS_RM_TXQ) {
+ strlcpy(q_name, txq_name, sizeof(q_name));
+ align = 1U << FS_RING_BD_ALIGN_ORDER;
+ queue_size_bytes = qp_conf->nb_descriptors *
+ qp_conf->max_descs_req * FS_RING_DESC_SIZE;
+ queue_size_bytes = RTE_ALIGN_MUL_CEIL(queue_size_bytes,
+ FS_RING_PAGE_SIZE);
+ /* round the queue size up to a multiple of 4K pages */
+ } else if (qtype == BCMFS_RM_CPLQ) {
+ strlcpy(q_name, cmplq_name, sizeof(q_name));
+ align = 1U << FS_RING_CMPL_ALIGN_ORDER;
+
+ /*
+ * Memory size for cmpl + MSI.
+ * The MSI region is allocated alongside the completion
+ * ring, hence twice the size.
+ */
+ queue_size_bytes = 2 * FS_RING_CMPL_SIZE;
+ } else {
+ BCMFS_LOG(ERR, "Invalid queue selection");
+ return -EINVAL;
+ }
+
+ queue->q_type = qtype;
+
+ /*
+ * Allocate a memzone for the queue - create a unique name.
+ */
+ snprintf(queue->memz_name, sizeof(queue->memz_name),
+ "%s_%d_%s_%d_%s", "bcmfs", qtype, "qp_mem",
+ queue_pair_id, q_name);
+ qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+ 0, align);
+ if (qp_mz == NULL) {
+ BCMFS_LOG(ERR, "Failed to allocate ring memzone");
+ return -ENOMEM;
+ }
+
+ if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
+ BCMFS_LOG(ERR, "Invalid queue alignment on create "
+ "0x%" PRIx64,
+ qp_mz->iova);
+ ret = -EFAULT;
+ goto queue_create_err;
+ }
+
+ queue->base_addr = (char *)qp_mz->addr;
+ queue->base_phys_addr = qp_mz->iova;
+ queue->queue_size = queue_size_bytes;
+
+ return 0;
+
+queue_create_err:
+ rte_memzone_free(qp_mz);
+
+ return ret;
+}
+
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr)
+{
+ struct bcmfs_qp *qp = *qp_addr;
+
+ if (qp == NULL) {
+ BCMFS_LOG(DEBUG, "qp already freed");
+ return 0;
+ }
+
+ /* Don't free memory if there are still responses to be processed */
+ if ((qp->stats.enqueued_count - qp->stats.dequeued_count) == 0) {
+ /* Stop the h/w ring */
+ qp->ops->stopq(qp);
+ /* Delete the queue pairs */
+ bcmfs_queue_delete(&qp->tx_q, qp->qpair_id);
+ bcmfs_queue_delete(&qp->cmpl_q, qp->qpair_id);
+ } else {
+ return -EAGAIN;
+ }
+
+ rte_bitmap_reset(qp->ctx_bmp);
+ rte_free(qp->ctx_bmp_mem);
+ rte_free(qp->ctx_pool);
+
+ rte_free(qp);
+ *qp_addr = NULL;
+
+ return 0;
+}
+
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *qp_conf)
+{
+ struct bcmfs_qp *qp;
+ uint32_t bmp_size;
+ uint32_t nb_descriptors = qp_conf->nb_descriptors;
+ uint16_t i;
+ int rc;
+
+ if (nb_descriptors < FS_RM_MIN_REQS) {
+ BCMFS_LOG(ERR, "Can't create qp for %u descriptors",
+ nb_descriptors);
+ return -EINVAL;
+ }
+
+ if (nb_descriptors > FS_RM_MAX_REQS)
+ nb_descriptors = FS_RM_MAX_REQS;
+
+ if (qp_conf->iobase == NULL) {
+ BCMFS_LOG(ERR, "IO config space is null");
+ return -EINVAL;
+ }
+
+ qp = rte_zmalloc_socket("BCM FS PMD qp metadata",
+ sizeof(*qp), RTE_CACHE_LINE_SIZE,
+ qp_conf->socket_id);
+ if (qp == NULL) {
+ BCMFS_LOG(ERR, "Failed to alloc mem for qp struct");
+ return -ENOMEM;
+ }
+
+ qp->qpair_id = queue_pair_id;
+ qp->ioreg = qp_conf->iobase;
+ qp->nb_descriptors = nb_descriptors;
+
+ qp->stats.enqueued_count = 0;
+ qp->stats.dequeued_count = 0;
+
+ rc = bcmfs_queue_create(&qp->tx_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_TXQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Tx queue create failed queue_pair_id %u",
+ queue_pair_id);
+ goto create_err;
+ }
+
+ rc = bcmfs_queue_create(&qp->cmpl_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_CPLQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Cmpl queue create failed queue_pair_id %u",
+ queue_pair_id);
+ goto q_create_err;
+ }
+
+ /* ctx saving bitmap */
+ bmp_size = rte_bitmap_get_memory_footprint(nb_descriptors);
+
+ /* Allocate memory for bitmap */
+ qp->ctx_bmp_mem = rte_zmalloc("ctx_bmp_mem", bmp_size,
+ RTE_CACHE_LINE_SIZE);
+ if (qp->ctx_bmp_mem == NULL) {
+ rc = -ENOMEM;
+ goto qp_create_err;
+ }
+
+ /* Initialize pool resource bitmap array */
+ qp->ctx_bmp = rte_bitmap_init(nb_descriptors, qp->ctx_bmp_mem,
+ bmp_size);
+ if (qp->ctx_bmp == NULL) {
+ rc = -EINVAL;
+ goto bmap_mem_free;
+ }
+
+ /* Mark all pools available */
+ for (i = 0; i < nb_descriptors; i++)
+ rte_bitmap_set(qp->ctx_bmp, i);
+
+ /* Allocate memory for context */
+ qp->ctx_pool = rte_zmalloc("qp_ctx_pool",
+ sizeof(unsigned long) *
+ nb_descriptors, 0);
+ if (qp->ctx_pool == NULL) {
+ BCMFS_LOG(ERR, "ctx pool allocation failed");
+ rc = -ENOMEM;
+ goto bmap_free;
+ }
+
+ /* Start h/w ring */
+ qp->ops->startq(qp);
+
+ *qp_addr = qp;
+
+ return 0;
+
+bmap_free:
+ rte_bitmap_reset(qp->ctx_bmp);
+bmap_mem_free:
+ rte_free(qp->ctx_bmp_mem);
+qp_create_err:
+ bcmfs_queue_delete(&qp->cmpl_q, queue_pair_id);
+q_create_err:
+ bcmfs_queue_delete(&qp->tx_q, queue_pair_id);
+create_err:
+ rte_free(qp);
+
+ return rc;
+}
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ register uint32_t nb_ops_sent = 0;
+ uint16_t nb_ops_possible = nb_ops;
+ int ret;
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ while (nb_ops_sent != nb_ops_possible) {
+ ret = tmp_qp->ops->enq_one_req(qp, *ops);
+ if (ret != 0) {
+ tmp_qp->stats.enqueue_err_count++;
+ /* This message cannot be enqueued */
+ if (nb_ops_sent == 0)
+ return 0;
+ goto ring_db;
+ }
+
+ ops++;
+ nb_ops_sent++;
+ }
+
+ring_db:
+ tmp_qp->stats.enqueued_count += nb_ops_sent;
+ tmp_qp->ops->ring_db(tmp_qp);
+
+ return nb_ops_sent;
+}
+
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ uint32_t deq = tmp_qp->ops->dequeue(tmp_qp, ops, nb_ops);
+
+ tmp_qp->stats.dequeued_count += deq;
+
+ return deq;
+}
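The burst-enqueue loop above accepts requests one at a time, stops at the first ring-full failure, and rings the doorbell once for the whole accepted batch. A minimal self-contained sketch of that control flow, with a hypothetical fixed-capacity counter standing in for the real `enq_one_req()`/`ring_db()` ops:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the driver's per-qp state: a fixed-capacity
 * ring and a doorbell counter, mirroring bcmfs_enqueue_op_burst()'s shape. */
struct toy_qp {
	uint32_t capacity;	/* free slots left in the h/w ring */
	uint64_t enqueued;	/* stats.enqueued_count analogue */
	uint32_t db_rings;	/* how many times the doorbell was rung */
};

static int enq_one(struct toy_qp *qp)
{
	if (qp->capacity == 0)
		return -1;	/* ring full: mirrors enq_one_req() failing */
	qp->capacity--;
	return 0;
}

/* Enqueue up to nb_ops; on the first failure stop and still ring the
 * doorbell once for whatever was accepted - same control flow as the
 * driver's ring_db label. */
static uint16_t toy_enqueue_burst(struct toy_qp *qp, uint16_t nb_ops)
{
	uint16_t sent = 0;

	if (nb_ops == 0)
		return 0;

	while (sent != nb_ops) {
		if (enq_one(qp) != 0) {
			if (sent == 0)
				return 0;	/* nothing accepted: no doorbell */
			break;
		}
		sent++;
	}

	qp->enqueued += sent;
	qp->db_rings++;		/* a single doorbell covers the whole batch */
	return sent;
}
```

Batching the doorbell this way keeps expensive MMIO writes off the per-request path; only names prefixed `toy_` are invented for this sketch.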
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
new file mode 100644
index 000000000..027d7a50c
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_QP_H_
+#define _BCMFS_QP_H_
+
+#include <rte_memzone.h>
+
+/* Maximum number of h/w queues supported by device */
+#define BCMFS_MAX_HW_QUEUES 32
+
+/* H/W queue IO address space len */
+#define BCMFS_HW_QUEUE_IO_ADDR_LEN (64 * 1024)
+
+/* Maximum size of device ops name */
+#define BCMFS_HW_OPS_NAMESIZE 32
+
+enum bcmfs_queue_type {
+ /* TX or submission queue */
+ BCMFS_RM_TXQ,
+ /* Completion or receive queue */
+ BCMFS_RM_CPLQ
+};
+
+struct bcmfs_qp_stats {
+ /* Count of all operations enqueued */
+ uint64_t enqueued_count;
+ /* Count of all operations dequeued */
+ uint64_t dequeued_count;
+ /* Total error count on operations enqueued */
+ uint64_t enqueue_err_count;
+ /* Total error count on operations dequeued */
+ uint64_t dequeue_err_count;
+};
+
+struct bcmfs_qp_config {
+ /* Socket to allocate memory on */
+ int socket_id;
+ /* Mapped iobase for qp */
+ void *iobase;
+ /* nb_descriptors or requests a h/w queue can accommodate */
+ uint16_t nb_descriptors;
+ /* Maximum number of h/w descriptors needed by a request */
+ uint16_t max_descs_req;
+};
+
+struct bcmfs_queue {
+ /* Base virt address */
+ void *base_addr;
+ /* Base iova */
+ rte_iova_t base_phys_addr;
+ /* Queue type */
+ enum bcmfs_queue_type q_type;
+ /* Queue size based on nb_descriptors and max_descs_reqs */
+ uint32_t queue_size;
+ union {
+ /* s/w pointer for tx h/w queue */
+ uint32_t tx_write_ptr;
+ /* s/w pointer for completion h/w queue */
+ uint32_t cmpl_read_ptr;
+ };
+ /* Memzone name */
+ char memz_name[RTE_MEMZONE_NAMESIZE];
+};
+
+struct bcmfs_qp {
+ /* Queue-pair ID */
+ uint16_t qpair_id;
+ /* Mapped IO address */
+ void *ioreg;
+ /* A TX queue */
+ struct bcmfs_queue tx_q;
+ /* A Completion queue */
+ struct bcmfs_queue cmpl_q;
+ /* Number of requests the queue can accommodate */
+ uint32_t nb_descriptors;
+ /* Number of pending requests and enqueued to h/w queue */
+ uint16_t nb_pending_requests;
+ /* A pool which acts as a hash for <request-ID, virt address> pairs */
+ unsigned long *ctx_pool;
+ /* virt address for mem allocated for bitmap */
+ void *ctx_bmp_mem;
+ /* Bitmap */
+ struct rte_bitmap *ctx_bmp;
+ /* Associated stats */
+ struct bcmfs_qp_stats stats;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
+
+} __rte_cache_aligned;
+
+/* Structure defining h/w queue pair operations */
+struct bcmfs_hw_queue_pair_ops {
+ /* ops name */
+ char name[BCMFS_HW_OPS_NAMESIZE];
+ /* Enqueue an object */
+ int (*enq_one_req)(struct bcmfs_qp *qp, void *obj);
+ /* Ring doorbell */
+ void (*ring_db)(struct bcmfs_qp *qp);
+ /* Dequeue objects */
+ uint16_t (*dequeue)(struct bcmfs_qp *qp, void **obj,
+ uint16_t nb_ops);
+ /* Start the h/w queue */
+ int (*startq)(struct bcmfs_qp *qp);
+ /* Stop the h/w queue */
+ void (*stopq)(struct bcmfs_qp *qp);
+};
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr);
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *bcmfs_conf);
+
+#endif /* _BCMFS_QP_H_ */
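The `ctx_bmp`/`ctx_pool` pair in `struct bcmfs_qp` implements a simple request-id allocator: a set bit means the id is free, and `ctx_pool[reqid]` remembers which message the id was handed to so the completion path can map a descriptor back to its request. A self-contained sketch of that scheme using a single 64-bit word in place of `rte_bitmap` (all `toy_` names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_IDS 64

/* Toy stand-in for the qp's ctx bitmap + ctx_pool pairing: bit i set
 * means request-id i is free; ctx[i] stores the caller's message. */
struct toy_ctx {
	uint64_t free_bmp;		/* rte_bitmap analogue, 1 = free */
	unsigned long ctx[TOY_IDS];	/* reqid -> message "hash" */
};

static void toy_init(struct toy_ctx *c)
{
	c->free_bmp = ~0ULL;		/* mark all request-ids available */
}

/* Allocate: scan for a set bit, clear it, record the context. */
static int toy_alloc(struct toy_ctx *c, unsigned long msg)
{
	if (c->free_bmp == 0)
		return -1;		/* "BD memory exhausted" analogue */
	int reqid = __builtin_ctzll(c->free_bmp);
	c->free_bmp &= ~(1ULL << reqid);
	c->ctx[reqid] = msg;
	return reqid;
}

/* Release on completion: fetch the context back and free the id. */
static unsigned long toy_release(struct toy_ctx *c, int reqid)
{
	unsigned long msg = c->ctx[reqid];
	c->ctx[reqid] = 0;
	c->free_bmp |= 1ULL << reqid;
	return msg;
}
```

The real driver does the same scan/clear in `bcmfs4_enqueue_single_request_qp()` and the set/lookup in the dequeue path.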
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index fd39eba20..7e2bcbf14 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -7,5 +7,6 @@ deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
- 'bcmfs_vfio.c'
+ 'bcmfs_vfio.c',
+ 'bcmfs_qp.c'
)
--
2.17.1
* [dpdk-dev] [PATCH 0 4/8] crypto/bcmfs: add hw queue pair operations
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (2 preceding siblings ...)
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
@ 2020-08-11 14:58 ` Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
` (4 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Add queue pair operations exported by supported devices.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 51 ++
drivers/crypto/bcmfs/bcmfs_device.h | 16 +
drivers/crypto/bcmfs/bcmfs_qp.c | 1 +
drivers/crypto/bcmfs/bcmfs_qp.h | 4 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 742 ++++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 +++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 46 ++
drivers/crypto/bcmfs/meson.build | 5 +-
10 files changed, 1652 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
diff --git a/drivers/crypto/bcmfs/bcmfs_dev_msg.h b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
new file mode 100644
index 000000000..5b50bde35
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_MSG_H_
+#define _BCMFS_DEV_MSG_H_
+
+#define MAX_SRC_ADDR_BUFFERS 8
+#define MAX_DST_ADDR_BUFFERS 3
+
+struct bcmfs_qp_message {
+ /** Physical address of each source */
+ uint64_t srcs_addr[MAX_SRC_ADDR_BUFFERS];
+ /** Length of each source */
+ uint32_t srcs_len[MAX_SRC_ADDR_BUFFERS];
+ /** Total number of sources */
+ unsigned int srcs_count;
+ /** Physical address of each destination */
+ uint64_t dsts_addr[MAX_DST_ADDR_BUFFERS];
+ /** Length of each destination */
+ uint32_t dsts_len[MAX_DST_ADDR_BUFFERS];
+ /** Total number of destinations */
+ unsigned int dsts_count;
+
+ void *ctx;
+};
+
+#endif /* _BCMFS_DEV_MSG_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index b475c2933..bd2d64acf 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -43,6 +43,47 @@ static struct bcmfs_device_attr dev_table[] = {
}
};
+struct bcmfs_hw_queue_pair_ops_table bcmfs_hw_queue_pair_ops_table = {
+ .tl = RTE_SPINLOCK_INITIALIZER,
+ .num_ops = 0
+};
+
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *h)
+{
+ struct bcmfs_hw_queue_pair_ops *ops;
+ int16_t ops_index;
+
+ rte_spinlock_lock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ if (h->enq_one_req == NULL || h->dequeue == NULL ||
+ h->ring_db == NULL || h->startq == NULL || h->stopq == NULL) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR,
+ "Missing callback while registering device ops");
+ return -EINVAL;
+ }
+
+ if (strlen(h->name) >= sizeof(ops->name) - 1) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR, "%s(): fs device_ops <%s>: name too long",
+ __func__, h->name);
+ return -EINVAL;
+ }
+
+ ops_index = bcmfs_hw_queue_pair_ops_table.num_ops++;
+ ops = &bcmfs_hw_queue_pair_ops_table.qp_ops[ops_index];
+ strlcpy(ops->name, h->name, sizeof(ops->name));
+ ops->enq_one_req = h->enq_one_req;
+ ops->dequeue = h->dequeue;
+ ops->ring_db = h->ring_db;
+ ops->startq = h->startq;
+ ops->stopq = h->stopq;
+
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ return ops_index;
+}
+
TAILQ_HEAD(fsdev_list, bcmfs_device);
static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
@@ -53,6 +94,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
enum bcmfs_device_type dev_type __rte_unused)
{
struct bcmfs_device *fsdev;
+ uint32_t i;
fsdev = calloc(1, sizeof(*fsdev));
if (!fsdev)
@@ -68,6 +110,15 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
goto cleanup;
}
+ /* check if registered ops name is present in directory path */
+ for (i = 0; i < bcmfs_hw_queue_pair_ops_table.num_ops; i++)
+ if (strstr(dirpath,
+ bcmfs_hw_queue_pair_ops_table.qp_ops[i].name))
+ fsdev->sym_hw_qp_ops =
+ &bcmfs_hw_queue_pair_ops_table.qp_ops[i];
+ if (!fsdev->sym_hw_qp_ops)
+ goto cleanup;
+
strcpy(fsdev->dirname, dirpath);
strcpy(fsdev->name, devname);
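The registration/lookup pair above is a small plugin table: each ring-manager backend registers named ops at init time, and device probe later picks the entry whose name appears in the device's directory path. A self-contained sketch of that pattern (locking omitted since the sketch is single-threaded; all `toy_` names are invented):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define TOY_MAX_NODES 4
#define TOY_NAMESIZE  32

struct toy_qp_ops {
	char name[TOY_NAMESIZE];
	int (*enq_one_req)(void *qp, void *obj);
};

static struct toy_qp_ops toy_table[TOY_MAX_NODES];
static unsigned int toy_num_ops;

/* Register: validate callbacks and name length, return the table index. */
static int toy_register_ops(const struct toy_qp_ops *h)
{
	if (h->enq_one_req == NULL)
		return -1;	/* missing callback */
	if (strlen(h->name) >= TOY_NAMESIZE - 1 || toy_num_ops == TOY_MAX_NODES)
		return -1;
	int idx = toy_num_ops++;
	strcpy(toy_table[idx].name, h->name);
	toy_table[idx].enq_one_req = h->enq_one_req;
	return idx;
}

/* Probe matches a registered name against the device's directory path,
 * mirroring the strstr() lookup in fsdev_allocate_one_dev(). */
static struct toy_qp_ops *toy_find_ops(const char *dirpath)
{
	for (unsigned int i = 0; i < toy_num_ops; i++)
		if (strstr(dirpath, toy_table[i].name))
			return &toy_table[i];
	return NULL;
}

static int toy_enq(void *qp, void *obj) { (void)qp; (void)obj; return 0; }
```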
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index e03ce5b5b..96beb10fa 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -8,6 +8,7 @@
#include <sys/queue.h>
+#include <rte_spinlock.h>
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
@@ -28,6 +29,19 @@ enum bcmfs_device_type {
BCMFS_UNKNOWN
};
+/* A table to store registered queue pair operations */
+struct bcmfs_hw_queue_pair_ops_table {
+ rte_spinlock_t tl;
+ /* Number of used ops structs in the table. */
+ uint32_t num_ops;
+ /* Storage for all possible ops structs. */
+ struct bcmfs_hw_queue_pair_ops qp_ops[BCMFS_MAX_NODES];
+};
+
+/* HW queue pair ops register function */
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops
+ *qp_ops);
+
struct bcmfs_device {
TAILQ_ENTRY(bcmfs_device) next;
/* Directory path for vfio */
@@ -46,6 +60,8 @@ struct bcmfs_device {
uint16_t max_hw_qps;
/* current qpairs in use */
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
+ /* queue pair ops exported by symmetric crypto hw */
+ struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index 864e7bb74..ec1327b78 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -227,6 +227,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
qp->qpair_id = queue_pair_id;
qp->ioreg = qp_conf->iobase;
qp->nb_descriptors = nb_descriptors;
+ qp->ops = qp_conf->ops;
qp->stats.enqueued_count = 0;
qp->stats.dequeued_count = 0;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 027d7a50c..e4b0c3f2f 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -44,6 +44,8 @@ struct bcmfs_qp_config {
uint16_t nb_descriptors;
/* Maximum number of h/w descriptors needed by a request */
uint16_t max_descs_req;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
};
struct bcmfs_queue {
@@ -61,6 +63,8 @@ struct bcmfs_queue {
/* s/w pointer for completion h/w queue*/
uint32_t cmpl_read_ptr;
};
+ /* number of in-flight descriptors accumulated before next db ring */
+ uint16_t descs_inflight;
/* Memzone name */
char memz_name[RTE_MEMZONE_NAMESIZE];
};
diff --git a/drivers/crypto/bcmfs/hw/bcmfs4_rm.c b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
new file mode 100644
index 000000000..c1cd1b813
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
@@ -0,0 +1,742 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <limits.h>
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_rm_common.h"
+
+/* FS4 configuration */
+#define RING_BD_TOGGLE_INVALID(offset) \
+ (((offset) >> FS_RING_BD_ALIGN_ORDER) & 0x1)
+#define RING_BD_TOGGLE_VALID(offset) \
+ (!RING_BD_TOGGLE_INVALID(offset))
+
+#define RING_VER_MAGIC 0x76303031
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDR 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << FS_RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_CMPL_ALIGN_ORDER) & 0x7ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* ====== Broadcom FS4-RM ring descriptor defines ===== */
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 58
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 58
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 58
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
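All of these SHIFT/MASK pairs follow one convention: a field is masked and shifted into a 64-bit descriptor word, and decoded by the inverse. A self-contained sketch using local equivalents of `rm_build_desc()`/`FS_DESC_DEC()` (the real helpers live in `bcmfs_rm_common.h`, not shown in this hunk; `toy_` names are invented) to build and decode a SRC descriptor with the layout defined above:

```c
#include <assert.h>
#include <stdint.h>

/* Pack a field value into place, and the reverse. */
static uint64_t toy_build(uint64_t val, uint32_t shift, uint64_t mask)
{
	return (val & mask) << shift;
}

static uint64_t toy_dec(uint64_t desc, uint32_t shift, uint64_t mask)
{
	return (desc >> shift) & mask;
}

/* SRC descriptor layout: type in bits 63:60, length in 59:44, addr in 43:0 */
#define TOY_TYPE_SHIFT   60
#define TOY_TYPE_MASK    0xfULL
#define TOY_SRC_TYPE     2
#define TOY_LENGTH_SHIFT 44
#define TOY_LENGTH_MASK  0xffffULL
#define TOY_ADDR_SHIFT   0
#define TOY_ADDR_MASK    0x00000fffffffffffULL

static uint64_t toy_src_desc(uint64_t addr, unsigned int length)
{
	return toy_build(TOY_SRC_TYPE, TOY_TYPE_SHIFT, TOY_TYPE_MASK) |
	       toy_build(length, TOY_LENGTH_SHIFT, TOY_LENGTH_MASK) |
	       toy_build(addr, TOY_ADDR_SHIFT, TOY_ADDR_MASK);
}
```

The three fields occupy disjoint bit ranges, so the descriptor can be assembled by OR-ing the packed fields together, exactly as the `bcmfs4_*_desc()` builders below do.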
+
+static uint8_t
+bcmfs4_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return type == NPTR_TYPE;
+}
+
+static uint64_t
+bcmfs4_next_table_desc(uint32_t toggle, uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NPTR_TOGGLE_SHIFT, NPTR_TOGGLE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_null_desc(uint32_t toggle)
+{
+ return (rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NULL_TOGGLE_SHIFT, NULL_TOGGLE_MASK));
+}
+
+static void
+bcmfs4_flip_header_toggle(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+
+ if (desc & ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT))
+ desc &= ~((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+ else
+ desc |= ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+
+ rm_write_desc(desc_ptr, desc);
+}
+
+static uint64_t
+bcmfs4_header_desc(uint32_t toggle, uint32_t startpkt,
+ uint32_t endpkt, uint32_t bdcount,
+ uint32_t flags, uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, HEADER_TOGGLE_SHIFT, HEADER_TOGGLE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT,
+ HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static void
+bcmfs4_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, uint32_t *toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _toggle, _startpkt, _endpkt, _bdcount;
+
+ /*
+ * Each request or packet start with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is represented by BDCOUNT field
+ * of HEADER descriptor. The max value of BDCOUNT field is 31 which
+ * means we can only have 31 non-HEADER descriptors following one
+ * HEADER descriptor.
+ *
+ * In general use, number of non-HEADER descriptors can easily go
+ * beyond 31. To tackle this situation, we have packet (or request)
+ * extension bits (STARTPKT and ENDPKT) in the HEADER descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of request
+ * (or packet) will have STARTPKT=1 and ENDPKT=0. The intermediate
+ * HEADER descriptors will have STARTPKT=0 and ENDPKT=0. The last
+ * HEADER descriptor will have STARTPKT=0 and ENDPKT=1. Also, the
+ * TOGGLE bit of the first HEADER will be set to invalid state to
+ * ensure that FlexDMA engine does not start fetching descriptors
+ * till all descriptors are enqueued. The user of this function
+ * will flip the TOGGLE bit of first HEADER after all descriptors
+ * are enqueued.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _toggle = (nhpos == 0) ? !(*toggle) : (*toggle);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs4_header_desc(_toggle, _startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+}
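Following the packet-extension rules in the comment above, a HEADER descriptor is emitted whenever `nhpos % 31 == 0`, so the number of HEADER descriptors for a request is simply the ceiling of `nhcnt / 31`. A small sketch of that arithmetic (`toy_` name invented):

```c
#include <assert.h>
#include <stdint.h>

/* BDCOUNT is a 5-bit field, so one HEADER can describe at most 31
 * non-HEADER descriptors. */
#define TOY_BDCOUNT_MAX 31u

/* Number of HEADER descriptors needed for nhcnt non-HEADER descriptors:
 * ceil(nhcnt / 31). */
static uint32_t toy_header_count(uint32_t nhcnt)
{
	return (nhcnt + TOY_BDCOUNT_MAX - 1) / TOY_BDCOUNT_MAX;
}
```

Together with the trailing null descriptor, this gives the total ring-slot count noted in `bcmfs4_enqueue_single_request_qp()`: non-header count + header count + 1.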
+
+static uint64_t
+bcmfs4_src_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_msrc_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_dst_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_mdst_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs4_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
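The length checks above encode the SRC/MSRC split: a buffer whose length is not a 16-byte multiple must fit the 16-bit SRC LENGTH field directly (up to 65535 bytes), while a 16-byte-multiple buffer uses an MSRC descriptor whose LENGTH field holds `len / 16` (so up to 0xffff * 16 bytes). A simplified, self-contained version of the source-side check (`toy_` name invented):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_LEN_MASK 0xffffu	/* 16-bit LENGTH field, per the defines above */

/* Classify a source buffer length the way bcmfs4_sanity_check() does. */
static int toy_src_len_ok(uint32_t len)
{
	if (len & 0xf)				/* not a 16B multiple: plain SRC */
		return len <= TOY_LEN_MASK;
	return len / 16 <= TOY_LEN_MASK;	/* 16B multiple: MSRC holds len/16 */
}
```

The destination-side check is identical with the DST/MDST fields, which happen to use the same widths.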
+
+static uint32_t
+estimate_nonheader_desc_count(struct bcmfs_qp_message *msg)
+{
+ uint32_t cnt = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+
+ while (src < msg->srcs_count ||
+ dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ cnt++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+ while (dst_target && dst < msg->dsts_count) {
+ cnt++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++;
+ }
+ }
+
+ return cnt;
+}
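The walk above counts one descriptor per source, plus destination descriptors consumed until they cover each source's length (with `UINT_MAX` draining any leftover destinations once the sources run out). A self-contained copy of the same counting logic, for checking expected descriptor counts against sample scatter-gather lists (`toy_` name invented):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

static uint32_t toy_nonheader_count(const uint32_t *src_len, unsigned int nsrc,
				    const uint32_t *dst_len, unsigned int ndst)
{
	uint32_t cnt = 0;
	unsigned int src = 0, dst = 0, target = 0;

	while (src < nsrc || dst < ndst) {
		if (src < nsrc) {
			cnt++;			/* one SRC/MSRC per source */
			target = src_len[src++];
		} else {
			target = UINT_MAX;	/* drain remaining destinations */
		}
		/* consume destinations until this source's bytes are covered */
		while (target && dst < ndst) {
			cnt++;			/* one DST/MDST per piece */
			target = (dst_len[dst] < target) ?
				 target - dst_len[dst] : 0;
			dst++;
		}
	}
	return cnt;
}
```

For example, two sources of 32 and 16 bytes into a single 48-byte destination take three non-header descriptors: SRC(32), DST(48), SRC(16).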
+
+static void *
+bcmfs4_enqueue_msg(struct bcmfs_qp_message *msg,
+ uint32_t nhcnt, uint32_t reqid,
+ void *desc_ptr, uint32_t toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhpos = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+ void *orig_desc_ptr = desc_ptr;
+
+ if (!desc_ptr || !start_desc || !end_desc)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ while (src < msg->srcs_count || dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ if (msg->srcs_len[src] & 0xf) {
+ d = bcmfs4_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ } else {
+ d = bcmfs4_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+
+ while (dst_target && (dst < msg->dsts_count)) {
+ if (msg->dsts_len[dst] & 0xf) {
+ d = bcmfs4_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ } else {
+ d = bcmfs4_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++; /* for next buffer */
+ }
+ }
+
+ /* Null descriptor with invalid toggle bit */
+ rm_write_desc(desc_ptr, bcmfs4_null_desc(!toggle));
+
+ /* Ensure that descriptors have been written to memory */
+ rte_smp_wmb();
+
+ bcmfs4_flip_header_toggle(orig_desc_ptr);
+
+ return desc_ptr;
+}
+
+static int
+bcmfs4_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ int reqid;
+ void *next;
+ uint32_t nhcnt;
+ int ret = 0;
+ uint32_t pos = 0;
+ uint64_t slab = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs4_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /*
+ * Number of required descriptors = number of non-header descriptors +
+ * number of header descriptors +
+ * 1x null descriptor
+ */
+ nhcnt = estimate_nonheader_desc_count(msg);
+
+ /* Write descriptors to ring */
+ next = bcmfs4_enqueue_msg(msg, nhcnt, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ RING_BD_TOGGLE_VALID(txq->tx_write_ptr),
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void
+bcmfs4_ring_doorbell_qp(struct bcmfs_qp *qp __rte_unused)
+{
+ /* no door bell method supported */
+}
+
+static uint16_t
+bcmfs4_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that no
+ * more completions than were enqueued are processed.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get the current completion read and write offsets.
+ * Note: The completion write pointer must be read at least once
+ * after an MSI interrupt because HW maintains an internal MSI
+ * status which allows the next MSI interrupt only after the
+ * completion write pointer has been read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ rte_smp_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Determine message pointer based on reqid */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs4_start_qp(struct bcmfs_qp *qp)
+{
+ int timeout;
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/inactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs4_next_table_desc(RING_BD_TOGGLE_VALID(off),
+ next_addr);
+ else
+ d = bcmfs4_null_desc(RING_BD_TOGGLE_INVALID(off));
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts the test mid-run (Ctrl+C), all
+ * subsequent test runs will fail because the sw cmpl_read_offset
+ * and hw cmpl_write_offset will point at different completion BDs.
+ * To handle this, flush all the rings during startup instead of
+ * in the shutdown function.
+ * A ring flush resets the hw cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, sw has to read
+ * cmpl_write_offset or the DME/AE will not come out of the
+ * flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ val = BD_START_ADDR_VALUE(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_BD_START_ADDR);
+
+ /* BD write pointer will be the same as the HW write pointer */
+ tx_queue->tx_write_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_BD_WRITE_PTR);
+ tx_queue->tx_write_ptr *= FS_RING_DESC_SIZE;
+
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Program completion start address */
+ val = CMPL_START_ADDR_VALUE(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CMPL_START_ADDR);
+
+ /* Completion read pointer will be the same as the HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ /* Reserve 1024 * FS_RING_DESC_SIZE bytes past base phys for MSI */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id,
+ (uint8_t *)qp->ioreg + RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs4_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/inactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs4_qp_ops = {
+ .name = "fs4",
+ .enq_one_req = bcmfs4_enqueue_single_request_qp,
+ .ring_db = bcmfs4_ring_doorbell_qp,
+ .dequeue = bcmfs4_dequeue_qp,
+ .startq = bcmfs4_start_qp,
+ .stopq = bcmfs4_shutdown_qp,
+};
+
+RTE_INIT(bcmfs4_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs4_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
new file mode 100644
index 000000000..fd92121da
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
@@ -0,0 +1,677 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_qp.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_device.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Ring version */
+#define RING_VER_MAGIC 0x76303032
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDRESS_LSB 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR_LSB 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+#define RING_BD_START_ADDRESS_MSB 0x078
+#define RING_CMPL_START_ADDR_MSB 0x07c
+#define RING_DOORBELL_BD_WRITE_COUNT 0x074
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_CMPL_ALIGN_ORDER) & 0x07ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 59
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 59
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 59
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs5_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return (type == NPTR_TYPE) ? true : false;
+}
+
+static uint64_t
+bcmfs5_next_table_desc(uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_null_desc(void)
+{
+ return rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+}
+
+static uint64_t
+bcmfs5_header_desc(uint32_t startpkt, uint32_t endpkt,
+ uint32_t bdcount, uint32_t flags,
+ uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT, HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static int
+bcmfs5_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, void *start_desc,
+ void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _startpkt, _endpkt, _bdcount;
+ int is_nxt_page = 0;
+
+ /*
+ * Each request or packet start with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is represented by BDCOUNT field
+ * of HEADER descriptor. The max value of BDCOUNT field is 31 which
+ * means we can only have 31 non-HEADER descriptors following one
+ * HEADER descriptor.
+ *
+ * In general use, number of non-HEADER descriptors can easily go
+ * beyond 31. To tackle this situation, we have packet (or request)
+ * extension bits (STARTPKT and ENDPKT) in the HEADER descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of request
+ * (or packet) will have STARTPKT=1 and ENDPKT=0. The intermediate
+ * HEADER descriptors will have STARTPKT=0 and ENDPKT=0. The last
+ * HEADER descriptor will have STARTPKT=0 and ENDPKT=1.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs5_header_desc(_startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+
+ return is_nxt_page;
+}
+
+static uint64_t
+bcmfs5_src_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_msrc_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_dst_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_mdst_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs5_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static void *
+bcmfs5_enqueue_msg(struct bcmfs_queue *txq,
+ struct bcmfs_qp_message *msg,
+ uint32_t reqid, void *desc_ptr,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ unsigned int src, dst;
+ uint32_t nhpos = 0;
+ int nxt_page = 0;
+ uint32_t nhcnt = msg->srcs_count + msg->dsts_count;
+
+ if (desc_ptr == NULL || start_desc == NULL || end_desc == NULL)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ for (src = 0; src < msg->srcs_count; src++) {
+ if (msg->srcs_len[src] & 0xf)
+ d = bcmfs5_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ else
+ d = bcmfs5_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ for (dst = 0; dst < msg->dsts_count; dst++) {
+ if (msg->dsts_len[dst] & 0xf)
+ d = bcmfs5_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ else
+ d = bcmfs5_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ txq->descs_inflight += nhcnt + 1;
+
+ return desc_ptr;
+}
+
+static int
+bcmfs5_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ void *next;
+ int reqid;
+ int ret = 0;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs5_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /* Write descriptors to ring */
+ next = bcmfs5_enqueue_msg(txq, msg, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void bcmfs5_write_doorbell(struct bcmfs_qp *qp)
+{
+ struct bcmfs_queue *txq = &qp->tx_q;
+
+ /* Ensure descriptor writes complete before ringing the doorbell */
+ rte_wmb();
+
+ FS_MMIO_WRITE32(txq->descs_inflight,
+ (uint8_t *)qp->ioreg + RING_DOORBELL_BD_WRITE_COUNT);
+
+ /* reset the count */
+ txq->descs_inflight = 0;
+}
+
+static uint16_t
+bcmfs5_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that no
+ * more completions than were enqueued are processed.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get the current completion read and write offsets.
+ *
+ * Note: The completion write pointer must be read at least once
+ * after an MSI interrupt because HW maintains an internal MSI
+ * status which allows the next MSI interrupt only after the
+ * completion write pointer has been read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* read the ring cmpl write ptr before cmpl read offset */
+ rte_smp_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Retrieve context */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs5_start_qp(struct bcmfs_qp *qp)
+{
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ int timeout;
+ uint32_t bd_high, bd_low, cmpl_high, cmpl_low;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/inactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs5_next_table_desc(next_addr);
+ else
+ d = bcmfs5_null_desc();
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts the test mid-run (Ctrl+C), all
+ * subsequent test runs will fail because the sw cmpl_read_offset
+ * and hw cmpl_write_offset will point at different completion BDs.
+ * To handle this, flush all the rings during startup instead of
+ * in the shutdown function.
+ * A ring flush resets the hw cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, sw has to read
+ * cmpl_write_offset or the DME/AE will not come out of the
+ * flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ bd_low = lower_32_bits(tx_queue->base_phys_addr);
+ bd_high = upper_32_bits(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(bd_low, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_LSB);
+ FS_MMIO_WRITE32(bd_high, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_MSB);
+
+ tx_queue->tx_write_ptr = 0;
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Completion read pointer will be the same as the HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ /* Program completion start address */
+ cmpl_low = lower_32_bits(cmpl_queue->base_phys_addr);
+ cmpl_high = upper_32_bits(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(cmpl_low, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_LSB);
+ FS_MMIO_WRITE32(cmpl_high, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_MSB);
+
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id, (uint8_t *)qp->ioreg +
+ RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs5_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/inactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs5_qp_ops = {
+ .name = "fs5",
+ .enq_one_req = bcmfs5_enqueue_single_request_qp,
+ .ring_db = bcmfs5_write_doorbell,
+ .dequeue = bcmfs5_dequeue_qp,
+ .startq = bcmfs5_start_qp,
+ .stopq = bcmfs5_shutdown_qp,
+};
+
+RTE_INIT(bcmfs5_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs5_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
new file mode 100644
index 000000000..9445d28f9
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Completion descriptor format */
+#define FS_CMPL_OPAQUE_SHIFT 0
+#define FS_CMPL_OPAQUE_MASK 0xffff
+#define FS_CMPL_ENGINE_STATUS_SHIFT 16
+#define FS_CMPL_ENGINE_STATUS_MASK 0xffff
+#define FS_CMPL_DME_STATUS_SHIFT 32
+#define FS_CMPL_DME_STATUS_MASK 0xffff
+#define FS_CMPL_RM_STATUS_SHIFT 48
+#define FS_CMPL_RM_STATUS_MASK 0xffff
+/* Completion RM status code */
+#define FS_RM_STATUS_CODE_SHIFT 0
+#define FS_RM_STATUS_CODE_MASK 0x3ff
+#define FS_RM_STATUS_CODE_GOOD 0x0
+#define FS_RM_STATUS_CODE_AE_TIMEOUT 0x3ff
+
+
+/* Completion DME status code */
+#define FS_DME_STATUS_MEM_COR_ERR BIT(0)
+#define FS_DME_STATUS_MEM_UCOR_ERR BIT(1)
+#define FS_DME_STATUS_FIFO_UNDRFLOW BIT(2)
+#define FS_DME_STATUS_FIFO_OVERFLOW BIT(3)
+#define FS_DME_STATUS_RRESP_ERR BIT(4)
+#define FS_DME_STATUS_BRESP_ERR BIT(5)
+#define FS_DME_STATUS_ERROR_MASK (FS_DME_STATUS_MEM_COR_ERR | \
+ FS_DME_STATUS_MEM_UCOR_ERR | \
+ FS_DME_STATUS_FIFO_UNDRFLOW | \
+ FS_DME_STATUS_FIFO_OVERFLOW | \
+ FS_DME_STATUS_RRESP_ERR | \
+ FS_DME_STATUS_BRESP_ERR)
+
+/* APIs related to ring manager descriptors */
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask)
+{
+ return ((val & mask) << shift);
+}
+
+uint64_t
+rm_read_desc(void *desc_ptr)
+{
+ return le64_to_cpu(*((uint64_t *)desc_ptr));
+}
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc)
+{
+ *((uint64_t *)desc_ptr) = cpu_to_le64(desc);
+}
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc)
+{
+ return (uint32_t)(cmpl_desc & FS_CMPL_OPAQUE_MASK);
+}
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc)
+{
+ uint32_t status;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_DME_STATUS_SHIFT,
+ FS_CMPL_DME_STATUS_MASK);
+ if (status & FS_DME_STATUS_ERROR_MASK)
+ return -EIO;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_RM_STATUS_SHIFT,
+ FS_CMPL_RM_STATUS_MASK);
+ status &= FS_RM_STATUS_CODE_MASK;
+ if (status == FS_RM_STATUS_CODE_AE_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
new file mode 100644
index 000000000..5cbafa0da
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_COMMON_H_
+#define _BCMFS_RM_COMMON_H_
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* Descriptor helper macros */
+#define FS_DESC_DEC(d, s, m) (((d) >> (s)) & (m))
+
+#define FS_RING_BD_ALIGN_CHECK(addr) \
+ (!((addr) & ((0x1 << FS_RING_BD_ALIGN_ORDER) - 1)))
+
+#define cpu_to_le64 rte_cpu_to_le_64
+#define cpu_to_le32 rte_cpu_to_le_32
+#define cpu_to_le16 rte_cpu_to_le_16
+
+#define le64_to_cpu rte_le_to_cpu_64
+#define le32_to_cpu rte_le_to_cpu_32
+#define le16_to_cpu rte_le_to_cpu_16
+
+#define lower_32_bits(x) ((uint32_t)(x))
+#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16))
+
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask);
+uint64_t
+rm_read_desc(void *desc_ptr);
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc);
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc);
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc);
+
+#endif /* _BCMFS_RM_COMMON_H_ */
+
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 7e2bcbf14..cd58bd5e2 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -8,5 +8,8 @@ sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
'bcmfs_vfio.c',
- 'bcmfs_qp.c'
+ 'bcmfs_qp.c',
+ 'hw/bcmfs4_rm.c',
+ 'hw/bcmfs5_rm.c',
+ 'hw/bcmfs_rm_common.c'
)
--
2.17.1
* [dpdk-dev] [PATCH 0 5/8] crypto/bcmfs: create a symmetric cryptodev
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (3 preceding siblings ...)
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
@ 2020-08-11 14:58 ` Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
` (3 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Create a symmetric crypto device and supported cryptodev ops.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 15 ++
drivers/crypto/bcmfs/bcmfs_device.h | 9 +
drivers/crypto/bcmfs/bcmfs_qp.c | 37 +++
drivers/crypto/bcmfs/bcmfs_qp.h | 16 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 387 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +++
drivers/crypto/bcmfs/bcmfs_sym_req.h | 22 ++
drivers/crypto/bcmfs/meson.build | 3 +-
8 files changed, 526 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index bd2d64acf..c9263ec28 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -13,6 +13,7 @@
#include "bcmfs_logs.h"
#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
+#include "bcmfs_sym_pmd.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -239,6 +240,7 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
char out_dirname[BCMFS_MAX_PATH_LEN];
uint32_t fsdev_dev[BCMFS_MAX_NODES];
enum bcmfs_device_type dtype;
+ int err;
int i = 0;
int dev_idx;
int count = 0;
@@ -290,7 +292,20 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
return -ENODEV;
}
+ err = bcmfs_sym_dev_create(fsdev);
+ if (err) {
+ BCMFS_LOG(WARNING,
+ "Failed to create BCMFS SYM PMD for device %s",
+ fsdev->name);
+ goto pmd_create_fail;
+ }
+
return 0;
+
+pmd_create_fail:
+ fsdev_release(fsdev);
+
+ return err;
}
static int
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 96beb10fa..37907b91f 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -62,6 +62,15 @@ struct bcmfs_device {
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
/* queue pair ops exported by symmetric crypto hw */
struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
+ /* a cryptodevice attached to bcmfs device */
+ struct rte_cryptodev *cdev;
+ /* a rte_device to register with cryptodev */
+ struct rte_device sym_rte_dev;
+ /* private info to keep with cryptodev */
+ struct bcmfs_sym_dev_private *sym_dev;
};
+/* stats exported by device */
+
+
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index ec1327b78..cb5ff6c61 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -344,3 +344,40 @@ bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
return deq;
}
+
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats)
+{
+ int i;
+
+ if (stats == NULL) {
+ BCMFS_LOG(ERR, "invalid param: stats %p",
+ stats);
+ return;
+ }
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+
+ stats->enqueued_count += qp[i]->stats.enqueued_count;
+ stats->dequeued_count += qp[i]->stats.dequeued_count;
+ stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+ stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+ }
+}
+
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp)
+{
+ int i;
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+ memset(&qp[i]->stats, 0, sizeof(qp[i]->stats));
+ }
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index e4b0c3f2f..fec58ca71 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -24,6 +24,13 @@ enum bcmfs_queue_type {
BCMFS_RM_CPLQ
};
+#define BCMFS_QP_IOBASE_XLATE(base, idx) \
+ ((base) + ((idx) * BCMFS_HW_QUEUE_IO_ADDR_LEN))
+
+/* Max pkts for preprocessing before submitting to h/w qp */
+#define BCMFS_MAX_REQS_BUFF 64
+
+/* qp stats */
struct bcmfs_qp_stats {
/* Count of all operations enqueued */
uint64_t enqueued_count;
@@ -92,6 +99,10 @@ struct bcmfs_qp {
struct bcmfs_qp_stats stats;
/* h/w ops associated with qp */
struct bcmfs_hw_queue_pair_ops *ops;
+ /* bcmfs requests pool */
+ struct rte_mempool *sr_mp;
+ /* a temporary buffer to keep message pointers */
+ struct bcmfs_qp_message *infl_msgs[BCMFS_MAX_REQS_BUFF];
} __rte_cache_aligned;
@@ -123,4 +134,9 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
uint16_t queue_pair_id,
struct bcmfs_qp_config *bcmfs_conf);
+/* stats functions */
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats);
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp);
+
#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
new file mode 100644
index 000000000..0f96915f7
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_req.h"
+
+uint8_t cryptodev_bcmfs_driver_id;
+
+static int bcmfs_sym_qp_release(struct rte_cryptodev *dev,
+ uint16_t queue_pair_id);
+
+static int
+bcmfs_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused struct rte_cryptodev_config *config)
+{
+ return 0;
+}
+
+static int
+bcmfs_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+static int
+bcmfs_sym_dev_close(struct rte_cryptodev *dev)
+{
+ int i, ret;
+
+ for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+ ret = bcmfs_sym_qp_release(dev, i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_info *dev_info)
+{
+ struct bcmfs_sym_dev_private *internals = dev->data->dev_private;
+ struct bcmfs_device *fsdev = internals->fsdev;
+
+ if (dev_info != NULL) {
+ dev_info->driver_id = cryptodev_bcmfs_driver_id;
+ dev_info->feature_flags = dev->feature_flags;
+ dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
+ /* No limit on the number of sessions */
+ dev_info->sym.max_nb_sessions = 0;
+ }
+}
+
+static void
+bcmfs_sym_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+ struct bcmfs_qp_stats bcmfs_stats = {0};
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (stats == NULL || dev == NULL) {
+ BCMFS_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_get(fsdev->qps_in_use, fsdev->max_hw_qps, &bcmfs_stats);
+
+ stats->enqueued_count = bcmfs_stats.enqueued_count;
+ stats->dequeued_count = bcmfs_stats.dequeued_count;
+ stats->enqueue_err_count = bcmfs_stats.enqueue_err_count;
+ stats->dequeue_err_count = bcmfs_stats.dequeue_err_count;
+}
+
+static void
+bcmfs_sym_stats_reset(struct rte_cryptodev *dev)
+{
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (dev == NULL) {
+ BCMFS_LOG(ERR, "invalid cryptodev ptr %p", dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_reset(fsdev->qps_in_use, fsdev->max_hw_qps);
+}
+
+static int
+bcmfs_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+ struct bcmfs_sym_dev_private *bcmfs_private = dev->data->dev_private;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)
+ (dev->data->queue_pairs[queue_pair_id]);
+
+ BCMFS_LOG(DEBUG, "Release sym qp %u on device %d",
+ queue_pair_id, dev->data->dev_id);
+
+ rte_mempool_free(qp->sr_mp);
+
+ bcmfs_private->fsdev->qps_in_use[queue_pair_id] = NULL;
+
+ return bcmfs_qp_release((struct bcmfs_qp **)
+ &dev->data->queue_pairs[queue_pair_id]);
+}
+
+static void
+spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
+{
+ memset(sr, 0, sizeof(*sr));
+}
+
+static void
+req_pool_obj_init(__rte_unused struct rte_mempool *mp,
+ __rte_unused void *opaque, void *obj,
+ __rte_unused unsigned int obj_idx)
+{
+ spu_req_init(obj, rte_mempool_virt2iova(obj));
+}
+
+static struct rte_mempool *
+bcmfs_sym_req_pool_create(struct rte_cryptodev *cdev __rte_unused,
+ uint32_t nobjs, uint16_t qp_id,
+ int socket_id)
+{
+ char softreq_pool_name[RTE_RING_NAMESIZE];
+ struct rte_mempool *mp;
+
+ snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_%d",
+ "bcm_sym", qp_id);
+
+ mp = rte_mempool_create(softreq_pool_name,
+ RTE_ALIGN_MUL_CEIL(nobjs, 64),
+ sizeof(struct bcmfs_sym_request),
+ 64, 0, NULL, NULL, req_pool_obj_init, NULL,
+ socket_id, 0);
+ if (mp == NULL)
+ BCMFS_LOG(ERR, "Failed to create req pool, qid %d, err %d",
+ qp_id, rte_errno);
+
+ return mp;
+}
+
+static int
+bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
+ const struct rte_cryptodev_qp_conf *qp_conf,
+ int socket_id)
+{
+ int ret = 0;
+ struct bcmfs_qp *qp = NULL;
+ struct bcmfs_qp_config bcmfs_qp_conf;
+
+ struct bcmfs_qp **qp_addr =
+ (struct bcmfs_qp **)&cdev->data->queue_pairs[qp_id];
+ struct bcmfs_sym_dev_private *bcmfs_private = cdev->data->dev_private;
+ struct bcmfs_device *fsdev = bcmfs_private->fsdev;
+
+
+ /* If qp is already in use free ring memory and qp metadata. */
+ if (*qp_addr != NULL) {
+ ret = bcmfs_sym_qp_release(cdev, qp_id);
+ if (ret < 0)
+ return ret;
+ }
+
+ if (qp_id >= fsdev->max_hw_qps) {
+ BCMFS_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+ return -EINVAL;
+ }
+
+ bcmfs_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+ bcmfs_qp_conf.socket_id = socket_id;
+ bcmfs_qp_conf.max_descs_req = BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ;
+ bcmfs_qp_conf.iobase = BCMFS_QP_IOBASE_XLATE(fsdev->mmap_addr, qp_id);
+ bcmfs_qp_conf.ops = fsdev->sym_hw_qp_ops;
+
+ ret = bcmfs_qp_setup(qp_addr, qp_id, &bcmfs_qp_conf);
+ if (ret != 0)
+ return ret;
+
+ qp = (struct bcmfs_qp *)*qp_addr;
+
+ qp->sr_mp = bcmfs_sym_req_pool_create(cdev, qp_conf->nb_descriptors,
+ qp_id, socket_id);
+ if (qp->sr_mp == NULL)
+ return -ENOMEM;
+
+ /* store a link to the qp in the bcmfs_device */
+ bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;
+
+ cdev->data->queue_pairs[qp_id] = qp;
+ BCMFS_LOG(NOTICE, "queue %d setup done", qp_id);
+
+ return 0;
+}
+
+static struct rte_cryptodev_ops crypto_bcmfs_ops = {
+ /* Device related operations */
+ .dev_configure = bcmfs_sym_dev_config,
+ .dev_start = bcmfs_sym_dev_start,
+ .dev_stop = bcmfs_sym_dev_stop,
+ .dev_close = bcmfs_sym_dev_close,
+ .dev_infos_get = bcmfs_sym_dev_info_get,
+ /* Stats Collection */
+ .stats_get = bcmfs_sym_stats_get,
+ .stats_reset = bcmfs_sym_stats_reset,
+ /* Queue-Pair management */
+ .queue_pair_setup = bcmfs_sym_qp_setup,
+ .queue_pair_release = bcmfs_sym_qp_release,
+};
+
+/** Enqueue burst */
+static uint16_t
+bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i, j;
+ uint16_t enq = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
+
+ if (nb_ops == 0)
+ return 0;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ /* We do not process more than available space */
+ if (nb_ops > (qp->nb_descriptors - qp->nb_pending_requests))
+ nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
+
+ for (i = 0; i < nb_ops; i++) {
+ if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
+ goto enqueue_err;
+
+ /* save rte_crypto_op */
+ sreq->op = ops[i];
+
+ /* save context */
+ qp->infl_msgs[i] = &sreq->msgs;
+ qp->infl_msgs[i]->ctx = (void *)sreq;
+ }
+ /* Send burst request to hw QP */
+ enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
+
+ for (j = enq; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+
+enqueue_err:
+ for (j = 0; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+}
+
+static uint16_t
+bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i;
+ uint16_t deq = 0;
+ unsigned int pkts = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = queue_pair;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ deq = bcmfs_dequeue_op_burst(qp, (void **)qp->infl_msgs, nb_ops);
+ /* get rte_crypto_ops */
+ for (i = 0; i < deq; i++) {
+ sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+
+ ops[pkts++] = sreq->op;
+
+ rte_mempool_put(qp->sr_mp, sreq);
+ }
+
+ return pkts;
+}
+
+/*
+ * An rte_driver is needed in the registration of both the
+ * device and the driver with cryptodev.
+ */
+static const char bcmfs_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_BCMFS_SYM_PMD);
+static const struct rte_driver cryptodev_bcmfs_sym_driver = {
+ .name = bcmfs_sym_drv_name,
+ .alias = bcmfs_sym_drv_name
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .socket_id = rte_socket_id(),
+ .private_data_size = sizeof(struct bcmfs_sym_dev_private)
+ };
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct rte_cryptodev *cryptodev;
+ struct bcmfs_sym_dev_private *internals;
+
+ snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
+ fsdev->name, "sym");
+
+ /* Populate subset device to use in cryptodev device creation */
+ fsdev->sym_rte_dev.driver = &cryptodev_bcmfs_sym_driver;
+ fsdev->sym_rte_dev.numa_node = 0;
+ fsdev->sym_rte_dev.devargs = NULL;
+
+ cryptodev = rte_cryptodev_pmd_create(name,
+ &fsdev->sym_rte_dev,
+ &init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ fsdev->sym_rte_dev.name = cryptodev->data->name;
+ cryptodev->driver_id = cryptodev_bcmfs_driver_id;
+ cryptodev->dev_ops = &crypto_bcmfs_ops;
+
+ cryptodev->enqueue_burst = bcmfs_sym_pmd_enqueue_op_burst;
+ cryptodev->dequeue_burst = bcmfs_sym_pmd_dequeue_op_burst;
+
+ cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+
+ internals = cryptodev->data->dev_private;
+ internals->fsdev = fsdev;
+ fsdev->sym_dev = internals;
+
+ internals->sym_dev_id = cryptodev->data->dev_id;
+
+ BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
+ cryptodev->data->name, internals->sym_dev_id);
+ return 0;
+}
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev *cryptodev;
+
+ if (fsdev == NULL)
+ return -ENODEV;
+ if (fsdev->sym_dev == NULL)
+ return 0;
+
+ /* free crypto device */
+ cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id);
+ rte_cryptodev_pmd_destroy(cryptodev);
+ fsdev->sym_rte_dev.name = NULL;
+ fsdev->sym_dev = NULL;
+
+ return 0;
+}
+
+static struct cryptodev_driver bcmfs_crypto_drv;
+RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv,
+ cryptodev_bcmfs_sym_driver,
+ cryptodev_bcmfs_driver_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
new file mode 100644
index 000000000..65d704609
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_PMD_H_
+#define _BCMFS_SYM_PMD_H_
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_device.h"
+
+#define CRYPTODEV_NAME_BCMFS_SYM_PMD crypto_bcmfs
+
+#define BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ 16
+
+extern uint8_t cryptodev_bcmfs_driver_id;
+
+/** Private data structure for a BCMFS device.
+ * This BCMFS device offers only the symmetric crypto service;
+ * there is one of these per bcmfs device.
+ */
+struct bcmfs_sym_dev_private {
+ /* The bcmfs device hosting the service */
+ struct bcmfs_device *fsdev;
+ /* Device instance for this rte_cryptodev */
+ uint8_t sym_dev_id;
+ /* BCMFS device symmetric crypto capabilities */
+ const struct rte_cryptodev_capabilities *fsdev_capabilities;
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fdev);
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fdev);
+
+#endif /* _BCMFS_SYM_PMD_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
new file mode 100644
index 000000000..0f0b051f1
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_REQ_H_
+#define _BCMFS_SYM_REQ_H_
+
+#include "bcmfs_dev_msg.h"
+
+/*
+ * This structure holds the supporting data required to process
+ * an rte_crypto_op.
+ */
+struct bcmfs_sym_request {
+ /* bcmfs qp message for h/w queues to process */
+ struct bcmfs_qp_message msgs;
+ /* crypto op */
+ struct rte_crypto_op *op;
+};
+
+#endif /* _BCMFS_SYM_REQ_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index cd58bd5e2..d9a3d73e9 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -11,5 +11,6 @@ sources = files(
'bcmfs_qp.c',
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
- 'hw/bcmfs_rm_common.c'
+ 'hw/bcmfs_rm_common.c',
+ 'bcmfs_sym_pmd.c'
)
--
2.17.1
* [dpdk-dev] [PATCH 0 6/8] crypto/bcmfs: add session handling and capabilities
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (4 preceding siblings ...)
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
@ 2020-08-11 14:58 ` Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
` (2 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Add session handling and the capabilities supported by the crypto
h/w accelerator.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
doc/guides/cryptodevs/bcmfs.rst | 46 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 ++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 170 ++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 13 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 426 ++++++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 99 +++
drivers/crypto/bcmfs/meson.build | 4 +-
9 files changed, 1593 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index 752ce028a..2488b19f7 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -18,9 +18,55 @@ CONFIG_RTE_LIBRTE_PMD_BCMFS setting is set to `y` in config/common_base file.
* ``CONFIG_RTE_LIBRTE_PMD_BCMFS=y``
+Features
+~~~~~~~~
+
+The BCMFS SYM PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_3DES_CTR``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CTR``
+* ``RTE_CRYPTO_CIPHER_AES192_CTR``
+* ``RTE_CRYPTO_CIPHER_AES256_CTR``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_XCBC_MAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_GMAC``
+* ``RTE_CRYPTO_AUTH_AES_CMAC``
+
+Supported AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+* ``RTE_CRYPTO_AEAD_AES_CCM``
+
Initialization
--------------
The BCMFS crypto PMD depends upon the devices present in the path
/sys/bus/platform/devices/fs<version>/<dev_name> on the platform.
Each cryptodev PMD instance can be attached to one of the nodes
present in that path.
+
+Limitations
+~~~~~~~~~~~
+
+* Only the session-oriented API is supported (session-less APIs are not supported).
+* CCM is not supported on Broadcom's SoCs having the FlexSparc4 unit.
diff --git a/doc/guides/cryptodevs/features/bcmfs.ini b/doc/guides/cryptodevs/features/bcmfs.ini
new file mode 100644
index 000000000..82d2c639d
--- /dev/null
+++ b/doc/guides/cryptodevs/features/bcmfs.ini
@@ -0,0 +1,56 @@
+;
+; Supported features of the 'bcmfs' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto = Y
+Sym operation chaining = Y
+HW Accelerated = Y
+Protocol offload = Y
+In Place SGL = Y
+
+;
+; Supported crypto algorithms of the 'bcmfs' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+AES CTR (128) = Y
+AES CTR (192) = Y
+AES CTR (256) = Y
+AES XTS (128) = Y
+AES XTS (256) = Y
+3DES CBC = Y
+DES CBC = Y
+;
+; Supported authentication algorithms of the 'bcmfs' crypto driver.
+;
+[Auth]
+MD5 HMAC = Y
+SHA1 = Y
+SHA1 HMAC = Y
+SHA224 = Y
+SHA224 HMAC = Y
+SHA256 = Y
+SHA256 HMAC = Y
+SHA384 = Y
+SHA384 HMAC = Y
+SHA512 = Y
+SHA512 HMAC = Y
+AES GMAC = Y
+AES CMAC (128) = Y
+AES CBC MAC = Y
+AES XCBC MAC = Y
+
+;
+; Supported AEAD algorithms of the 'bcmfs' crypto driver.
+;
+[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
+AES CCM (128) = Y
+AES CCM (192) = Y
+AES CCM (256) = Y
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
new file mode 100644
index 000000000..bb8fa9f81
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
@@ -0,0 +1,764 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_sym_capabilities.h"
+
+static const struct rte_cryptodev_capabilities bcmfs_sym_capabilities[] = {
+ {
+ /* SHA1 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224,
+ .block_size = 144,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256,
+ .block_size = 136,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384,
+ .block_size = 104,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512,
+ .block_size = 72,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 HMAC*/
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224_HMAC,
+ .block_size = 144,
+ .key_size = {
+ .min = 1,
+ .max = 144,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256_HMAC,
+ .block_size = 136,
+ .key_size = {
+ .min = 1,
+ .max = 136,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384_HMAC,
+ .block_size = 104,
+ .key_size = {
+ .min = 1,
+ .max = 104,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512_HMAC,
+ .block_size = 72,
+ .key_size = {
+ .min = 1,
+ .max = 72,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES XCBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES GMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_ECB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CTR */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CTR,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES XTS */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_XTS,
+ .block_size = 16,
+ .key_size = {
+ .min = 32,
+ .max = 64,
+ .increment = 32
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_CCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 16,
+ .increment = 2
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 7,
+ .max = 13,
+ .increment = 1
+ },
+ }, }
+ }, }
+ },
+
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+const struct rte_cryptodev_capabilities *
+bcmfs_sym_get_capabilities(void)
+{
+ return bcmfs_sym_capabilities;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
new file mode 100644
index 000000000..3ff61b7d2
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_CAPABILITIES_H_
+#define _BCMFS_SYM_CAPABILITIES_H_
+
+/*
+ * Get capabilities list for the device
+ */
+const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void);
+
+#endif /* _BCMFS_SYM_CAPABILITIES_H_ */
+
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
new file mode 100644
index 000000000..b5657a9bc
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_DEFS_H_
+#define _BCMFS_SYM_DEFS_H_
+
+/*
+ * Max block size of hash algorithm
+ * currently SHA3 supports max block size
+ * of 144 bytes
+ */
+#define BCMFS_MAX_KEY_SIZE 144
+#define BCMFS_MAX_IV_SIZE 16
+#define BCMFS_MAX_DIGEST_SIZE 64
+
+/** Symmetric Cipher Direction */
+enum bcmfs_crypto_cipher_op {
+ /** Encrypt cipher operation */
+ BCMFS_CRYPTO_CIPHER_OP_ENCRYPT,
+
+ /** Decrypt cipher operation */
+ BCMFS_CRYPTO_CIPHER_OP_DECRYPT,
+};
+
+/** Symmetric Cipher Algorithms */
+enum bcmfs_crypto_cipher_algorithm {
+ /** NULL cipher algorithm. No mode applies to the NULL algorithm. */
+ BCMFS_CRYPTO_CIPHER_NONE = 0,
+
+ /** DES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_DES_CBC,
+
+ /** DES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_DES_ECB,
+
+ /** Triple DES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_3DES_CBC,
+
+ /** Triple DES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_3DES_ECB,
+
+ /** AES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_AES_CBC,
+
+ /** AES algorithm in CCM mode. */
+ BCMFS_CRYPTO_CIPHER_AES_CCM,
+
+ /** AES algorithm in Counter mode */
+ BCMFS_CRYPTO_CIPHER_AES_CTR,
+
+ /** AES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_AES_ECB,
+
+ /** AES algorithm in GCM mode. */
+ BCMFS_CRYPTO_CIPHER_AES_GCM,
+
+ /** AES algorithm in XTS mode */
+ BCMFS_CRYPTO_CIPHER_AES_XTS,
+
+ /** AES algorithm in OFB mode */
+ BCMFS_CRYPTO_CIPHER_AES_OFB,
+};
+
+/** Symmetric Authentication Algorithms */
+enum bcmfs_crypto_auth_algorithm {
+ /** NULL hash algorithm. */
+ BCMFS_CRYPTO_AUTH_NONE = 0,
+
+ /** MD5 algorithm */
+ BCMFS_CRYPTO_AUTH_MD5,
+
+ /** MD5-HMAC algorithm */
+ BCMFS_CRYPTO_AUTH_MD5_HMAC,
+
+ /** SHA1 algorithm */
+ BCMFS_CRYPTO_AUTH_SHA1,
+
+ /** SHA1-HMAC algorithm */
+ BCMFS_CRYPTO_AUTH_SHA1_HMAC,
+
+ /** 224 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA224,
+
+ /** 224 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA224_HMAC,
+
+ /** 256 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA256,
+
+ /** 256 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA256_HMAC,
+
+ /** 384 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA384,
+
+ /** 384 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA384_HMAC,
+
+ /** 512 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA512,
+
+ /** 512 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA512_HMAC,
+
+ /** 224 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_224,
+
+ /** 224 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_224_HMAC,
+
+ /** 256 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_256,
+
+ /** 256 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_256_HMAC,
+
+ /** 384 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_384,
+
+ /** 384 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_384_HMAC,
+
+ /** 512 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_512,
+
+ /** 512 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_512_HMAC,
+
+ /** AES XCBC MAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_XCBC_MAC,
+
+ /** AES CMAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_CMAC,
+
+ /** AES CBC-MAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_CBC_MAC,
+
+ /** AES GMAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_GMAC,
+
+ /** AES algorithm in GCM mode. */
+ BCMFS_CRYPTO_AUTH_AES_GCM,
+
+ /** AES algorithm in CCM mode. */
+ BCMFS_CRYPTO_AUTH_AES_CCM,
+};
+
+/** Symmetric Authentication Operations */
+enum bcmfs_crypto_auth_op {
+ /** Verify authentication digest */
+ BCMFS_CRYPTO_AUTH_OP_VERIFY,
+
+ /** Generate authentication digest */
+ BCMFS_CRYPTO_AUTH_OP_GENERATE,
+};
+
+enum bcmfs_sym_crypto_class {
+ /** Cipher algorithm */
+ BCMFS_CRYPTO_CIPHER,
+
+ /** Hash algorithm */
+ BCMFS_CRYPTO_HASH,
+
+ /** Authenticated Encryption with Associated Data algorithm */
+ BCMFS_CRYPTO_AEAD,
+};
+
+#endif /* _BCMFS_SYM_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 0f96915f7..381ca8ea4 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -14,6 +14,8 @@
#include "bcmfs_qp.h"
#include "bcmfs_sym_pmd.h"
#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+#include "bcmfs_sym_capabilities.h"
uint8_t cryptodev_bcmfs_driver_id;
@@ -65,6 +67,7 @@ bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
/* No limit of number of sessions */
dev_info->sym.max_nb_sessions = 0;
+ dev_info->capabilities = bcmfs_sym_get_capabilities();
}
}
@@ -228,6 +231,10 @@ static struct rte_cryptodev_ops crypto_bcmfs_ops = {
/* Queue-Pair management */
.queue_pair_setup = bcmfs_sym_qp_setup,
.queue_pair_release = bcmfs_sym_qp_release,
+ /* Crypto session related operations */
+ .sym_session_get_size = bcmfs_sym_session_get_private_size,
+ .sym_session_configure = bcmfs_sym_session_configure,
+ .sym_session_clear = bcmfs_sym_session_clear
};
/** Enqueue burst */
@@ -239,6 +246,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
int i, j;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
+ struct bcmfs_sym_session *sess;
struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
if (nb_ops == 0)
@@ -252,6 +260,10 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
for (i = 0; i < nb_ops; i++) {
+ sess = bcmfs_sym_get_session(ops[i]);
+ if (unlikely(sess == NULL))
+ goto enqueue_err;
+
if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
goto enqueue_err;
@@ -356,6 +368,7 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
fsdev->sym_dev = internals;
internals->sym_dev_id = cryptodev->data->dev_id;
+ internals->fsdev_capabilities = bcmfs_sym_get_capabilities();
BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
cryptodev->data->name, internals->sym_dev_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
new file mode 100644
index 000000000..3d1fce629
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -0,0 +1,426 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_crypto.h>
+#include <rte_crypto_sym.h>
+#include <rte_log.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_session.h"
+
+/** Configure the session from a crypto xform chain */
+static enum bcmfs_sym_chain_order
+crypto_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+ enum bcmfs_sym_chain_order res = BCMFS_SYM_CHAIN_NOT_SUPPORTED;
+
+ if (xform != NULL) {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ res = BCMFS_SYM_CHAIN_AEAD;
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_AUTH;
+ else if (xform->next->type ==
+ RTE_CRYPTO_SYM_XFORM_CIPHER)
+ res = BCMFS_SYM_CHAIN_AUTH_CIPHER;
+ }
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_CIPHER;
+ else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ res = BCMFS_SYM_CHAIN_CIPHER_AUTH;
+ }
+ }
+
+ return res;
+}
+
+/* Get session cipher key from input cipher key */
+static void
+get_key(const uint8_t *input_key, int keylen, uint8_t *session_key)
+{
+ memcpy(session_key, input_key, keylen);
+}
+
+/* Set session cipher parameters */
+static int
+crypto_set_session_cipher_parameters
+ (struct bcmfs_sym_session *sess,
+ const struct rte_crypto_cipher_xform *cipher_xform)
+{
+ int rc = 0;
+
+ /* Select cipher direction */
+ sess->cipher.direction = cipher_xform->op;
+ sess->cipher.key.length = cipher_xform->key.length;
+ sess->cipher.iv.offset = cipher_xform->iv.offset;
+ sess->cipher.iv.length = cipher_xform->iv.length;
+
+ /* Select cipher algo */
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_3DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_3DES_ECB;
+ break;
+ case RTE_CRYPTO_CIPHER_DES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_ECB;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_XTS:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_XTS;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Set session failed: unknown cipher algorithm");
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_NONE;
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(cipher_xform->key.data,
+ sess->cipher.key.length,
+ sess->cipher.key.data);
+
+ return rc;
+}
+
+/* Set session auth parameters */
+static int
+crypto_set_session_auth_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_auth_xform
+ *auth_xform)
+{
+ int rc = 0;
+
+ /* Select auth generate/verify */
+ sess->auth.operation = auth_xform->op ?
+ BCMFS_CRYPTO_AUTH_OP_GENERATE :
+ BCMFS_CRYPTO_AUTH_OP_VERIFY;
+ sess->auth.key.length = auth_xform->key.length;
+ sess->auth.digest_length = auth_xform->digest_length;
+ sess->auth.iv.length = auth_xform->iv.length;
+ sess->auth.iv.offset = auth_xform->iv.offset;
+
+ /* Select auth algo */
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_MD5:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_MD5;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_512;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_MD5_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA1_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA224_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_224_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_XCBC_MAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_GMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CBC_MAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CMAC;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid auth algorithm");
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(auth_xform->key.data,
+ auth_xform->key.length,
+ sess->auth.key.data);
+
+ return rc;
+}
+
+/* Set session aead parameters */
+static int
+crypto_set_session_aead_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+
+ sess->cipher.direction = xform->aead.op;
+ sess->cipher.iv.offset = xform->aead.iv.offset;
+ sess->cipher.iv.length = xform->aead.iv.length;
+ sess->aead.aad_length = xform->aead.aad_length;
+ sess->cipher.key.length = xform->aead.key.length;
+ sess->auth.digest_length = xform->aead.digest_length;
+
+ /* Select aead algo */
+ switch (xform->aead.algo) {
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CCM;
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_AES_GCM:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_GCM;
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_GCM;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid AEAD algorithm");
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(xform->aead.key.data,
+ xform->aead.key.length,
+ sess->cipher.key.data);
+
+ return rc;
+}
+
+static struct rte_crypto_auth_xform *
+crypto_get_auth_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ return &xform->auth;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+crypto_get_cipher_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return &xform->cipher;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+
+/** Parse crypto xform chain and set private session parameters */
+static int
+crypto_set_session_parameters(struct bcmfs_sym_session *sess,
+ struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+ struct rte_crypto_cipher_xform *cipher_xform =
+ crypto_get_cipher_xform(xform);
+ struct rte_crypto_auth_xform *auth_xform =
+ crypto_get_auth_xform(xform);
+
+ sess->chain_order = crypto_get_chain_order(xform);
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ sess->cipher_first = false;
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ sess->cipher_first = true;
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ if (crypto_set_session_aead_parameters(sess,
+ xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid aead");
+ rc = -EINVAL;
+ }
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid chain order");
+ rc = -EINVAL;
+ break;
+ }
+
+error:
+ return rc;
+}
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op)
+{
+ struct bcmfs_sym_session *sess = NULL;
+
+ if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
+ BCMFS_DP_LOG(ERR, "operation op(%p) is sessionless", op);
+ } else if (likely(op->sym->session != NULL)) {
+ /* get existing session */
+ sess = (struct bcmfs_sym_session *)
+ get_sym_session_private_data(op->sym->session,
+ cryptodev_bcmfs_driver_id);
+ }
+
+ if (sess == NULL)
+ op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+
+ return sess;
+}
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ int ret;
+
+ if (unlikely(sess == NULL)) {
+ BCMFS_DP_LOG(ERR, "Invalid session struct");
+ return -EINVAL;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ BCMFS_DP_LOG(ERR,
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ ret = crypto_set_session_parameters(sess_private_data, xform);
+
+ if (ret != 0) {
+ BCMFS_DP_LOG(ERR, "Failed to configure session parameters");
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sym_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
+}
+
+/* Clear the memory of session so it doesn't leave key material behind */
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t index = dev->driver_id;
+ void *sess_priv = get_sym_session_private_data(sess, index);
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp;
+
+ memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
+ sess_mp = rte_mempool_from_obj(sess_priv);
+
+ set_sym_session_private_data(sess, index, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+}
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct bcmfs_sym_session);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
new file mode 100644
index 000000000..43deedcf8
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_SESSION_H_
+#define _BCMFS_SYM_SESSION_H_
+
+#include <stdbool.h>
+#include <rte_crypto.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* BCMFS_SYM operation order mode enumerator */
+enum bcmfs_sym_chain_order {
+ BCMFS_SYM_CHAIN_ONLY_CIPHER,
+ BCMFS_SYM_CHAIN_ONLY_AUTH,
+ BCMFS_SYM_CHAIN_CIPHER_AUTH,
+ BCMFS_SYM_CHAIN_AUTH_CIPHER,
+ BCMFS_SYM_CHAIN_AEAD,
+ BCMFS_SYM_CHAIN_NOT_SUPPORTED
+};
+
+/* BCMFS_SYM crypto private session structure */
+struct bcmfs_sym_session {
+ enum bcmfs_sym_chain_order chain_order;
+
+ /* Cipher Parameters */
+ struct {
+ enum bcmfs_crypto_cipher_op direction;
+ /* cipher operation direction */
+ enum bcmfs_crypto_cipher_algorithm algo;
+ /* cipher algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ /* key data */
+ size_t length;
+ /* key length in bytes */
+ } key;
+
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+ } cipher;
+
+ /* Authentication Parameters */
+ struct {
+ enum bcmfs_crypto_auth_op operation;
+ /* auth operation generate or verify */
+ enum bcmfs_crypto_auth_algorithm algo;
+ /* auth algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ /* key data */
+ size_t length;
+ /* key length in bytes */
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+ } auth;
+
+ /* aead Parameters */
+ struct {
+ uint16_t aad_length;
+ } aead;
+ bool cipher_first;
+} __rte_cache_aligned;
+
+int
+bcmfs_process_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool);
+
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess);
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op);
+
+#endif /* _BCMFS_SYM_SESSION_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index d9a3d73e9..2e86c733e 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -12,5 +12,7 @@ sources = files(
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
'hw/bcmfs_rm_common.c',
- 'bcmfs_sym_pmd.c'
+ 'bcmfs_sym_pmd.c',
+ 'bcmfs_sym_capabilities.c',
+ 'bcmfs_sym_session.c'
)
--
2.17.1
* [dpdk-dev] [PATCH 0 7/8] crypto/bcmfs: add crypto h/w module
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (5 preceding siblings ...)
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
@ 2020-08-11 14:58 ` Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Add a crypto h/w module to process crypto ops. A crypto op is processed by the
sym_engine module before the crypto request is submitted to the h/w queues.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 +++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 26 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 40 +
drivers/crypto/bcmfs/meson.build | 4 +-
7 files changed, 1498 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
diff --git a/drivers/crypto/bcmfs/bcmfs_sym.c b/drivers/crypto/bcmfs/bcmfs_sym.c
new file mode 100644
index 000000000..8f9415b5e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym.c
@@ -0,0 +1,316 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+
+#include <rte_byteorder.h>
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_engine.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+
+/** Process cipher operation */
+static int
+process_crypto_cipher_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv, key;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+ fsattr_pa(&dst) = rte_pktmbuf_iova(mbuf_dst);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&key) = sess->cipher.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->cipher.key.length;
+
+ rc = bcmfs_crypto_build_cipher_req(req, sess->cipher.algo,
+ sess->cipher.direction, &src,
+ &dst, &key, &iv);
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process auth operation */
+static int
+process_crypto_auth_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, mac, key;
+
+ fsattr_sz(&src) = op->sym->auth.data.length;
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ op->sym->auth.data.offset);
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+
+ if (!sess->auth.operation) {
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+ } else {
+ fsattr_va(&dst) = op->sym->auth.digest.data;
+ fsattr_pa(&dst) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&dst) = sess->auth.digest_length;
+ }
+
+ fsattr_va(&key) = sess->auth.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->auth.key.length;
+
+ /* AES-GMAC uses AES-GCM-128 authenticator */
+ if (sess->auth.algo == BCMFS_CRYPTO_AUTH_AES_GMAC) {
+ struct fsattr iv;
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->auth.iv.offset);
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->auth.iv.length;
+
+ rc = bcmfs_crypto_build_aead_request(req,
+ BCMFS_CRYPTO_CIPHER_NONE,
+ 0,
+ BCMFS_CRYPTO_AUTH_AES_GMAC,
+ sess->auth.operation,
+ &src, NULL, NULL, &key,
+ &iv, NULL,
+ sess->auth.operation ?
+ (&dst) : &(mac),
+ 0);
+ } else {
+ rc = bcmfs_crypto_build_auth_req(req, sess->auth.algo,
+ sess->auth.operation,
+ &src,
+ (sess->auth.operation) ? (&dst) : NULL,
+ (sess->auth.operation) ? NULL : (&mac),
+ &key);
+ }
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process combined/chained mode operation */
+static int
+process_crypto_combined_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0, aad_size = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->cipher.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ aad_size = sym_op->auth.data.length - sym_op->cipher.data.length;
+
+ if (aad_size > 0) {
+ fsattr_sz(&aad) = aad_size;
+ fsattr_va(&aad) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->auth.data.offset);
+ fsattr_pa(&aad) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->auth.data.offset);
+ }
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->cipher.algo,
+ sess->cipher.direction,
+ sess->auth.algo,
+ sess->auth.operation,
+ &src, &dst, &cipher_key,
+ &auth_key, &iv,
+ (aad_size > 0) ? (&aad) : NULL,
+ &mac, sess->cipher_first);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process AEAD operation */
+static int
+process_crypto_aead_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+ enum bcmfs_crypto_cipher_op cipher_op;
+ enum bcmfs_crypto_auth_op auth_op;
+
+ if (sess->cipher.direction) {
+ auth_op = BCMFS_CRYPTO_AUTH_OP_VERIFY;
+ cipher_op = BCMFS_CRYPTO_CIPHER_OP_DECRYPT;
+ } else {
+ auth_op = BCMFS_CRYPTO_AUTH_OP_GENERATE;
+ cipher_op = BCMFS_CRYPTO_CIPHER_OP_ENCRYPT;
+ }
+
+ fsattr_sz(&src) = sym_op->aead.data.length;
+ fsattr_sz(&dst) = sym_op->aead.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->aead.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->aead.digest.data;
+ fsattr_pa(&mac) = op->sym->aead.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ fsattr_va(&aad) = op->sym->aead.aad.data;
+ fsattr_pa(&aad) = op->sym->aead.aad.phys_addr;
+ fsattr_sz(&aad) = sess->aead.aad_length;
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->cipher.algo,
+ cipher_op, sess->auth.algo,
+ auth_op, &src, &dst, &cipher_key,
+ &auth_key, &iv, &aad, &mac,
+ sess->cipher.direction ? 0 : 1);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process crypto operation for mbuf */
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ struct rte_mbuf *msrc, *mdst;
+ int rc = 0;
+
+ msrc = op->sym->m_src;
+ mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ rc = process_crypto_cipher_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ rc = process_crypto_auth_op(op, msrc, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ rc = process_crypto_combined_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ rc = process_crypto_aead_op(op, msrc, mdst, sess, req);
+ break;
+ default:
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ break;
+ }
+
+ return rc;
+}
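For context, bcmfs_process_sym_crypto_op() above reduces to a switch on the session's chain order, with any unknown order flagged as an error on the op. A minimal standalone sketch of that dispatch pattern (the enum values and the dispatch() helper are illustrative stand-ins, not driver API):

```c
#include <assert.h>

/* Illustrative stand-ins for the driver's chain orders and statuses */
enum chain_order { ONLY_CIPHER, ONLY_AUTH, CIPHER_AUTH, AUTH_CIPHER, AEAD, BAD };
enum op_status { NOT_PROCESSED, SUBMITTED, OP_ERROR };

static enum op_status dispatch(enum chain_order order)
{
	enum op_status st = NOT_PROCESSED;

	switch (order) {
	case ONLY_CIPHER:
	case ONLY_AUTH:
	case CIPHER_AUTH:
	case AUTH_CIPHER:
	case AEAD:
		st = SUBMITTED;	/* would call the matching request builder */
		break;
	default:
		st = OP_ERROR;	/* unknown chain order: flag the op */
		break;
	}
	return st;
}
```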
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
index b5657a9bc..8824521dd 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_defs.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -15,6 +15,18 @@
#define BCMFS_MAX_IV_SIZE 16
#define BCMFS_MAX_DIGEST_SIZE 64
+struct bcmfs_sym_session;
+struct bcmfs_sym_request;
+
+/** Crypto Request processing successful. */
+#define BCMFS_SYM_RESPONSE_SUCCESS (0)
+/** Crypto Request processing protocol failure. */
+#define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1)
+/** Crypto Request processing completion failure. */
+#define BCMFS_SYM_RESPONSE_COMPL_ERROR (2)
+/** Crypto Request processing hash tag check error. */
+#define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3)
+
/** Symmetric Cipher Direction */
enum bcmfs_crypto_cipher_op {
/** Encrypt cipher operation */
@@ -167,4 +179,8 @@ enum bcmfs_sym_crypto_class {
BCMFS_CRYPTO_AEAD,
};
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
#endif /* _BCMFS_SYM_DEFS_H_ */
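The BCMFS_SYM_RESPONSE_* codes added above are what a completion handler would translate into per-op statuses, e.g. distinguishing a digest mismatch from a generic engine failure. A hedged sketch of such a mapping (the RESP_*/OP_* names and values below are local stand-ins, not the driver's or DPDK's definitions):

```c
#include <assert.h>

/* Local copies of the response codes, mirroring the values above */
#define RESP_SUCCESS        0
#define RESP_PROTO_FAILURE  1
#define RESP_COMPL_ERROR    2
#define RESP_HASH_TAG_ERROR 3

/* Illustrative op statuses (not DPDK's RTE_CRYPTO_OP_STATUS_* values) */
#define OP_SUCCESS      0
#define OP_AUTH_FAILED  1
#define OP_ERROR        2

static int map_response(int resp)
{
	switch (resp) {
	case RESP_SUCCESS:
		return OP_SUCCESS;
	case RESP_HASH_TAG_ERROR:
		return OP_AUTH_FAILED;	/* digest mismatch on verify */
	default:
		return OP_ERROR;	/* protocol or completion failure */
	}
}
```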
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
new file mode 100644
index 000000000..b8cf3eab9
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
@@ -0,0 +1,994 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_engine.h"
+
+enum spu2_cipher_type {
+ SPU2_CIPHER_TYPE_NONE = 0x0,
+ SPU2_CIPHER_TYPE_AES128 = 0x1,
+ SPU2_CIPHER_TYPE_AES192 = 0x2,
+ SPU2_CIPHER_TYPE_AES256 = 0x3,
+ SPU2_CIPHER_TYPE_DES = 0x4,
+ SPU2_CIPHER_TYPE_3DES = 0x5,
+ SPU2_CIPHER_TYPE_LAST
+};
+
+enum spu2_cipher_mode {
+ SPU2_CIPHER_MODE_ECB = 0x0,
+ SPU2_CIPHER_MODE_CBC = 0x1,
+ SPU2_CIPHER_MODE_CTR = 0x2,
+ SPU2_CIPHER_MODE_CFB = 0x3,
+ SPU2_CIPHER_MODE_OFB = 0x4,
+ SPU2_CIPHER_MODE_XTS = 0x5,
+ SPU2_CIPHER_MODE_CCM = 0x6,
+ SPU2_CIPHER_MODE_GCM = 0x7,
+ SPU2_CIPHER_MODE_LAST
+};
+
+enum spu2_hash_type {
+ SPU2_HASH_TYPE_NONE = 0x0,
+ SPU2_HASH_TYPE_AES128 = 0x1,
+ SPU2_HASH_TYPE_AES192 = 0x2,
+ SPU2_HASH_TYPE_AES256 = 0x3,
+ SPU2_HASH_TYPE_MD5 = 0x6,
+ SPU2_HASH_TYPE_SHA1 = 0x7,
+ SPU2_HASH_TYPE_SHA224 = 0x8,
+ SPU2_HASH_TYPE_SHA256 = 0x9,
+ SPU2_HASH_TYPE_SHA384 = 0xa,
+ SPU2_HASH_TYPE_SHA512 = 0xb,
+ SPU2_HASH_TYPE_SHA512_224 = 0xc,
+ SPU2_HASH_TYPE_SHA512_256 = 0xd,
+ SPU2_HASH_TYPE_SHA3_224 = 0xe,
+ SPU2_HASH_TYPE_SHA3_256 = 0xf,
+ SPU2_HASH_TYPE_SHA3_384 = 0x10,
+ SPU2_HASH_TYPE_SHA3_512 = 0x11,
+ SPU2_HASH_TYPE_LAST
+};
+
+enum spu2_hash_mode {
+ SPU2_HASH_MODE_CMAC = 0x0,
+ SPU2_HASH_MODE_CBC_MAC = 0x1,
+ SPU2_HASH_MODE_XCBC_MAC = 0x2,
+ SPU2_HASH_MODE_HMAC = 0x3,
+ SPU2_HASH_MODE_RABIN = 0x4,
+ SPU2_HASH_MODE_CCM = 0x5,
+ SPU2_HASH_MODE_GCM = 0x6,
+ SPU2_HASH_MODE_RESERVED = 0x7,
+ SPU2_HASH_MODE_LAST
+};
+
+enum spu2_proto_sel {
+ SPU2_PROTO_RESV = 0,
+ SPU2_MACSEC_SECTAG8_ECB = 1,
+ SPU2_MACSEC_SECTAG8_SCB = 2,
+ SPU2_MACSEC_SECTAG16 = 3,
+ SPU2_MACSEC_SECTAG16_8_XPN = 4,
+ SPU2_IPSEC = 5,
+ SPU2_IPSEC_ESN = 6,
+ SPU2_TLS_CIPHER = 7,
+ SPU2_TLS_AEAD = 8,
+ SPU2_DTLS_CIPHER = 9,
+ SPU2_DTLS_AEAD = 10
+};
+
+/* SPU2 response size */
+#define SPU2_STATUS_LEN 2
+
+/* Metadata settings in response */
+enum spu2_ret_md_opts {
+ SPU2_RET_NO_MD = 0, /* return no metadata */
+ SPU2_RET_FMD_OMD = 1, /* return both FMD and OMD */
+ SPU2_RET_FMD_ONLY = 2, /* return only FMD */
+ SPU2_RET_FMD_OMD_IV = 3, /* return FMD and OMD with just IVs */
+};
+
+/* FMD ctrl0 field masks */
+#define SPU2_CIPH_ENCRYPT_EN 0x1 /* 0: decrypt, 1: encrypt */
+#define SPU2_CIPH_TYPE_SHIFT 4
+#define SPU2_CIPH_MODE 0xF00 /* one of spu2_cipher_mode */
+#define SPU2_CIPH_MODE_SHIFT 8
+#define SPU2_CFB_MASK 0x7000 /* cipher feedback mask */
+#define SPU2_CFB_MASK_SHIFT 12
+#define SPU2_PROTO_SEL 0xF00000 /* MACsec, IPsec, TLS... */
+#define SPU2_PROTO_SEL_SHIFT 20
+#define SPU2_HASH_FIRST 0x1000000 /* 1: hash input is input pkt
+ * data
+ */
+#define SPU2_CHK_TAG 0x2000000 /* 1: check digest provided */
+#define SPU2_HASH_TYPE 0x1F0000000 /* one of spu2_hash_type */
+#define SPU2_HASH_TYPE_SHIFT 28
+#define SPU2_HASH_MODE 0xF000000000 /* one of spu2_hash_mode */
+#define SPU2_HASH_MODE_SHIFT 36
+#define SPU2_CIPH_PAD_EN 0x100000000000 /* 1: Add pad to end of payload for
+ * enc
+ */
+#define SPU2_CIPH_PAD 0xFF000000000000 /* cipher pad value */
+#define SPU2_CIPH_PAD_SHIFT 48
+
+/* FMD ctrl1 field masks */
+#define SPU2_TAG_LOC 0x1 /* 1: end of payload, 0: undef */
+#define SPU2_HAS_FR_DATA 0x2 /* 1: msg has frame data */
+#define SPU2_HAS_AAD1 0x4 /* 1: msg has AAD1 field */
+#define SPU2_HAS_NAAD 0x8 /* 1: msg has NAAD field */
+#define SPU2_HAS_AAD2 0x10 /* 1: msg has AAD2 field */
+#define SPU2_HAS_ESN 0x20 /* 1: msg has ESN field */
+#define SPU2_HASH_KEY_LEN 0xFF00 /* len of hash key in bytes.
+ * HMAC only.
+ */
+#define SPU2_HASH_KEY_LEN_SHIFT 8
+#define SPU2_CIPH_KEY_LEN 0xFF00000 /* len of cipher key in bytes */
+#define SPU2_CIPH_KEY_LEN_SHIFT 20
+#define SPU2_GENIV 0x10000000 /* 1: hw generates IV */
+#define SPU2_HASH_IV 0x20000000 /* 1: IV incl in hash */
+#define SPU2_RET_IV 0x40000000 /* 1: return IV in output msg
+ * b4 payload
+ */
+#define SPU2_RET_IV_LEN 0xF00000000 /* length in bytes of IV returned.
+ * 0 = 16 bytes
+ */
+#define SPU2_RET_IV_LEN_SHIFT 32
+#define SPU2_IV_OFFSET 0xF000000000 /* gen IV offset */
+#define SPU2_IV_OFFSET_SHIFT 36
+#define SPU2_IV_LEN 0x1F0000000000 /* length of input IV in bytes */
+#define SPU2_IV_LEN_SHIFT 40
+#define SPU2_HASH_TAG_LEN 0x7F000000000000 /* hash tag length in bytes */
+#define SPU2_HASH_TAG_LEN_SHIFT 48
+#define SPU2_RETURN_MD 0x300000000000000 /* return metadata */
+#define SPU2_RETURN_MD_SHIFT 56
+#define SPU2_RETURN_FD 0x400000000000000
+#define SPU2_RETURN_AAD1 0x800000000000000
+#define SPU2_RETURN_NAAD 0x1000000000000000
+#define SPU2_RETURN_AAD2 0x2000000000000000
+#define SPU2_RETURN_PAY 0x4000000000000000 /* return payload */
+
+/* FMD ctrl2 field masks */
+#define SPU2_AAD1_OFFSET 0xFFF /* byte offset of AAD1 field */
+#define SPU2_AAD1_LEN 0xFF000 /* length of AAD1 in bytes */
+#define SPU2_AAD1_LEN_SHIFT 12
+#define SPU2_AAD2_OFFSET 0xFFF00000 /* byte offset of AAD2 field */
+#define SPU2_AAD2_OFFSET_SHIFT 20
+#define SPU2_PL_OFFSET 0xFFFFFFFF00000000 /* payload offset from AAD2 */
+#define SPU2_PL_OFFSET_SHIFT 32
+
+/* FMD ctrl3 field masks */
+#define SPU2_PL_LEN 0xFFFFFFFF /* payload length in bytes */
+#define SPU2_TLS_LEN 0xFFFF00000000 /* TLS encrypt: cipher len
+ * TLS decrypt: compressed len
+ */
+#define SPU2_TLS_LEN_SHIFT 32
+
+/*
+ * Max value that can be represented in the Payload Length field of the
+ * ctrl3 word of FMD.
+ */
+#define SPU2_MAX_PAYLOAD SPU2_PL_LEN
+
+#define SPU2_VAL_NONE 0
+
+/* CCM B_0 field definitions, common for SPU-M and SPU2 */
+#define CCM_B0_ADATA 0x40
+#define CCM_B0_ADATA_SHIFT 6
+#define CCM_B0_M_PRIME 0x38
+#define CCM_B0_M_PRIME_SHIFT 3
+#define CCM_B0_L_PRIME 0x07
+#define CCM_B0_L_PRIME_SHIFT 0
+#define CCM_ESP_L_VALUE 4
+
+static int
+spu2_cipher_type_xlate(enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ enum spu2_cipher_type *spu2_type,
+ struct fsattr *key)
+{
+ int ret = 0;
+ int key_size = fsattr_sz(key);
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_XTS)
+ key_size = key_size / 2;
+
+ switch (key_size) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static int
+spu2_hash_xlate(enum bcmfs_crypto_auth_algorithm auth_alg,
+ struct fsattr *key,
+ enum spu2_hash_type *spu2_type,
+ enum spu2_hash_mode *spu2_mode)
+{
+ *spu2_mode = 0;
+
+ switch (auth_alg) {
+ case BCMFS_CRYPTO_AUTH_NONE:
+ *spu2_type = SPU2_HASH_TYPE_NONE;
+ break;
+ case BCMFS_CRYPTO_AUTH_MD5:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ break;
+ case BCMFS_CRYPTO_AUTH_MD5_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA1:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA1_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA224:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA256:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA384:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA512:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_224:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_256:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_384:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_512:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_XCBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_XCBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CMAC:
+ *spu2_mode = SPU2_HASH_MODE_CMAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_GMAC:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_CBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_GCM:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CCM:
+ *spu2_mode = SPU2_HASH_MODE_CCM;
+ break;
+ }
+
+ return 0;
+}
+
+static int
+spu2_cipher_xlate(enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ struct fsattr *key,
+ enum spu2_cipher_type *spu2_type,
+ enum spu2_cipher_mode *spu2_mode)
+{
+ int ret = 0;
+
+ switch (cipher_alg) {
+ case BCMFS_CRYPTO_CIPHER_NONE:
+ *spu2_type = SPU2_CIPHER_TYPE_NONE;
+ break;
+ case BCMFS_CRYPTO_CIPHER_DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_3DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_3DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CTR:
+ *spu2_mode = SPU2_CIPHER_MODE_CTR;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CCM:
+ *spu2_mode = SPU2_CIPHER_MODE_CCM;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_GCM:
+ *spu2_mode = SPU2_CIPHER_MODE_GCM;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_XTS:
+ *spu2_mode = SPU2_CIPHER_MODE_XTS;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_OFB:
+ *spu2_mode = SPU2_CIPHER_MODE_OFB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ }
+
+ return ret;
+}
+
+static void
+spu2_fmd_ctrl0_write(struct spu2_fmd *fmd,
+ bool is_inbound, bool auth_first,
+ enum spu2_proto_sel protocol,
+ enum spu2_cipher_type cipher_type,
+ enum spu2_cipher_mode cipher_mode,
+ enum spu2_hash_type auth_type,
+ enum spu2_hash_mode auth_mode)
+{
+ uint64_t ctrl0 = 0;
+
+ if (cipher_type != SPU2_CIPHER_TYPE_NONE && !is_inbound)
+ ctrl0 |= SPU2_CIPH_ENCRYPT_EN;
+
+ ctrl0 |= ((uint64_t)cipher_type << SPU2_CIPH_TYPE_SHIFT) |
+ ((uint64_t)cipher_mode << SPU2_CIPH_MODE_SHIFT);
+
+ if (protocol != SPU2_PROTO_RESV)
+ ctrl0 |= (uint64_t)protocol << SPU2_PROTO_SEL_SHIFT;
+
+ if (auth_first)
+ ctrl0 |= SPU2_HASH_FIRST;
+
+ if (is_inbound && auth_type != SPU2_HASH_TYPE_NONE)
+ ctrl0 |= SPU2_CHK_TAG;
+
+ ctrl0 |= (((uint64_t)auth_type << SPU2_HASH_TYPE_SHIFT) |
+ ((uint64_t)auth_mode << SPU2_HASH_MODE_SHIFT));
+
+ fmd->ctrl0 = ctrl0;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl0:", &fmd->ctrl0, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl1_write(struct spu2_fmd *fmd, bool is_inbound,
+ uint64_t assoc_size, uint64_t auth_key_len,
+ uint64_t cipher_key_len, bool gen_iv, bool hash_iv,
+ bool return_iv, uint64_t ret_iv_len,
+ uint64_t ret_iv_offset, uint64_t cipher_iv_len,
+ uint64_t digest_size, bool return_payload, bool return_md)
+{
+ uint64_t ctrl1 = 0;
+
+ if (is_inbound && digest_size != 0)
+ ctrl1 |= SPU2_TAG_LOC;
+
+ if (assoc_size != 0)
+ ctrl1 |= SPU2_HAS_AAD2;
+
+ if (auth_key_len != 0)
+ ctrl1 |= ((auth_key_len << SPU2_HASH_KEY_LEN_SHIFT) &
+ SPU2_HASH_KEY_LEN);
+
+ if (cipher_key_len != 0)
+ ctrl1 |= ((cipher_key_len << SPU2_CIPH_KEY_LEN_SHIFT) &
+ SPU2_CIPH_KEY_LEN);
+
+ if (gen_iv)
+ ctrl1 |= SPU2_GENIV;
+
+ if (hash_iv)
+ ctrl1 |= SPU2_HASH_IV;
+
+ if (return_iv) {
+ ctrl1 |= SPU2_RET_IV;
+ ctrl1 |= ret_iv_len << SPU2_RET_IV_LEN_SHIFT;
+ ctrl1 |= ret_iv_offset << SPU2_IV_OFFSET_SHIFT;
+ }
+
+ ctrl1 |= ((cipher_iv_len << SPU2_IV_LEN_SHIFT) & SPU2_IV_LEN);
+
+ if (digest_size != 0) {
+ ctrl1 |= ((digest_size << SPU2_HASH_TAG_LEN_SHIFT) &
+ SPU2_HASH_TAG_LEN);
+ }
+
+	/*
+	 * Ask for the output packet to include FMD; there is no need
+	 * to return keys and IVs in OMD.
+	 */
+ if (return_md)
+ ctrl1 |= ((uint64_t)SPU2_RET_FMD_ONLY << SPU2_RETURN_MD_SHIFT);
+ else
+ ctrl1 |= ((uint64_t)SPU2_RET_NO_MD << SPU2_RETURN_MD_SHIFT);
+
+ /* Crypto API does not get assoc data back. So no need for AAD2. */
+
+ if (return_payload)
+ ctrl1 |= SPU2_RETURN_PAY;
+
+ fmd->ctrl1 = ctrl1;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl1:", &fmd->ctrl1, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl2_write(struct spu2_fmd *fmd, uint64_t cipher_offset,
+ uint64_t auth_key_len __rte_unused,
+ uint64_t auth_iv_len __rte_unused,
+ uint64_t cipher_key_len __rte_unused,
+ uint64_t cipher_iv_len __rte_unused)
+{
+ uint64_t aad1_offset;
+ uint64_t aad2_offset;
+ uint16_t aad1_len = 0;
+ uint64_t payload_offset;
+
+	/* AAD1 offset is from the start of FD; FD length is always 0. */
+ aad1_offset = 0;
+
+ aad2_offset = aad1_offset;
+ payload_offset = cipher_offset;
+ fmd->ctrl2 = aad1_offset |
+ (aad1_len << SPU2_AAD1_LEN_SHIFT) |
+ (aad2_offset << SPU2_AAD2_OFFSET_SHIFT) |
+ (payload_offset << SPU2_PL_OFFSET_SHIFT);
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl2:", &fmd->ctrl2, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl3_write(struct spu2_fmd *fmd, uint64_t payload_len)
+{
+ fmd->ctrl3 = payload_len & SPU2_PL_LEN;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl3:", &fmd->ctrl3, sizeof(uint64_t));
+#endif
+}
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_auth_algorithm a_alg,
+ enum bcmfs_crypto_auth_op auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *auth_key)
+{
+ int ret;
+ uint64_t dst_size;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ enum spu2_hash_mode spu2_auth_mode;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ? fsattr_sz(auth_key) : 0;
+ bool is_inbound = (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+	/* at least one of dst or mac must be non-NULL */
+ if (dst == NULL && mac == NULL)
+ return -EINVAL;
+
+ dst_size = (auth_op == BCMFS_CRYPTO_AUTH_OP_GENERATE) ?
+ fsattr_sz(dst) : fsattr_sz(mac);
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(a_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_PROTO_RESV,
+ SPU2_VAL_NONE, spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, SPU2_VAL_NONE,
+ auth_ksize, SPU2_VAL_NONE, false,
+ false, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE,
+ dst_size, SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key, fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+	/*
+	 * For an authentication verify operation, pass the input MAC
+	 * data to the SPU2 engine.
+	 */
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY && mac != NULL) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(mac);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(mac);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains actual output from SPU2 and
+ * the status packet, so the dsts_count is always 2 below.
+ */
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_GENERATE) {
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+ } else {
+		/*
+		 * For an authentication verify operation, provide a dummy
+		 * location for the SPU2 engine to write the hash, since
+		 * SPU2 generates the hash even on verify.
+		 */
+ sreq->msgs.dsts_addr[0] = sreq->dptr;
+ sreq->msgs.dsts_len[0] = fsattr_sz(mac);
+ }
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_cipher_algorithm calgo,
+ enum bcmfs_crypto_cipher_op cipher_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *iv)
+{
+ int ret = 0;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ unsigned int xts_keylen;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ bool is_inbound = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_DECRYPT);
+
+ if (src == NULL || dst == NULL || iv == NULL)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(calgo, cipher_key,
+ &spu2_ciph_type, &spu2_ciph_mode);
+ if (ret)
+ return -EINVAL;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, spu2_ciph_type, spu2_ciph_mode,
+ SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl1_write(fmd, SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(cipher_key), false, false,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(iv), SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE);
+
+ /* Nothing for FMD2 */
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ if (calgo == BCMFS_CRYPTO_CIPHER_AES_XTS) {
+ xts_keylen = fsattr_sz(cipher_key) / 2;
+ memcpy(sreq->cipher_key,
+ (uint8_t *)fsattr_va(cipher_key) + xts_keylen,
+ xts_keylen);
+ memcpy(sreq->cipher_key + xts_keylen,
+ fsattr_va(cipher_key), xts_keylen);
+ } else {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+ }
+
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv,
+ fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(iv);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+ sreq->msgs.srcs_count = src_index;
+
+	/*
+	 * Output packet contains actual output from SPU2 and
+	 * the status packet, so the dsts_count is always 2 below.
+	 */
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+static void
+bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf,
+ unsigned int *ivlen, bool is_esp)
+{
+ int L; /* size of length field, in bytes */
+
+	/*
+	 * In RFC 4309 (ESP) mode, L is fixed at 4 bytes; otherwise the
+	 * first byte of the IV carries (L - 1) in its bottom 3 bits,
+	 * per RFC 3610.
+	 */
+ if (is_esp)
+ L = CCM_ESP_L_VALUE;
+ else
+ L = ((ivbuf[0] & CCM_B0_L_PRIME) >>
+ CCM_B0_L_PRIME_SHIFT) + 1;
+
+	/* SPU2 expects neither the length bytes nor the first (flags) byte */
+ *ivlen -= (1 + L);
+ memmove(ivbuf, &ivbuf[1], *ivlen);
+}
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ enum bcmfs_crypto_cipher_op cipher_op,
+ enum bcmfs_crypto_auth_algorithm auth_alg,
+ enum bcmfs_crypto_auth_op auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first)
+{
+ int ret = 0;
+ int src_index = 0;
+ int dst_index = 0;
+	bool auth_first = false;
+ struct spu2_fmd *fmd;
+ unsigned int payload_len;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ?
+ fsattr_sz(auth_key) : 0;
+ uint64_t cipher_ksize = (cipher_key != NULL) ?
+ fsattr_sz(cipher_key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ bool is_inbound = (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(auth_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(cipher_alg, cipher_key, &spu2_ciph_type,
+ &spu2_ciph_mode);
+ if (ret) {
+ BCMFS_DP_LOG(ERR, "cipher xlate error");
+ return -EINVAL;
+ }
+
+ auth_first = cipher_first ? 0 : 1;
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_GCM) {
+ spu2_auth_type = spu2_ciph_type;
+		/*
+		 * SPU2 needs 12 bytes of IV in total, i.e. an 8-byte IV
+		 * (random number) plus 4 bytes of salt.
+		 */
+ if (fsattr_sz(iv) > 12)
+ iv_size = 12;
+
+		/*
+		 * On SPU2, AES-GCM runs cipher first on encrypt and
+		 * auth first on decrypt.
+		 */
+
+ auth_first = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ 0 : 1;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_CCM) {
+ spu2_auth_type = spu2_ciph_type;
+ if (iv != NULL) {
+ memcpy(sreq->iv, fsattr_va(iv),
+ fsattr_sz(iv));
+ iv_size = fsattr_sz(iv);
+ bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false);
+ }
+
+		/* The opposite for CCM: auth first on encrypt */
+ auth_first = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ 1 : 0;
+ }
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, auth_ksize,
+ cipher_ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, auth_ksize, 0,
+ cipher_ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key,
+ fsattr_va(auth_key), fsattr_sz(auth_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key),
+ fsattr_sz(cipher_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+		BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY) {
+		/*
+		 * On decryption the SPU2 engine still generates digest
+		 * data, but the application does not need it, so program
+		 * a dummy location to capture it.
+		 */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ sreq->dptr;
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ fsattr_pa(digest);
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
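The CCM IV fix-up in bcmfs_crypto_ccm_update_iv() above can be exercised in isolation: per RFC 3610 the low 3 bits of the B0 flags byte encode (L - 1), and the SPU2 engine is fed the IV with the flags byte and the L trailing length bytes stripped. A standalone sketch (constants copied locally for illustration):

```c
#include <assert.h>
#include <string.h>

/* Local copies of the CCM B_0 L' field mask/shift used above */
#define B0_L_PRIME       0x07
#define B0_L_PRIME_SHIFT 0

/* Strip the flags byte and the L length bytes from a CCM B_0-style IV */
static void ccm_update_iv(unsigned char *ivbuf, unsigned int *ivlen)
{
	int L = ((ivbuf[0] & B0_L_PRIME) >> B0_L_PRIME_SHIFT) + 1;

	*ivlen -= (unsigned int)(1 + L);	/* drop flags byte + L bytes */
	memmove(ivbuf, &ivbuf[1], *ivlen);
}
```

For example, a 16-byte IV whose first byte is 0x01 encodes L = 2, so 3 bytes are stripped and 13 remain.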
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
new file mode 100644
index 000000000..29cfb4dc2
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_ENGINE_H_
+#define _BCMFS_SYM_ENGINE_H_
+
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* structure to hold an element's attributes */
+struct fsattr {
+ void *va;
+ uint64_t pa;
+ uint64_t sz;
+};
+
+#define fsattr_va(__ptr) ((__ptr)->va)
+#define fsattr_pa(__ptr) ((__ptr)->pa)
+#define fsattr_sz(__ptr) ((__ptr)->sz)
+
+/*
+ * Macros for Crypto h/w constraints
+ */
+
+#define BCMFS_CRYPTO_AES_BLOCK_SIZE 16
+#define BCMFS_CRYPTO_AES_MIN_KEY_SIZE 16
+#define BCMFS_CRYPTO_AES_MAX_KEY_SIZE 32
+
+#define BCMFS_CRYPTO_DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_DES_KEY_SIZE 8
+
+#define BCMFS_CRYPTO_3DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_3DES_KEY_SIZE (3 * 8)
+
+#define BCMFS_CRYPTO_MD5_DIGEST_SIZE 16
+#define BCMFS_CRYPTO_MD5_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA1_DIGEST_SIZE 20
+#define BCMFS_CRYPTO_SHA1_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA224_DIGEST_SIZE 28
+#define BCMFS_CRYPTO_SHA224_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA256_DIGEST_SIZE 32
+#define BCMFS_CRYPTO_SHA256_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA384_DIGEST_SIZE 48
+#define BCMFS_CRYPTO_SHA384_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA512_DIGEST_SIZE 64
+#define BCMFS_CRYPTO_SHA512_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE (224 / 8)
+#define BCMFS_CRYPTO_SHA3_224_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE (256 / 8)
+#define BCMFS_CRYPTO_SHA3_256_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE (384 / 8)
+#define BCMFS_CRYPTO_SHA3_384_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE (512 / 8)
+#define BCMFS_CRYPTO_SHA3_512_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE)
+
+enum bcmfs_crypto_aes_cipher_key {
+ BCMFS_CRYPTO_AES128 = 16,
+ BCMFS_CRYPTO_AES192 = 24,
+ BCMFS_CRYPTO_AES256 = 32,
+};
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_cipher_algorithm c_algo,
+ enum bcmfs_crypto_cipher_op cop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv);
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_auth_algorithm a_algo,
+ enum bcmfs_crypto_auth_op aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *key);
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_cipher_algorithm c_algo,
+ enum bcmfs_crypto_cipher_op cop,
+ enum bcmfs_crypto_auth_algorithm a_algo,
+ enum bcmfs_crypto_auth_op aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first);
+
+#endif /* _BCMFS_SYM_ENGINE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 381ca8ea4..568797b4f 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -132,6 +132,12 @@ static void
spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
{
memset(sr, 0, sizeof(*sr));
+ sr->fptr = iova;
+ sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key);
+ sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key);
+ sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv);
+ sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest);
+ sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp);
}
static void
@@ -244,6 +250,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
uint16_t nb_ops)
{
int i, j;
+ int retval;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
struct bcmfs_sym_session *sess;
@@ -273,6 +280,11 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
/* save context */
qp->infl_msgs[i] = &sreq->msgs;
qp->infl_msgs[i]->ctx = (void *)sreq;
+
+ /* pre-process the request for crypto h/w acceleration */
+ retval = bcmfs_process_sym_crypto_op(ops[i], sess, sreq);
+ if (unlikely(retval < 0))
+ goto enqueue_err;
}
/* Send burst request to hw QP */
enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
@@ -289,6 +301,17 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
return enq;
}
+static void bcmfs_sym_set_request_status(struct rte_crypto_op *op,
+ struct bcmfs_sym_request *out)
+{
+ if (*out->resp == BCMFS_SYM_RESPONSE_SUCCESS)
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ else if (*out->resp == BCMFS_SYM_RESPONSE_HASH_TAG_ERROR)
+ op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+ else
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static uint16_t
bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
struct rte_crypto_op **ops,
@@ -308,6 +331,9 @@ bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
for (i = 0; i < deq; i++) {
sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+ /* set the status based on the response from the crypto h/w */
+ bcmfs_sym_set_request_status(sreq->op, sreq);
+
ops[pkts++] = sreq->op;
rte_mempool_put(qp->sr_mp, sreq);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
index 0f0b051f1..e53c50adc 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_req.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -6,13 +6,53 @@
#ifndef _BCMFS_SYM_REQ_H_
#define _BCMFS_SYM_REQ_H_
+#include <rte_cryptodev.h>
+
#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+
+/* Fixed SPU2 Metadata */
+struct spu2_fmd {
+ uint64_t ctrl0;
+ uint64_t ctrl1;
+ uint64_t ctrl2;
+ uint64_t ctrl3;
+};
/*
* This structure holds the supporting data required to process a
* rte_crypto_op
*/
struct bcmfs_sym_request {
+ /* spu2 engine related data */
+ struct spu2_fmd fmd;
+ /* cipher key */
+ uint8_t cipher_key[BCMFS_MAX_KEY_SIZE];
+ /* auth key */
+ uint8_t auth_key[BCMFS_MAX_KEY_SIZE];
+ /* initialization vector (IV) */
+ uint8_t iv[BCMFS_MAX_IV_SIZE];
+ /* digest data output from crypto h/w */
+ uint8_t digest[BCMFS_MAX_DIGEST_SIZE];
+ /* 2-Bytes response from crypto h/w */
+ uint8_t resp[2];
+ /*
+ * IOVAs for the members above, in the same order
+ */
+ /* iova for fmd */
+ rte_iova_t fptr;
+ /* iova for cipher key */
+ rte_iova_t cptr;
+ /* iova for auth key */
+ rte_iova_t aptr;
+ /* iova for iv key */
+ rte_iova_t iptr;
+ /* iova for digest */
+ rte_iova_t dptr;
+ /* iova for response */
+ rte_iova_t rptr;
+
/* bcmfs qp message for h/w queues to process */
struct bcmfs_qp_message msgs;
/* crypto op */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 2e86c733e..7aa0f05db 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -14,5 +14,7 @@ sources = files(
'hw/bcmfs_rm_common.c',
'bcmfs_sym_pmd.c',
'bcmfs_sym_capabilities.c',
- 'bcmfs_sym_session.c'
+ 'bcmfs_sym_session.c',
+ 'bcmfs_sym.c',
+ 'bcmfs_sym_engine.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH 0 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (6 preceding siblings ...)
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
@ 2020-08-11 14:58 ` Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-11 14:58 UTC (permalink / raw)
To: dev, akhil.goyal, ajit.khaparde; +Cc: vikram.prakash, Vikas Gupta
Add test suites for the algorithms supported by the BCMFS crypto PMD.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
---
app/test/test_cryptodev.c | 261 ++++++++++++++++++++++++++++++++++++++
app/test/test_cryptodev.h | 1 +
2 files changed, 262 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..6e7d8471c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -12681,6 +12681,250 @@ static struct unit_test_suite cryptodev_nitrox_testsuite = {
}
};
+static struct unit_test_suite cryptodev_bcmfs_testsuite = {
+ .suite_name = "Crypto BCMFS Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_device_configure_invalid_dev_id),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_device_configure_invalid_queue_pair_ids),
+
+ TEST_CASE_ST(ut_setup, ut_teardown, test_AES_cipheronly_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_AES_chain_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_3DES_cipheronly_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_3DES_chain_all),
+
+ /** AES GCM Authenticated Encryption */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_8),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_J0_authenticated_encryption_test_case_1),
+
+ /** AES GCM Authenticated Decryption */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_8),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_J0_authenticated_decryption_test_case_1),
+
+ /** AES GCM Authenticated Encryption 192 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_7),
+
+ /** AES GCM Authenticated Decryption 192 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_7),
+
+ /** AES GCM Authenticated Encryption 256 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_7),
+
+ /** AES GCM Authenticated Decryption 256 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_7),
+
+ /** AES GCM Authenticated Encryption big aad size */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_aad_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_aad_2),
+
+ /** AES GCM Authenticated Decryption big aad size */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_aad_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_aad_2),
+
+ /** Out of place tests */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_oop_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_oop_test_case_1),
+
+ /** AES GMAC Authentication */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_4),
+
+ /** Negative tests */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ authentication_verify_HMAC_SHA1_fail_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ authentication_verify_HMAC_SHA1_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ authentication_verify_AES128_GMAC_fail_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ authentication_verify_AES128_GMAC_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decryption_AES128CBC_HMAC_SHA1_fail_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt),
+
+ /** AES GMAC Authentication */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_4),
+
+ /** HMAC_MD5 Authentication */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_generate_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_verify_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_generate_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_verify_case_2),
+
+ /** Mixed CIPHER + HASH algorithms */
+ /** AUTH AES CMAC + CIPHER AES CTR */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_aes_cmac_aes_ctr_digest_enc_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_aes_cmac_aes_ctr_digest_enc_test_case_1_oop),
+
+ /** AUTH NULL + CIPHER AES CTR */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_auth_null_cipher_aes_ctr_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_verify_auth_null_cipher_aes_ctr_test_case_1),
+
+ /** AUTH AES CMAC + CIPHER NULL */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_auth_aes_cmac_cipher_null_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_verify_auth_aes_cmac_cipher_null_test_case_1),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
static int
test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
{
@@ -13041,6 +13285,22 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_cryptodev_bcmfs(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_BCMFS_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "BCMFS PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_BCMFS is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
+ return unit_test_suite_runner(&cryptodev_bcmfs_testsuite);
+}
+
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
@@ -13063,3 +13323,4 @@ REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
+REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..c58126368 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -70,6 +70,7 @@
#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
/**
* Write (spread) data from buffer to mbuf data
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (7 preceding siblings ...)
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-08-12 6:31 ` Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
` (8 more replies)
8 siblings, 9 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: ajit.khaparde, vikram.prakash, Vikas Gupta
Hi,
This patchset contains support for crypto offload on Broadcom’s
Stingray/Stingray2 SoCs having a FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used throughout the patchset.
The patchset progressively adds the major modules as below.
a) Detection of the platform device based on the known registered platforms, and attachment with VFIO.
b) Creation of the cryptodevice.
c) Addition of session handling.
d) Addition of the cryptodevice to the cryptodev test framework.
The patchset has been tested on the above-mentioned SoCs.
Regards,
Vikas
Changes from v0->v1:
Updated the ABI version in map file 'rte_pmd_bcmfs_version.map'
Vikas Gupta (8):
crypto/bcmfs: add BCMFS driver
crypto/bcmfs: add vfio support
crypto/bcmfs: add apis for queue pair management
crypto/bcmfs: add hw queue pair operations
crypto/bcmfs: create a symmetric cryptodev
crypto/bcmfs: add session handling and capabilities
crypto/bcmfs: add crypto h/w module
crypto/bcmfs: add crypto pmd into cryptodev test
MAINTAINERS | 7 +
app/test/test_cryptodev.c | 261 +++++
app/test/test_cryptodev.h | 1 +
config/common_base | 5 +
doc/guides/cryptodevs/bcmfs.rst | 72 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 +
doc/guides/cryptodevs/index.rst | 1 +
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 331 ++++++
drivers/crypto/bcmfs/bcmfs_device.h | 76 ++
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 38 +
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +
drivers/crypto/bcmfs/bcmfs_qp.c | 383 +++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 142 +++
drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 186 ++++
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 426 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 62 ++
drivers/crypto/bcmfs/bcmfs_sym_session.c | 426 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 99 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 94 ++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 742 +++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 ++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 46 +
drivers/crypto/bcmfs/meson.build | 20 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 3 +-
mk/rte.app.mk | 1 +
36 files changed, 6588 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v1 1/8] crypto/bcmfs: add BCMFS driver
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
@ 2020-08-12 6:31 ` Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 2/8] crypto/bcmfs: add vfio support Vikas Gupta
` (7 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add the Broadcom FlexSparc (FS) device creation driver, which registers
as a vdev driver and creates a device. Add logging APIs, supporting
documentation and a MAINTAINERS entry.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
MAINTAINERS | 7 +
config/common_base | 5 +
doc/guides/cryptodevs/bcmfs.rst | 26 ++
doc/guides/cryptodevs/index.rst | 1 +
drivers/crypto/bcmfs/Makefile | 27 ++
drivers/crypto/bcmfs/bcmfs_device.c | 256 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_device.h | 40 +++
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +++
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +++
drivers/crypto/bcmfs/meson.build | 10 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 3 +-
mk/rte.app.mk | 1 +
13 files changed, 450 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 drivers/crypto/bcmfs/Makefile
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 3cd402b34..7c2d7ff1b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1099,6 +1099,13 @@ F: drivers/crypto/zuc/
F: doc/guides/cryptodevs/zuc.rst
F: doc/guides/cryptodevs/features/zuc.ini
+Broadcom FlexSparc
+M: Vikas Gupta <vikas.gupta@broadcom.com>
+M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
+M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+F: drivers/crypto/bcmfs/
+F: doc/guides/cryptodevs/bcmfs.rst
+F: doc/guides/cryptodevs/features/bcmfs.ini
Compression Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index f7a8824f5..21daadcdd 100644
--- a/config/common_base
+++ b/config/common_base
@@ -705,6 +705,11 @@ CONFIG_RTE_LIBRTE_PMD_MVSAM_CRYPTO=n
#
CONFIG_RTE_LIBRTE_PMD_NITROX=y
+#
+# Compile PMD for Broadcom crypto device
+#
+CONFIG_RTE_LIBRTE_PMD_BCMFS=y
+
#
# Compile generic security library
#
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
new file mode 100644
index 000000000..752ce028a
--- /dev/null
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -0,0 +1,26 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2020 Broadcom
+
+Broadcom FlexSparc Crypto Poll Mode Driver
+==========================================
+
+The FlexSparc crypto poll mode driver provides support for offloading
+cryptographic operations to Broadcom SoCs that have a FlexSparc4/FlexSparc5 unit.
+Detailed information about these SoCs can be found at:
+
+* https://www.broadcom.com/
+
+Installation
+------------
+
+For compiling the Broadcom FlexSparc crypto PMD, ensure that the
+``CONFIG_RTE_LIBRTE_PMD_BCMFS`` option is set to ``y`` in the config/common_base file.
+
+* ``CONFIG_RTE_LIBRTE_PMD_BCMFS=y``
+
+Initialization
+--------------
+The BCMFS crypto PMD depends upon the devices present in the path
+/sys/bus/platform/devices/fs<version>/<dev_name> on the platform.
+Each cryptodev PMD instance can be attached to one of the nodes present
+in the mentioned path.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a67ed5a28..5d7e028bd 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -29,3 +29,4 @@ Crypto Device Drivers
qat
virtio
zuc
+ bcmfs
diff --git a/drivers/crypto/bcmfs/Makefile b/drivers/crypto/bcmfs/Makefile
new file mode 100644
index 000000000..781ee6efa
--- /dev/null
+++ b/drivers/crypto/bcmfs/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Broadcom
+# All rights reserved.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_bcmfs.a
+
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/bcmfs
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-y += bcmfs_logs.c
+SRCS-y += bcmfs_device.c
+
+LDLIBS += -lrte_eal -lrte_bus_vdev
+
+EXPORT_MAP := rte_pmd_bcmfs_version.map
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
new file mode 100644
index 000000000..47c776de6
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <dirent.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+
+struct bcmfs_device_attr {
+ const char name[BCMFS_MAX_PATH_LEN];
+ const char suffix[BCMFS_DEV_NAME_LEN];
+ const enum bcmfs_device_type type;
+ const uint32_t offset;
+ const uint32_t version;
+};
+
+/* BCMFS supported devices */
+static struct bcmfs_device_attr dev_table[] = {
+ {
+ .name = "fs4",
+ .suffix = "crypto_mbox",
+ .type = BCMFS_SYM_FS4,
+ .offset = 0,
+ .version = 0x76303031
+ },
+ {
+ .name = "fs5",
+ .suffix = "mbox",
+ .type = BCMFS_SYM_FS5,
+ .offset = 0,
+ .version = 0x76303032
+ },
+ {
+ /* sentinel */
+ }
+};
+
+TAILQ_HEAD(fsdev_list, bcmfs_device);
+static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
+
+static struct bcmfs_device *
+fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
+ char *dirpath,
+ char *devname,
+ enum bcmfs_device_type dev_type __rte_unused)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = calloc(1, sizeof(*fsdev));
+ if (!fsdev)
+ return NULL;
+
+ if (strlen(dirpath) >= sizeof(fsdev->dirname)) {
+ BCMFS_LOG(ERR, "dir path name is too long");
+ goto cleanup;
+ }
+
+ if (strlen(devname) >= sizeof(fsdev->name)) {
+ BCMFS_LOG(ERR, "devname is too long");
+ goto cleanup;
+ }
+
+ strcpy(fsdev->dirname, dirpath);
+ strcpy(fsdev->name, devname);
+
+ fsdev->vdev = vdev;
+
+ TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
+
+ return fsdev;
+
+cleanup:
+ free(fsdev);
+
+ return NULL;
+}
+
+static struct bcmfs_device *
+find_fsdev(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ TAILQ_FOREACH(fsdev, &fsdev_list, next)
+ if (fsdev->vdev == vdev)
+ return fsdev;
+
+ return NULL;
+}
+
+static void
+fsdev_release(struct bcmfs_device *fsdev)
+{
+ if (fsdev == NULL)
+ return;
+
+ TAILQ_REMOVE(&fsdev_list, fsdev, next);
+ free(fsdev);
+}
+
+static int
+cmprator(const void *a, const void *b)
+{
+ const unsigned int x = *(const unsigned int *)a;
+ const unsigned int y = *(const unsigned int *)b;
+
+ /* comparison form avoids overflow of plain subtraction */
+ return (x > y) - (x < y);
+}
+
+static int
+fsdev_find_all_devs(const char *path, const char *search,
+ uint32_t *devs)
+{
+ DIR *dir;
+ struct dirent *entry;
+ int count = 0;
+ char addr[BCMFS_MAX_NODES][BCMFS_MAX_PATH_LEN];
+ int i;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return 0;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (strstr(entry->d_name, search)) {
+ strlcpy(addr[count], entry->d_name,
+ BCMFS_MAX_PATH_LEN);
+ count++;
+ }
+ }
+
+ closedir(dir);
+
+ for (i = 0 ; i < count; i++)
+ devs[i] = (uint32_t)strtoul(addr[i], NULL, 16);
+ /* sort the devices based on IO addresses */
+ qsort(devs, count, sizeof(uint32_t), cmprator);
+
+ return count;
+}
+
+static bool
+fsdev_find_sub_dir(char *path, const char *search, char *output)
+{
+ DIR *dir;
+ struct dirent *entry;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return false;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (!strcmp(entry->d_name, search)) {
+ strlcpy(output, entry->d_name, BCMFS_MAX_PATH_LEN);
+ closedir(dir);
+ return true;
+ }
+ }
+
+ closedir(dir);
+
+ return false;
+}
+
+
+static int
+bcmfs_vdev_probe(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+ char top_dirpath[BCMFS_MAX_PATH_LEN];
+ char sub_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirname[BCMFS_MAX_PATH_LEN];
+ uint32_t fsdev_dev[BCMFS_MAX_NODES];
+ enum bcmfs_device_type dtype;
+ int i = 0;
+ int dev_idx;
+ int count = 0;
+ bool found = false;
+
+ sprintf(top_dirpath, "%s", SYSFS_BCM_PLTFORM_DEVICES);
+ while (strlen(dev_table[i].name)) {
+ found = fsdev_find_sub_dir(top_dirpath,
+ dev_table[i].name,
+ sub_dirpath);
+ if (found)
+ break;
+ i++;
+ }
+ if (!found) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ dev_idx = i;
+ dtype = dev_table[i].type;
+
+ snprintf(out_dirpath, sizeof(out_dirpath), "%s/%s",
+ top_dirpath, sub_dirpath);
+ count = fsdev_find_all_devs(out_dirpath,
+ dev_table[dev_idx].suffix,
+ fsdev_dev);
+ if (!count) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ i = 0;
+ while (count) {
+ /* format the device name present in the path */
+ snprintf(out_dirname, sizeof(out_dirname), "%x.%s",
+ fsdev_dev[i], dev_table[dev_idx].suffix);
+ fsdev = fsdev_allocate_one_dev(vdev, out_dirpath,
+ out_dirname, dtype);
+ if (!fsdev) {
+ count--;
+ i++;
+ continue;
+ }
+ break;
+ }
+ if (fsdev == NULL) {
+ BCMFS_LOG(ERR, "All supported devs busy");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int
+bcmfs_vdev_remove(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = find_fsdev(vdev);
+ if (fsdev == NULL)
+ return -ENODEV;
+
+ fsdev_release(fsdev);
+ return 0;
+}
+
+/* Register with vdev */
+static struct rte_vdev_driver rte_bcmfs_pmd = {
+ .probe = bcmfs_vdev_probe,
+ .remove = bcmfs_vdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(bcmfs_pmd,
+ rte_bcmfs_pmd);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
new file mode 100644
index 000000000..4b0c6d3ca
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_H_
+#define _BCMFS_DEV_H_
+
+#include <sys/queue.h>
+
+#include <rte_bus_vdev.h>
+
+#include "bcmfs_logs.h"
+
+/* max number of dev nodes */
+#define BCMFS_MAX_NODES 4
+#define BCMFS_MAX_PATH_LEN 512
+#define BCMFS_DEV_NAME_LEN 64
+
+/* Path for BCM-Platform device directory */
+#define SYSFS_BCM_PLTFORM_DEVICES "/sys/bus/platform/devices"
+
+/* Supported devices */
+enum bcmfs_device_type {
+ BCMFS_SYM_FS4,
+ BCMFS_SYM_FS5,
+ BCMFS_UNKNOWN
+};
+
+struct bcmfs_device {
+ TAILQ_ENTRY(bcmfs_device) next;
+ /* Directory path for vfio */
+ char dirname[BCMFS_MAX_PATH_LEN];
+ /* BCMFS device name */
+ char name[BCMFS_DEV_NAME_LEN];
+ /* Parent vdev */
+ struct rte_vdev_device *vdev;
+};
+
+#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.c b/drivers/crypto/bcmfs/bcmfs_logs.c
new file mode 100644
index 000000000..86f4ff3b5
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_hexdump.h>
+
+#include "bcmfs_logs.h"
+
+int bcmfs_conf_logtype;
+int bcmfs_dp_logtype;
+
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *title,
+ const void *buf, unsigned int len)
+{
+ if (level > rte_log_get_global_level())
+ return 0;
+ if (level > (uint32_t)(rte_log_get_level(logtype)))
+ return 0;
+
+ rte_hexdump(rte_log_get_stream(), title, buf, len);
+ return 0;
+}
+
+RTE_INIT(bcmfs_device_init_log)
+{
+ /* Configuration and general logs */
+ bcmfs_conf_logtype = rte_log_register("pmd.bcmfs_config");
+ if (bcmfs_conf_logtype >= 0)
+ rte_log_set_level(bcmfs_conf_logtype, RTE_LOG_NOTICE);
+
+ /* data-path logs */
+ bcmfs_dp_logtype = rte_log_register("pmd.bcmfs_fp");
+ if (bcmfs_dp_logtype >= 0)
+ rte_log_set_level(bcmfs_dp_logtype, RTE_LOG_NOTICE);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.h b/drivers/crypto/bcmfs/bcmfs_logs.h
new file mode 100644
index 000000000..c03a49b75
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_LOGS_H_
+#define _BCMFS_LOGS_H_
+
+#include <rte_log.h>
+
+extern int bcmfs_conf_logtype;
+extern int bcmfs_dp_logtype;
+
+#define BCMFS_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_conf_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_dp_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_HEXDUMP_LOG(level, title, buf, len) \
+ bcmfs_hexdump_log(RTE_LOG_ ## level, bcmfs_dp_logtype, title, buf, len)
+
+/**
+ * bcmfs_hexdump_log - Dump out memory in a hex dump format.
+ *
+ * The message is sent to the stream used by the rte_log infrastructure.
+ */
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *heading,
+ const void *buf, unsigned int len);
+
+#endif /* _BCMFS_LOGS_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
new file mode 100644
index 000000000..a4bdd8ee5
--- /dev/null
+++ b/drivers/crypto/bcmfs/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Broadcom
+# All rights reserved.
+#
+
+deps += ['eal', 'bus_vdev']
+sources = files(
+ 'bcmfs_logs.c',
+ 'bcmfs_device.c'
+ )
diff --git a/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
new file mode 100644
index 000000000..299ae632d
--- /dev/null
+++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
@@ -0,0 +1,3 @@
+DPDK_21.0 {
+ local: *;
+};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index a2423507a..8e06d0533 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -23,7 +23,8 @@ drivers = ['aesni_gcm',
'scheduler',
'snow3g',
'virtio',
- 'zuc']
+ 'zuc',
+ 'bcmfs']
std_deps = ['cryptodev'] # cryptodev pulls in all other needed deps
config_flag_fmt = 'RTE_LIBRTE_@0@_PMD'
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 0ce8cf541..5e268f8c0 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -308,6 +308,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_SECURITY),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CAAM_JR) += -lrte_pmd_caam_jr
endif # CONFIG_RTE_LIBRTE_SECURITY
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += -lrte_pmd_virtio_crypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BCMFS) += -lrte_pmd_bcmfs
endif # CONFIG_RTE_LIBRTE_CRYPTODEV
ifeq ($(CONFIG_RTE_LIBRTE_COMPRESSDEV),y)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v1 2/8] crypto/bcmfs: add vfio support
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
@ 2020-08-12 6:31 ` Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
` (6 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add VFIO support for the device.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/Makefile | 1 +
drivers/crypto/bcmfs/bcmfs_device.c | 5 ++
drivers/crypto/bcmfs/bcmfs_device.h | 6 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 94 +++++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 ++++++
drivers/crypto/bcmfs/meson.build | 3 +-
6 files changed, 125 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
diff --git a/drivers/crypto/bcmfs/Makefile b/drivers/crypto/bcmfs/Makefile
index 781ee6efa..5f691f7ba 100644
--- a/drivers/crypto/bcmfs/Makefile
+++ b/drivers/crypto/bcmfs/Makefile
@@ -19,6 +19,7 @@ CFLAGS += -DALLOW_EXPERIMENTAL_API
#
SRCS-y += bcmfs_logs.c
SRCS-y += bcmfs_device.c
+SRCS-y += bcmfs_vfio.c
LDLIBS += -lrte_eal -lrte_bus_vdev
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 47c776de6..3b5cc9e98 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -11,6 +11,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -71,6 +72,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
fsdev->vdev = vdev;
+ /* attach to VFIO */
+ if (bcmfs_attach_vfio(fsdev))
+ goto cleanup;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 4b0c6d3ca..5232bdea5 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -35,6 +35,12 @@ struct bcmfs_device {
char name[BCMFS_DEV_NAME_LEN];
/* Parent vdev */
struct rte_vdev_device *vdev;
+ /* vfio handle */
+ int vfio_dev_fd;
+ /* mapped address */
+ uint8_t *mmap_addr;
+ /* mapped size */
+ uint32_t mmap_size;
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
new file mode 100644
index 000000000..9138f96eb
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <errno.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+
+#include <rte_vfio.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
+
+static int
+vfio_map_dev_obj(const char *path, const char *dev_obj,
+ uint32_t *size, void **addr, int *dev_fd)
+{
+ int32_t ret;
+ struct vfio_group_status status = { .argsz = sizeof(status) };
+
+ struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
+ struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
+
+ ret = rte_vfio_setup_device(path, dev_obj, dev_fd, &d_info);
+ if (ret) {
+ BCMFS_LOG(ERR, "VFIO setup for device failed");
+ return ret;
+ }
+
+ /* get the device region info */
+ ret = ioctl(*dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "Error in VFIO getting REGION_INFO");
+ goto map_failed;
+ }
+
+ *addr = mmap(NULL, reg_info.size,
+ PROT_WRITE | PROT_READ, MAP_SHARED,
+ *dev_fd, reg_info.offset);
+ if (*addr == MAP_FAILED) {
+ BCMFS_LOG(ERR, "Error mapping region (errno = %d)", errno);
+ ret = errno;
+ goto map_failed;
+ }
+ *size = reg_info.size;
+
+ return 0;
+
+map_failed:
+ rte_vfio_release_device(path, dev_obj, *dev_fd);
+
+ return ret;
+}
+
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+ int vfio_dev_fd;
+ void *v_addr = NULL;
+ uint32_t size = 0;
+
+ ret = vfio_map_dev_obj(dev->dirname, dev->name,
+ &size, &v_addr, &vfio_dev_fd);
+ if (ret)
+ return -1;
+
+ dev->mmap_size = size;
+ dev->mmap_addr = v_addr;
+ dev->vfio_dev_fd = vfio_dev_fd;
+
+ return 0;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+
+ if (dev == NULL)
+ return;
+
+ /* unmap the addr */
+ munmap(dev->mmap_addr, dev->mmap_size);
+ /* release the device */
+ ret = rte_vfio_release_device(dev->dirname, dev->name,
+ dev->vfio_dev_fd);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "cannot release device");
+ return;
+ }
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h
new file mode 100644
index 000000000..d0fdf6483
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_VFIO_H_
+#define _BCMFS_VFIO_H_
+
+/* Attach the bcmfs device to vfio */
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev);
+
+/* Release the bcmfs device from vfio */
+void
+bcmfs_release_vfio(struct bcmfs_device *dev);
+
+#endif /* _BCMFS_VFIO_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index a4bdd8ee5..fd39eba20 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -6,5 +6,6 @@
deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
- 'bcmfs_device.c'
+ 'bcmfs_device.c',
+ 'bcmfs_vfio.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v1 3/8] crypto/bcmfs: add apis for queue pair management
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 2/8] crypto/bcmfs: add vfio support Vikas Gupta
@ 2020-08-12 6:31 ` Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
` (5 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair management APIs which will be used by the Crypto device to
manage h/w queues. A bcmfs device structure owns multiple queue pairs,
based on the mapped address range allocated to it.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/Makefile | 28 ---
drivers/crypto/bcmfs/bcmfs_device.c | 4 +
drivers/crypto/bcmfs/bcmfs_device.h | 5 +
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 38 +++
drivers/crypto/bcmfs/bcmfs_qp.c | 345 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 122 ++++++++++
drivers/crypto/bcmfs/meson.build | 3 +-
7 files changed, 516 insertions(+), 29 deletions(-)
delete mode 100644 drivers/crypto/bcmfs/Makefile
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
diff --git a/drivers/crypto/bcmfs/Makefile b/drivers/crypto/bcmfs/Makefile
deleted file mode 100644
index 5f691f7ba..000000000
--- a/drivers/crypto/bcmfs/Makefile
+++ /dev/null
@@ -1,28 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2020 Broadcom
-# All rights reserved.
-#
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-#
-# library name
-#
-LIB = librte_pmd_bcmfs.a
-
-CFLAGS += $(WERROR_FLAGS)
-CFLAGS += -I$(RTE_SDK)/drivers/crypto/bcmfs
-CFLAGS += -DALLOW_EXPERIMENTAL_API
-
-#
-# all source are stored in SRCS-y
-#
-SRCS-y += bcmfs_logs.c
-SRCS-y += bcmfs_device.c
-SRCS-y += bcmfs_vfio.c
-
-LDLIBS += -lrte_eal -lrte_bus_vdev
-
-EXPORT_MAP := rte_pmd_bcmfs_version.map
-
-include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 3b5cc9e98..b475c2933 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -11,6 +11,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
@@ -76,6 +77,9 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
if (bcmfs_attach_vfio(fsdev))
goto cleanup;
+ /* Maximum number of QPs supported */
+ fsdev->max_hw_qps = fsdev->mmap_size / BCMFS_HW_QUEUE_IO_ADDR_LEN;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 5232bdea5..e03ce5b5b 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -11,6 +11,7 @@
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
/* max number of dev nodes */
#define BCMFS_MAX_NODES 4
@@ -41,6 +42,10 @@ struct bcmfs_device {
uint8_t *mmap_addr;
/* mapped size */
uint32_t mmap_size;
+ /* max number of h/w queue pairs detected */
+ uint16_t max_hw_qps;
+ /* current qpairs in use */
+ struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
new file mode 100644
index 000000000..ecb0c09ba
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_DEFS_H_
+#define _BCMFS_RM_DEFS_H_
+
+#include <rte_atomic.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* 32-bit MMIO register write */
+#define FS_MMIO_WRITE32(value, addr) rte_write32_relaxed((value), (addr))
+
+/* 32-bit MMIO register read */
+#define FS_MMIO_READ32(addr) rte_read32_relaxed((addr))
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#define FS_RING_REGS_SIZE 0x10000
+#define FS_RING_DESC_SIZE 8
+#define FS_RING_BD_ALIGN_ORDER 12
+#define FS_RING_BD_DESC_PER_REQ 32
+#define FS_RING_CMPL_ALIGN_ORDER 13
+#define FS_RING_CMPL_SIZE (1024 * FS_RING_DESC_SIZE)
+#define FS_RING_MAX_REQ_COUNT 1024
+#define FS_RING_PAGE_SHFT 12
+#define FS_RING_PAGE_SIZE BIT(FS_RING_PAGE_SHFT)
+
+/* Minimum and maximum number of requests supported */
+#define FS_RM_MAX_REQS 1024
+#define FS_RM_MIN_REQS 32
+
+#endif /* _BCMFS_RM_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
new file mode 100644
index 000000000..864e7bb74
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include <rte_atomic.h>
+#include <rte_bitmap.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_hw_defs.h"
+
+/* TX or submission queue name */
+static const char *txq_name = "tx";
+/* Completion or receive queue name */
+static const char *cmplq_name = "cmpl";
+
+/* Helper function */
+static int
+bcmfs_qp_check_queue_alignment(uint64_t phys_addr,
+ uint32_t align)
+{
+ if (((align - 1) & phys_addr) != 0)
+ return -EINVAL;
+ return 0;
+}
+
+static void
+bcmfs_queue_delete(struct bcmfs_queue *queue,
+ uint16_t queue_pair_id)
+{
+ const struct rte_memzone *mz;
+ int status = 0;
+
+ if (queue == NULL) {
+ BCMFS_LOG(DEBUG, "Invalid queue");
+ return;
+ }
+ BCMFS_LOG(DEBUG, "Free ring %d type %d, memzone: %s",
+ queue_pair_id, queue->q_type, queue->memz_name);
+
+ mz = rte_memzone_lookup(queue->memz_name);
+ if (mz != NULL) {
+ /* Write an unused pattern to the queue memory. */
+ memset(queue->base_addr, 0x9B, queue->queue_size);
+ status = rte_memzone_free(mz);
+ if (status != 0)
+ BCMFS_LOG(ERR, "Error %d on freeing queue %s",
+ status, queue->memz_name);
+ } else {
+ BCMFS_LOG(DEBUG, "queue %s doesn't exist",
+ queue->memz_name);
+ }
+}
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+ int socket_id, unsigned int align)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(queue_name);
+ if (mz != NULL) {
+ if (((size_t)queue_size <= mz->len) &&
+ (socket_id == SOCKET_ID_ANY ||
+ socket_id == mz->socket_id)) {
+ BCMFS_LOG(DEBUG, "re-use memzone already "
+ "allocated for %s", queue_name);
+ return mz;
+ }
+
+ BCMFS_LOG(ERR, "Incompatible memzone already "
+ "allocated %s, size %u, socket %d. "
+ "Requested size %u, socket %u",
+ queue_name, (uint32_t)mz->len,
+ mz->socket_id, queue_size, socket_id);
+ return NULL;
+ }
+
+ BCMFS_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+ queue_name, queue_size, socket_id);
+ return rte_memzone_reserve_aligned(queue_name, queue_size,
+ socket_id, RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+static int
+bcmfs_queue_create(struct bcmfs_queue *queue,
+ struct bcmfs_qp_config *qp_conf,
+ uint16_t queue_pair_id,
+ enum bcmfs_queue_type qtype)
+{
+ const struct rte_memzone *qp_mz;
+ char q_name[16];
+ unsigned int align;
+ uint32_t queue_size_bytes;
+ int ret;
+
+ if (qtype == BCMFS_RM_TXQ) {
+ strlcpy(q_name, txq_name, sizeof(q_name));
+ align = 1U << FS_RING_BD_ALIGN_ORDER;
+ queue_size_bytes = qp_conf->nb_descriptors *
+ qp_conf->max_descs_req * FS_RING_DESC_SIZE;
+ /* round the queue size up to a multiple of 4K pages */
+ queue_size_bytes = RTE_ALIGN_MUL_CEIL(queue_size_bytes,
+ FS_RING_PAGE_SIZE);
+ } else if (qtype == BCMFS_RM_CPLQ) {
+ strlcpy(q_name, cmplq_name, sizeof(q_name));
+ align = 1U << FS_RING_CMPL_ALIGN_ORDER;
+
+ /*
+ * Memory size for cmpl + MSI
+ * The MSI area is allocated here as well, so reserve
+ * twice the completion ring size.
+ */
+ queue_size_bytes = 2 * FS_RING_CMPL_SIZE;
+ } else {
+ BCMFS_LOG(ERR, "Invalid queue selection");
+ return -EINVAL;
+ }
+
+ queue->q_type = qtype;
+
+ /*
+ * Allocate a memzone for the queue - create a unique name.
+ */
+ snprintf(queue->memz_name, sizeof(queue->memz_name),
+ "%s_%d_%s_%d_%s", "bcmfs", qtype, "qp_mem",
+ queue_pair_id, q_name);
+ qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+ 0, align);
+ if (qp_mz == NULL) {
+ BCMFS_LOG(ERR, "Failed to allocate ring memzone");
+ return -ENOMEM;
+ }
+
+ if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
+ BCMFS_LOG(ERR, "Invalid alignment on queue create "
+ "0x%" PRIx64 "\n",
+ qp_mz->iova);
+ ret = -EFAULT;
+ goto queue_create_err;
+ }
+
+ queue->base_addr = (char *)qp_mz->addr;
+ queue->base_phys_addr = qp_mz->iova;
+ queue->queue_size = queue_size_bytes;
+
+ return 0;
+
+queue_create_err:
+ rte_memzone_free(qp_mz);
+
+ return ret;
+}
+
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr)
+{
+ struct bcmfs_qp *qp = *qp_addr;
+
+ if (qp == NULL) {
+ BCMFS_LOG(DEBUG, "qp already freed");
+ return 0;
+ }
+
+ /* Don't free memory if there are still responses to be processed */
+ if ((qp->stats.enqueued_count - qp->stats.dequeued_count) == 0) {
+ /* Stop the h/w ring */
+ qp->ops->stopq(qp);
+ /* Delete the queue pairs */
+ bcmfs_queue_delete(&qp->tx_q, qp->qpair_id);
+ bcmfs_queue_delete(&qp->cmpl_q, qp->qpair_id);
+ } else {
+ return -EAGAIN;
+ }
+
+ rte_bitmap_reset(qp->ctx_bmp);
+ rte_free(qp->ctx_bmp_mem);
+ rte_free(qp->ctx_pool);
+
+ rte_free(qp);
+ *qp_addr = NULL;
+
+ return 0;
+}
+
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *qp_conf)
+{
+ struct bcmfs_qp *qp;
+ uint32_t bmp_size;
+ uint32_t nb_descriptors = qp_conf->nb_descriptors;
+ uint16_t i;
+ int rc;
+
+ if (nb_descriptors < FS_RM_MIN_REQS) {
+ BCMFS_LOG(ERR, "Can't create qp for %u descriptors",
+ nb_descriptors);
+ return -EINVAL;
+ }
+
+ if (nb_descriptors > FS_RM_MAX_REQS)
+ nb_descriptors = FS_RM_MAX_REQS;
+
+ if (qp_conf->iobase == NULL) {
+ BCMFS_LOG(ERR, "IO config space null");
+ return -EINVAL;
+ }
+
+ qp = rte_zmalloc_socket("BCM FS PMD qp metadata",
+ sizeof(*qp), RTE_CACHE_LINE_SIZE,
+ qp_conf->socket_id);
+ if (qp == NULL) {
+ BCMFS_LOG(ERR, "Failed to alloc mem for qp struct");
+ return -ENOMEM;
+ }
+
+ qp->qpair_id = queue_pair_id;
+ qp->ioreg = qp_conf->iobase;
+ qp->nb_descriptors = nb_descriptors;
+
+ qp->stats.enqueued_count = 0;
+ qp->stats.dequeued_count = 0;
+
+ rc = bcmfs_queue_create(&qp->tx_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_TXQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Tx queue create failed queue_pair_id %u",
+ queue_pair_id);
+ goto create_err;
+ }
+
+ rc = bcmfs_queue_create(&qp->cmpl_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_CPLQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Cmpl queue create failed queue_pair_id= %u",
+ queue_pair_id);
+ goto q_create_err;
+ }
+
+ /* ctx saving bitmap */
+ bmp_size = rte_bitmap_get_memory_footprint(nb_descriptors);
+
+ /* Allocate memory for bitmap */
+ qp->ctx_bmp_mem = rte_zmalloc("ctx_bmp_mem", bmp_size,
+ RTE_CACHE_LINE_SIZE);
+ if (qp->ctx_bmp_mem == NULL) {
+ rc = -ENOMEM;
+ goto qp_create_err;
+ }
+
+ /* Initialize pool resource bitmap array */
+ qp->ctx_bmp = rte_bitmap_init(nb_descriptors, qp->ctx_bmp_mem,
+ bmp_size);
+ if (qp->ctx_bmp == NULL) {
+ rc = -EINVAL;
+ goto bmap_mem_free;
+ }
+
+ /* Mark all pools available */
+ for (i = 0; i < nb_descriptors; i++)
+ rte_bitmap_set(qp->ctx_bmp, i);
+
+ /* Allocate memory for context */
+ qp->ctx_pool = rte_zmalloc("qp_ctx_pool",
+ sizeof(unsigned long) *
+ nb_descriptors, 0);
+ if (qp->ctx_pool == NULL) {
+ BCMFS_LOG(ERR, "ctx allocation pool fails");
+ rc = -ENOMEM;
+ goto bmap_free;
+ }
+
+ /* Start h/w ring */
+ qp->ops->startq(qp);
+
+ *qp_addr = qp;
+
+ return 0;
+
+bmap_free:
+ rte_bitmap_reset(qp->ctx_bmp);
+bmap_mem_free:
+ rte_free(qp->ctx_bmp_mem);
+qp_create_err:
+ bcmfs_queue_delete(&qp->cmpl_q, queue_pair_id);
+q_create_err:
+ bcmfs_queue_delete(&qp->tx_q, queue_pair_id);
+create_err:
+ rte_free(qp);
+
+ return rc;
+}
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ register uint32_t nb_ops_sent = 0;
+ uint16_t nb_ops_possible = nb_ops;
+ int ret;
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ while (nb_ops_sent != nb_ops_possible) {
+ ret = tmp_qp->ops->enq_one_req(qp, *ops);
+ if (ret != 0) {
+ tmp_qp->stats.enqueue_err_count++;
+ /* This message cannot be enqueued */
+ if (nb_ops_sent == 0)
+ return 0;
+ goto ring_db;
+ }
+
+ ops++;
+ nb_ops_sent++;
+ }
+
+ring_db:
+ tmp_qp->stats.enqueued_count += nb_ops_sent;
+ tmp_qp->ops->ring_db(tmp_qp);
+
+ return nb_ops_sent;
+}
+
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ uint32_t deq = tmp_qp->ops->dequeue(tmp_qp, ops, nb_ops);
+
+ tmp_qp->stats.dequeued_count += deq;
+
+ return deq;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
new file mode 100644
index 000000000..027d7a50c
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_QP_H_
+#define _BCMFS_QP_H_
+
+#include <rte_memzone.h>
+
+/* Maximum number of h/w queues supported by device */
+#define BCMFS_MAX_HW_QUEUES 32
+
+/* H/W queue IO address space len */
+#define BCMFS_HW_QUEUE_IO_ADDR_LEN (64 * 1024)
+
+/* Maximum size of device ops name */
+#define BCMFS_HW_OPS_NAMESIZE 32
+
+enum bcmfs_queue_type {
+ /* TX or submission queue */
+ BCMFS_RM_TXQ,
+ /* Completion or receive queue */
+ BCMFS_RM_CPLQ
+};
+
+struct bcmfs_qp_stats {
+ /* Count of all operations enqueued */
+ uint64_t enqueued_count;
+ /* Count of all operations dequeued */
+ uint64_t dequeued_count;
+ /* Total error count on operations enqueued */
+ uint64_t enqueue_err_count;
+ /* Total error count on operations dequeued */
+ uint64_t dequeue_err_count;
+};
+
+struct bcmfs_qp_config {
+ /* Socket to allocate memory on */
+ int socket_id;
+ /* Mapped iobase for qp */
+ void *iobase;
+ /* nb_descriptors or requests a h/w queue can accommodate */
+ uint16_t nb_descriptors;
+ /* Maximum number of h/w descriptors needed by a request */
+ uint16_t max_descs_req;
+};
+
+struct bcmfs_queue {
+ /* Base virt address */
+ void *base_addr;
+ /* Base iova */
+ rte_iova_t base_phys_addr;
+ /* Queue type */
+ enum bcmfs_queue_type q_type;
+ /* Queue size based on nb_descriptors and max_descs_reqs */
+ uint32_t queue_size;
+ union {
+ /* s/w pointer for tx h/w queue */
+ uint32_t tx_write_ptr;
+ /* s/w pointer for completion h/w queue */
+ uint32_t cmpl_read_ptr;
+ };
+ /* Memzone name */
+ char memz_name[RTE_MEMZONE_NAMESIZE];
+};
+
+struct bcmfs_qp {
+ /* Queue-pair ID */
+ uint16_t qpair_id;
+ /* Mapped IO address */
+ void *ioreg;
+ /* A TX queue */
+ struct bcmfs_queue tx_q;
+ /* A Completion queue */
+ struct bcmfs_queue cmpl_q;
+ /* Number of requests queue can accommodate */
+ uint32_t nb_descriptors;
+ /* Number of pending requests and enqueued to h/w queue */
+ uint16_t nb_pending_requests;
+ /* A pool which act as a hash for <request-ID and virt address> pair */
+ unsigned long *ctx_pool;
+ /* virt address for mem allocated for bitmap */
+ void *ctx_bmp_mem;
+ /* Bitmap */
+ struct rte_bitmap *ctx_bmp;
+ /* Associated stats */
+ struct bcmfs_qp_stats stats;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
+
+} __rte_cache_aligned;
+
+/* Structure defining h/w queue pair operations */
+struct bcmfs_hw_queue_pair_ops {
+ /* ops name */
+ char name[BCMFS_HW_OPS_NAMESIZE];
+ /* Enqueue an object */
+ int (*enq_one_req)(struct bcmfs_qp *qp, void *obj);
+ /* Ring doorbell */
+ void (*ring_db)(struct bcmfs_qp *qp);
+ /* Dequeue objects */
+ uint16_t (*dequeue)(struct bcmfs_qp *qp, void **obj,
+ uint16_t nb_ops);
+ /* Start the h/w queue */
+ int (*startq)(struct bcmfs_qp *qp);
+ /* Stop the h/w queue */
+ void (*stopq)(struct bcmfs_qp *qp);
+};
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr);
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *bcmfs_conf);
+
+#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index fd39eba20..7e2bcbf14 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -7,5 +7,6 @@ deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
- 'bcmfs_vfio.c'
+ 'bcmfs_vfio.c',
+ 'bcmfs_qp.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v1 4/8] crypto/bcmfs: add hw queue pair operations
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (2 preceding siblings ...)
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
@ 2020-08-12 6:31 ` Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
` (4 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair operations exported by supported devices.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 51 ++
drivers/crypto/bcmfs/bcmfs_device.h | 16 +
drivers/crypto/bcmfs/bcmfs_qp.c | 1 +
drivers/crypto/bcmfs/bcmfs_qp.h | 4 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 742 ++++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 +++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 46 ++
drivers/crypto/bcmfs/meson.build | 5 +-
10 files changed, 1652 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
diff --git a/drivers/crypto/bcmfs/bcmfs_dev_msg.h b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
new file mode 100644
index 000000000..5b50bde35
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_MSG_H_
+#define _BCMFS_DEV_MSG_H_
+
+#define MAX_SRC_ADDR_BUFFERS 8
+#define MAX_DST_ADDR_BUFFERS 3
+
+struct bcmfs_qp_message {
+ /** Physical address of each source */
+ uint64_t srcs_addr[MAX_SRC_ADDR_BUFFERS];
+ /** Length of each sources */
+ uint32_t srcs_len[MAX_SRC_ADDR_BUFFERS];
+ /** Total number of sources */
+ unsigned int srcs_count;
+ /** Physical address of each destination */
+ uint64_t dsts_addr[MAX_DST_ADDR_BUFFERS];
+ /** Length of each destination */
+ uint32_t dsts_len[MAX_DST_ADDR_BUFFERS];
+ /** Total number of destinations */
+ unsigned int dsts_count;
+
+ void *ctx;
+};
+
+#endif /* _BCMFS_DEV_MSG_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index b475c2933..bd2d64acf 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -43,6 +43,47 @@ static struct bcmfs_device_attr dev_table[] = {
}
};
+struct bcmfs_hw_queue_pair_ops_table bcmfs_hw_queue_pair_ops_table = {
+ .tl = RTE_SPINLOCK_INITIALIZER,
+ .num_ops = 0
+};
+
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *h)
+{
+ struct bcmfs_hw_queue_pair_ops *ops;
+ int16_t ops_index;
+
+ rte_spinlock_lock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ if (h->enq_one_req == NULL || h->dequeue == NULL ||
+ h->ring_db == NULL || h->startq == NULL || h->stopq == NULL) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR,
+ "Missing callback while registering device ops");
+ return -EINVAL;
+ }
+
+ if (strlen(h->name) >= sizeof(ops->name) - 1) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR, "%s(): fs device_ops <%s>: name too long",
+ __func__, h->name);
+ return -EEXIST;
+ }
+
+ ops_index = bcmfs_hw_queue_pair_ops_table.num_ops++;
+ ops = &bcmfs_hw_queue_pair_ops_table.qp_ops[ops_index];
+ strlcpy(ops->name, h->name, sizeof(ops->name));
+ ops->enq_one_req = h->enq_one_req;
+ ops->dequeue = h->dequeue;
+ ops->ring_db = h->ring_db;
+ ops->startq = h->startq;
+ ops->stopq = h->stopq;
+
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ return ops_index;
+}
+
TAILQ_HEAD(fsdev_list, bcmfs_device);
static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
@@ -53,6 +94,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
enum bcmfs_device_type dev_type __rte_unused)
{
struct bcmfs_device *fsdev;
+ uint32_t i;
fsdev = calloc(1, sizeof(*fsdev));
if (!fsdev)
@@ -68,6 +110,15 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
goto cleanup;
}
+ /* check whether a registered ops name is present in the directory path */
+ for (i = 0; i < bcmfs_hw_queue_pair_ops_table.num_ops; i++)
+ if (strstr(dirpath,
+ bcmfs_hw_queue_pair_ops_table.qp_ops[i].name))
+ fsdev->sym_hw_qp_ops =
+ &bcmfs_hw_queue_pair_ops_table.qp_ops[i];
+ if (!fsdev->sym_hw_qp_ops)
+ goto cleanup;
+
strcpy(fsdev->dirname, dirpath);
strcpy(fsdev->name, devname);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index e03ce5b5b..96beb10fa 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -8,6 +8,7 @@
#include <sys/queue.h>
+#include <rte_spinlock.h>
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
@@ -28,6 +29,19 @@ enum bcmfs_device_type {
BCMFS_UNKNOWN
};
+/* A table to store registered queue pair operations */
+struct bcmfs_hw_queue_pair_ops_table {
+ rte_spinlock_t tl;
+ /* Number of used ops structs in the table. */
+ uint32_t num_ops;
+ /* Storage for all possible ops structs. */
+ struct bcmfs_hw_queue_pair_ops qp_ops[BCMFS_MAX_NODES];
+};
+
+/* HW queue pair ops register function */
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops
+ *qp_ops);
+
struct bcmfs_device {
TAILQ_ENTRY(bcmfs_device) next;
/* Directoy path for vfio */
@@ -46,6 +60,8 @@ struct bcmfs_device {
uint16_t max_hw_qps;
/* current qpairs in use */
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
+ /* queue pair ops exported by symmetric crypto hw */
+ struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index 864e7bb74..ec1327b78 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -227,6 +227,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
qp->qpair_id = queue_pair_id;
qp->ioreg = qp_conf->iobase;
qp->nb_descriptors = nb_descriptors;
+ qp->ops = qp_conf->ops;
qp->stats.enqueued_count = 0;
qp->stats.dequeued_count = 0;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 027d7a50c..e4b0c3f2f 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -44,6 +44,8 @@ struct bcmfs_qp_config {
uint16_t nb_descriptors;
/* Maximum number of h/w descriptors needed by a request */
uint16_t max_descs_req;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
};
struct bcmfs_queue {
@@ -61,6 +63,8 @@ struct bcmfs_queue {
/* s/w pointer for completion h/w queue*/
uint32_t cmpl_read_ptr;
};
+ /* number of in-flight descriptors accumulated before the next doorbell ring */
+ uint16_t descs_inflight;
/* Memzone name */
char memz_name[RTE_MEMZONE_NAMESIZE];
};
diff --git a/drivers/crypto/bcmfs/hw/bcmfs4_rm.c b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
new file mode 100644
index 000000000..c1cd1b813
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
@@ -0,0 +1,742 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_rm_common.h"
+
+/* FS4 configuration */
+#define RING_BD_TOGGLE_INVALID(offset) \
+ (((offset) >> FS_RING_BD_ALIGN_ORDER) & 0x1)
+#define RING_BD_TOGGLE_VALID(offset) \
+ (!RING_BD_TOGGLE_INVALID(offset))
+
+#define RING_VER_MAGIC 0x76303031
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDR 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << FS_RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_CMPL_ALIGN_ORDER) & 0x7ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* ====== Broadcom FS4-RM ring descriptor defines ===== */
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 58
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 58
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 58
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs4_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return type == NPTR_TYPE;
+}
+
+static uint64_t
+bcmfs4_next_table_desc(uint32_t toggle, uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NPTR_TOGGLE_SHIFT, NPTR_TOGGLE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_null_desc(uint32_t toggle)
+{
+ return (rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NULL_TOGGLE_SHIFT, NULL_TOGGLE_MASK));
+}
+
+static void
+bcmfs4_flip_header_toggle(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+
+ if (desc & ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT))
+ desc &= ~((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+ else
+ desc |= ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+
+ rm_write_desc(desc_ptr, desc);
+}
+
+static uint64_t
+bcmfs4_header_desc(uint32_t toggle, uint32_t startpkt,
+ uint32_t endpkt, uint32_t bdcount,
+ uint32_t flags, uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, HEADER_TOGGLE_SHIFT, HEADER_TOGGLE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT,
+ HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static void
+bcmfs4_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, uint32_t *toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _toggle, _startpkt, _endpkt, _bdcount;
+
+ /*
+ * Each request or packet starts with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is given by the BDCOUNT field of
+ * the HEADER descriptor. The max value of the BDCOUNT field is 31,
+ * which means we can only have 31 non-HEADER descriptors following
+ * one HEADER descriptor.
+ *
+ * In general use, the number of non-HEADER descriptors can easily
+ * go beyond 31. To handle this situation, we have packet (or
+ * request) extension bits (STARTPKT and ENDPKT) in the HEADER
+ * descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of a request
+ * (or packet) has STARTPKT=1 and ENDPKT=0, intermediate HEADER
+ * descriptors have STARTPKT=0 and ENDPKT=0, and the last HEADER
+ * descriptor has STARTPKT=0 and ENDPKT=1. Also, the TOGGLE bit of
+ * the first HEADER is set to the invalid state to ensure that the
+ * FlexDMA engine does not start fetching descriptors until all
+ * descriptors are enqueued. The caller of this function flips the
+ * TOGGLE bit of the first HEADER after all descriptors are
+ * enqueued.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _toggle = (nhpos == 0) ? !(*toggle) : (*toggle);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs4_header_desc(_toggle, _startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+}
+
+static uint64_t
+bcmfs4_src_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_msrc_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_dst_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_mdst_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs4_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static uint32_t
+estimate_nonheader_desc_count(struct bcmfs_qp_message *msg)
+{
+ uint32_t cnt = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+
+ while (src < msg->srcs_count ||
+ dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ cnt++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+ while (dst_target && dst < msg->dsts_count) {
+ cnt++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++;
+ }
+ }
+
+ return cnt;
+}
+
+static void *
+bcmfs4_enqueue_msg(struct bcmfs_qp_message *msg,
+ uint32_t nhcnt, uint32_t reqid,
+ void *desc_ptr, uint32_t toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhpos = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+ void *orig_desc_ptr = desc_ptr;
+
+ if (!desc_ptr || !start_desc || !end_desc)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ while (src < msg->srcs_count || dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ if (msg->srcs_len[src] & 0xf) {
+ d = bcmfs4_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ } else {
+ d = bcmfs4_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+
+ while (dst_target && (dst < msg->dsts_count)) {
+ if (msg->dsts_len[dst] & 0xf) {
+ d = bcmfs4_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ } else {
+ d = bcmfs4_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++; /* for next buffer */
+ }
+ }
+
+ /* Null descriptor with invalid toggle bit */
+ rm_write_desc(desc_ptr, bcmfs4_null_desc(!toggle));
+
+ /* Ensure that descriptors have been written to memory */
+ rte_smp_wmb();
+
+ bcmfs4_flip_header_toggle(orig_desc_ptr);
+
+ return desc_ptr;
+}
+
+static int
+bcmfs4_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ int reqid;
+ void *next;
+ uint32_t nhcnt;
+ int ret = 0;
+ uint32_t pos = 0;
+ uint64_t slab = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs4_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /*
+ * Number of required descriptors = number of non-header descriptors +
+ *                                  number of header descriptors +
+ *                                  1 null descriptor
+ */
+ nhcnt = estimate_nonheader_desc_count(msg);
+
+ /* Write descriptors to ring */
+ next = bcmfs4_enqueue_msg(msg, nhcnt, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ RING_BD_TOGGLE_VALID(txq->tx_write_ptr),
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void
+bcmfs4_ring_doorbell_qp(struct bcmfs_qp *qp __rte_unused)
+{
+ /* no door bell method supported */
+}
+
+static uint16_t
+bcmfs4_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that we
+ * never process more completions than are outstanding.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get the current completion read and write offsets.
+ * Note: We should read the completion write pointer at least once
+ * after we get an MSI interrupt because HW maintains an internal
+ * MSI status which will allow the next MSI interrupt only after
+ * the completion write pointer is read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ rte_smp_rmb();
+
+ /* For each completed request, hand the saved context back to the caller */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Determine message pointer based on reqid */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs4_start_qp(struct bcmfs_qp *qp)
+{
+ int timeout;
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/inactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs4_next_table_desc(RING_BD_TOGGLE_VALID(off),
+ next_addr);
+ else
+ d = bcmfs4_null_desc(RING_BD_TOGGLE_INVALID(off));
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test mid-run (Ctrl+C), all subsequent
+ * test runs will fail because the sw cmpl_read_offset and hw
+ * cmpl_write_offset will point at different completion BDs. To
+ * handle this we flush all the rings at startup instead of in the
+ * shutdown function.
+ * A ring flush resets the hw cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, sw has to read
+ * cmpl_write_offset, otherwise the DME/AE will not come
+ * out of the flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ val = BD_START_ADDR_VALUE(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_BD_START_ADDR);
+
+ /* BD write pointer will be same as HW write pointer */
+ tx_queue->tx_write_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_BD_WRITE_PTR);
+ tx_queue->tx_write_ptr *= FS_RING_DESC_SIZE;
+
+ /* Zero the completion ring descriptors */
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Program completion start address */
+ val = CMPL_START_ADDR_VALUE(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CMPL_START_ADDR);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ /* We leave 1k * FS_RING_DESC_SIZE size from base phys for MSI */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id,
+ (uint8_t *)qp->ioreg + RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs4_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/inactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs4_qp_ops = {
+ .name = "fs4",
+ .enq_one_req = bcmfs4_enqueue_single_request_qp,
+ .ring_db = bcmfs4_ring_doorbell_qp,
+ .dequeue = bcmfs4_dequeue_qp,
+ .startq = bcmfs4_start_qp,
+ .stopq = bcmfs4_shutdown_qp,
+};
+
+RTE_INIT(bcmfs4_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs4_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
new file mode 100644
index 000000000..fd92121da
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
@@ -0,0 +1,677 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_qp.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_device.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Ring version */
+#define RING_VER_MAGIC 0x76303032
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDRESS_LSB 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR_LSB 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+#define RING_BD_START_ADDRESS_MSB 0x078
+#define RING_CMPL_START_ADDR_MSB 0x07c
+#define RING_DOORBELL_BD_WRITE_COUNT 0x074
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_CMPL_ALIGN_ORDER) & 0x07ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 59
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 59
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 59
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs5_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return type == NPTR_TYPE;
+}
+
+static uint64_t
+bcmfs5_next_table_desc(uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_null_desc(void)
+{
+ return rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+}
+
+static uint64_t
+bcmfs5_header_desc(uint32_t startpkt, uint32_t endpkt,
+ uint32_t bdcount, uint32_t flags,
+ uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT, HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static int
+bcmfs5_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, void *start_desc,
+ void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _startpkt, _endpkt, _bdcount;
+ int is_nxt_page = 0;
+
+ /*
+ * Each request or packet starts with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is given by the BDCOUNT field of
+ * the HEADER descriptor. The max value of the BDCOUNT field is 31,
+ * which means we can only have 31 non-HEADER descriptors following
+ * one HEADER descriptor.
+ *
+ * In general use, the number of non-HEADER descriptors can easily
+ * go beyond 31. To handle this situation, we have packet (or
+ * request) extension bits (STARTPKT and ENDPKT) in the HEADER
+ * descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of a request
+ * (or packet) has STARTPKT=1 and ENDPKT=0, intermediate HEADER
+ * descriptors have STARTPKT=0 and ENDPKT=0, and the last HEADER
+ * descriptor has STARTPKT=0 and ENDPKT=1.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs5_header_desc(_startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+
+ return is_nxt_page;
+}
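As a standalone illustration of the packet-extension scheme described in the comment above, the following sketch models how `bcmfs5_enqueue_desc()` splits `nhcnt` non-HEADER descriptors across HEADER descriptors; `HDR_BDCOUNT_MAX` is a local stand-in for the driver's `HEADER_BDCOUNT_MAX` (31):

```c
#include <assert.h>

#define HDR_BDCOUNT_MAX 31 /* stand-in for HEADER_BDCOUNT_MAX */

/*
 * Fill per-HEADER (startpkt, endpkt, bdcount) triples for a request
 * with nhcnt non-HEADER descriptors; returns the HEADER count.
 */
static int
split_headers(unsigned int nhcnt, unsigned int startpkt[],
	      unsigned int endpkt[], unsigned int bdcount[])
{
	unsigned int nhpos = 0;
	int n = 0;

	while (nhpos < nhcnt) {
		unsigned int nhavail = nhcnt - nhpos;

		startpkt[n] = (nhpos == 0) ? 1 : 0;
		endpkt[n] = (nhavail <= HDR_BDCOUNT_MAX) ? 1 : 0;
		bdcount[n] = (nhavail <= HDR_BDCOUNT_MAX) ?
			     nhavail : HDR_BDCOUNT_MAX;
		nhpos += bdcount[n];
		n++;
	}
	return n;
}
```

A request with 70 non-HEADER descriptors, for example, needs three HEADER descriptors: (STARTPKT=1, ENDPKT=0, BDCOUNT=31), (0, 0, 31) and (0, 1, 8).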
+
+static uint64_t
+bcmfs5_src_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_msrc_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_dst_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_mdst_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs5_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
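The length rule enforced above — byte-granular SRC/DST descriptors when the length is not a 16-byte multiple, MSRC/MDST with the length in 16-byte units otherwise — can be expressed as a standalone predicate. The mask values below are placeholders for the driver's real `SRC_LENGTH_MASK`/`MSRC_LENGTH_MASK`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder widths; the real values come from the hw defs */
#define SRC_LEN_MAX  0xffffffULL
#define MSRC_LEN_MAX 0xffffffULL

/*
 * A length that is not a 16-byte multiple must fit the SRC byte-length
 * field; a 16-byte multiple is carried as MSRC in 16-byte units, which
 * extends the maximum reachable transfer size by a factor of 16.
 */
static bool
src_len_ok(uint64_t len)
{
	if (len & 0xf)
		return len <= SRC_LEN_MAX;
	return len <= MSRC_LEN_MAX * 16;
}
```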
+
+static void *
+bcmfs5_enqueue_msg(struct bcmfs_queue *txq,
+ struct bcmfs_qp_message *msg,
+ uint32_t reqid, void *desc_ptr,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ unsigned int src, dst;
+ uint32_t nhpos = 0;
+ int nxt_page = 0;
+ uint32_t nhcnt = msg->srcs_count + msg->dsts_count;
+
+ if (desc_ptr == NULL || start_desc == NULL || end_desc == NULL)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ for (src = 0; src < msg->srcs_count; src++) {
+ if (msg->srcs_len[src] & 0xf)
+ d = bcmfs5_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ else
+ d = bcmfs5_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ for (dst = 0; dst < msg->dsts_count; dst++) {
+ if (msg->dsts_len[dst] & 0xf)
+ d = bcmfs5_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ else
+ d = bcmfs5_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ txq->descs_inflight += nhcnt + 1;
+
+ return desc_ptr;
+}
+
+static int
+bcmfs5_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ void *next;
+ int reqid;
+ int ret = 0;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs5_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /* Write descriptors to ring */
+ next = bcmfs5_enqueue_msg(txq, msg, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
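The request-id allocation above relies on `rte_bitmap_scan()` returning a bit position and a 64-bit slab; the free id is the slab base position plus the index of the slab's lowest set bit. A minimal sketch of that computation, using the GCC/Clang `__builtin_ctzll` builtin as the driver does (the `pos`/`slab` values in the example are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Free request id = slab base position + lowest set bit in the slab */
static int
reqid_from_scan(uint32_t pos, uint64_t slab)
{
	return (int)(pos + __builtin_ctzll(slab));
}
```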
+
+static void bcmfs5_write_doorbell(struct bcmfs_qp *qp)
+{
+ struct bcmfs_queue *txq = &qp->tx_q;
+
+ /* sync memory before ringing the doorbell */
+ rte_wmb();
+
+ FS_MMIO_WRITE32(txq->descs_inflight,
+ (uint8_t *)qp->ioreg + RING_DOORBELL_BD_WRITE_COUNT);
+
+ /* reset the count */
+ txq->descs_inflight = 0;
+}
+
+static uint16_t
+bcmfs5_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so we
+ * never try to process more completions than are outstanding.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get current completion read and write offset
+ *
+ * Note: we should read the completion write pointer at least once
+ * after we get an MSI interrupt, because the HW maintains an
+ * internal MSI status that allows the next MSI interrupt only
+ * after the completion write pointer has been read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* read the ring cmpl write ptr before cmpl read offset */
+ rte_smp_rmb();
+
+ /* For each completed request, return the saved context to the caller */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Retrieve context */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
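The read-offset arithmetic in the dequeue loop above is a plain circular advance over the completion ring. A sketch with illustrative sizes (`DESC_SIZE` matches the 8-byte descriptor width; the completion-ring size here is an assumed placeholder for `FS_RING_CMPL_SIZE`):

```c
#include <assert.h>
#include <stdint.h>

#define DESC_SIZE 8u    /* descriptors are 64-bit words */
#define CMPL_SIZE 1024u /* illustrative completion-ring size in bytes */

/* Advance the completion read offset by one descriptor, wrapping at
 * the end of the ring, as the loop in bcmfs5_dequeue_qp() does. */
static uint32_t
cmpl_next(uint32_t off)
{
	off += DESC_SIZE;
	if (off == CMPL_SIZE)
		off = 0;
	return off;
}
```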
+
+static int
+bcmfs5_start_qp(struct bcmfs_qp *qp)
+{
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ int timeout;
+ uint32_t bd_high, bd_low, cmpl_high, cmpl_low;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/inactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs5_next_table_desc(next_addr);
+ else
+ d = bcmfs5_null_desc();
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test mid-run (Ctrl+C), all subsequent
+ * runs will fail because the SW cmpl_read_offset and the HW
+ * cmpl_write_offset end up pointing at different completion BDs.
+ * To handle this, flush all rings at startup rather than in the
+ * shutdown function; a ring flush resets the HW cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, the SW must read
+ * cmpl_write_offset or the DME/AE will not come out of the
+ * flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ bd_low = lower_32_bits(tx_queue->base_phys_addr);
+ bd_high = upper_32_bits(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(bd_low, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_LSB);
+ FS_MMIO_WRITE32(bd_high, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_MSB);
+
+ tx_queue->tx_write_ptr = 0;
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ /* Program completion start address */
+ cmpl_low = lower_32_bits(cmpl_queue->base_phys_addr);
+ cmpl_high = upper_32_bits(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(cmpl_low, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_LSB);
+ FS_MMIO_WRITE32(cmpl_high, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_MSB);
+
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id, (uint8_t *)qp->ioreg +
+ RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs5_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/inactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs5_qp_ops = {
+ .name = "fs5",
+ .enq_one_req = bcmfs5_enqueue_single_request_qp,
+ .ring_db = bcmfs5_write_doorbell,
+ .dequeue = bcmfs5_dequeue_qp,
+ .startq = bcmfs5_start_qp,
+ .stopq = bcmfs5_shutdown_qp,
+};
+
+RTE_INIT(bcmfs5_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs5_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
new file mode 100644
index 000000000..9445d28f9
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Completion descriptor format */
+#define FS_CMPL_OPAQUE_SHIFT 0
+#define FS_CMPL_OPAQUE_MASK 0xffff
+#define FS_CMPL_ENGINE_STATUS_SHIFT 16
+#define FS_CMPL_ENGINE_STATUS_MASK 0xffff
+#define FS_CMPL_DME_STATUS_SHIFT 32
+#define FS_CMPL_DME_STATUS_MASK 0xffff
+#define FS_CMPL_RM_STATUS_SHIFT 48
+#define FS_CMPL_RM_STATUS_MASK 0xffff
+/* Completion RM status code */
+#define FS_RM_STATUS_CODE_SHIFT 0
+#define FS_RM_STATUS_CODE_MASK 0x3ff
+#define FS_RM_STATUS_CODE_GOOD 0x0
+#define FS_RM_STATUS_CODE_AE_TIMEOUT 0x3ff
+
+
+/* Completion DME status code */
+#define FS_DME_STATUS_MEM_COR_ERR BIT(0)
+#define FS_DME_STATUS_MEM_UCOR_ERR BIT(1)
+#define FS_DME_STATUS_FIFO_UNDRFLOW BIT(2)
+#define FS_DME_STATUS_FIFO_OVERFLOW BIT(3)
+#define FS_DME_STATUS_RRESP_ERR BIT(4)
+#define FS_DME_STATUS_BRESP_ERR BIT(5)
+#define FS_DME_STATUS_ERROR_MASK (FS_DME_STATUS_MEM_COR_ERR | \
+ FS_DME_STATUS_MEM_UCOR_ERR | \
+ FS_DME_STATUS_FIFO_UNDRFLOW | \
+ FS_DME_STATUS_FIFO_OVERFLOW | \
+ FS_DME_STATUS_RRESP_ERR | \
+ FS_DME_STATUS_BRESP_ERR)
+
+/* APIs related to ring manager descriptors */
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask)
+{
+ return (val & mask) << shift;
+}
+
+uint64_t
+rm_read_desc(void *desc_ptr)
+{
+ return le64_to_cpu(*((uint64_t *)desc_ptr));
+}
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc)
+{
+ *((uint64_t *)desc_ptr) = cpu_to_le64(desc);
+}
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc)
+{
+ return (uint32_t)(cmpl_desc & FS_CMPL_OPAQUE_MASK);
+}
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc)
+{
+ uint32_t status;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_DME_STATUS_SHIFT,
+ FS_CMPL_DME_STATUS_MASK);
+ if (status & FS_DME_STATUS_ERROR_MASK)
+ return -EIO;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_RM_STATUS_SHIFT,
+ FS_CMPL_RM_STATUS_MASK);
+ status &= FS_RM_STATUS_CODE_MASK;
+ if (status == FS_RM_STATUS_CODE_AE_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
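The completion-descriptor layout defined at the top of this file (OPAQUE in bits 0-15, engine status in 16-31, DME status in 32-47, RM status in 48-63) can be exercised in isolation. This sketch mirrors `rm_cmpl_desc_to_reqid()`/`rm_cmpl_desc_to_error()` with the same shifts and masks:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define CMPL_OPAQUE_SHIFT     0
#define CMPL_OPAQUE_MASK      0xffffULL
#define CMPL_DME_STATUS_SHIFT 32
#define CMPL_DME_STATUS_MASK  0xffffULL
#define CMPL_RM_STATUS_SHIFT  48
#define CMPL_RM_STATUS_MASK   0xffffULL
#define RM_STATUS_CODE_MASK   0x3ffULL
#define RM_STATUS_AE_TIMEOUT  0x3ffULL
#define DME_STATUS_ERR_MASK   0x3fULL /* OR of the six DME error bits */

#define DEC(d, s, m) (((d) >> (s)) & (m))

static uint32_t
cmpl_to_reqid(uint64_t d)
{
	return (uint32_t)DEC(d, CMPL_OPAQUE_SHIFT, CMPL_OPAQUE_MASK);
}

static int
cmpl_to_error(uint64_t d)
{
	if (DEC(d, CMPL_DME_STATUS_SHIFT, CMPL_DME_STATUS_MASK) &
	    DME_STATUS_ERR_MASK)
		return -EIO;
	if ((DEC(d, CMPL_RM_STATUS_SHIFT, CMPL_RM_STATUS_MASK) &
	     RM_STATUS_CODE_MASK) == RM_STATUS_AE_TIMEOUT)
		return -ETIMEDOUT;
	return 0;
}
```

A descriptor with only the OPAQUE field set decodes to its request id with no error; setting any DME error bit yields -EIO, and the all-ones RM status code yields -ETIMEDOUT.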
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
new file mode 100644
index 000000000..5cbafa0da
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_COMMON_H_
+#define _BCMFS_RM_COMMON_H_
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* Descriptor helper macros */
+#define FS_DESC_DEC(d, s, m) (((d) >> (s)) & (m))
+
+#define FS_RING_BD_ALIGN_CHECK(addr) \
+ (!((addr) & ((0x1 << FS_RING_BD_ALIGN_ORDER) - 1)))
+
+#define cpu_to_le64 rte_cpu_to_le_64
+#define cpu_to_le32 rte_cpu_to_le_32
+#define cpu_to_le16 rte_cpu_to_le_16
+
+#define le64_to_cpu rte_le_to_cpu_64
+#define le32_to_cpu rte_le_to_cpu_32
+#define le16_to_cpu rte_le_to_cpu_16
+
+#define lower_32_bits(x) ((uint32_t)(x))
+#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16))
+
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask);
+uint64_t
+rm_read_desc(void *desc_ptr);
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc);
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc);
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc);
+
+#endif /* _BCMFS_RM_COMMON_H_ */
+
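The `lower_32_bits()`/`upper_32_bits()` helpers above split a 64-bit DMA address for the LSB/MSB register pairs; the double `>> 16` in `upper_32_bits()` avoids a single shift by 32, which would be undefined behavior if the macro were ever applied to a 32-bit value. A quick check:

```c
#include <assert.h>
#include <stdint.h>

#define lower_32_bits(x) ((uint32_t)(x))
#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16))
```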
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 7e2bcbf14..cd58bd5e2 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -8,5 +8,8 @@ sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
'bcmfs_vfio.c',
- 'bcmfs_qp.c'
+ 'bcmfs_qp.c',
+ 'hw/bcmfs4_rm.c',
+ 'hw/bcmfs5_rm.c',
+ 'hw/bcmfs_rm_common.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v1 5/8] crypto/bcmfs: create a symmetric cryptodev
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (3 preceding siblings ...)
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
@ 2020-08-12 6:31 ` Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
` (3 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Create a symmetric crypto device and supported cryptodev ops.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 15 ++
drivers/crypto/bcmfs/bcmfs_device.h | 9 +
drivers/crypto/bcmfs/bcmfs_qp.c | 37 +++
drivers/crypto/bcmfs/bcmfs_qp.h | 16 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 387 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +++
drivers/crypto/bcmfs/bcmfs_sym_req.h | 22 ++
drivers/crypto/bcmfs/meson.build | 3 +-
8 files changed, 526 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index bd2d64acf..c9263ec28 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -13,6 +13,7 @@
#include "bcmfs_logs.h"
#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
+#include "bcmfs_sym_pmd.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -239,6 +240,7 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
char out_dirname[BCMFS_MAX_PATH_LEN];
uint32_t fsdev_dev[BCMFS_MAX_NODES];
enum bcmfs_device_type dtype;
+ int err;
int i = 0;
int dev_idx;
int count = 0;
@@ -290,7 +292,20 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
return -ENODEV;
}
+ err = bcmfs_sym_dev_create(fsdev);
+ if (err) {
+ BCMFS_LOG(WARNING,
+ "Failed to create BCMFS SYM PMD for device %s",
+ fsdev->name);
+ goto pmd_create_fail;
+ }
+
return 0;
+
+pmd_create_fail:
+ fsdev_release(fsdev);
+
+ return err;
}
static int
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 96beb10fa..37907b91f 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -62,6 +62,15 @@ struct bcmfs_device {
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
/* queue pair ops exported by symmetric crypto hw */
struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
+ /* a cryptodevice attached to bcmfs device */
+ struct rte_cryptodev *cdev;
+ /* a rte_device to register with cryptodev */
+ struct rte_device sym_rte_dev;
+ /* private info to keep with cryptodev */
+ struct bcmfs_sym_dev_private *sym_dev;
};
+/* stats exported by device */
+
+
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index ec1327b78..cb5ff6c61 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -344,3 +344,40 @@ bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
return deq;
}
+
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats)
+{
+ int i;
+
+ if (stats == NULL) {
+ BCMFS_LOG(ERR, "invalid param: stats %p",
+ stats);
+ return;
+ }
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+
+ stats->enqueued_count += qp[i]->stats.enqueued_count;
+ stats->dequeued_count += qp[i]->stats.dequeued_count;
+ stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+ stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+ }
+}
+
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp)
+{
+ int i;
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+ memset(&qp[i]->stats, 0, sizeof(qp[i]->stats));
+ }
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index e4b0c3f2f..fec58ca71 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -24,6 +24,13 @@ enum bcmfs_queue_type {
BCMFS_RM_CPLQ
};
+#define BCMFS_QP_IOBASE_XLATE(base, idx) \
+ ((base) + ((idx) * BCMFS_HW_QUEUE_IO_ADDR_LEN))
+
+/* Max pkts for preprocessing before submitting to h/w qp */
+#define BCMFS_MAX_REQS_BUFF 64
+
+/* qp stats */
struct bcmfs_qp_stats {
/* Count of all operations enqueued */
uint64_t enqueued_count;
@@ -92,6 +99,10 @@ struct bcmfs_qp {
struct bcmfs_qp_stats stats;
/* h/w ops associated with qp */
struct bcmfs_hw_queue_pair_ops *ops;
+ /* bcmfs requests pool */
+ struct rte_mempool *sr_mp;
+ /* a temporary buffer to keep message pointers */
+ struct bcmfs_qp_message *infl_msgs[BCMFS_MAX_REQS_BUFF];
} __rte_cache_aligned;
@@ -123,4 +134,9 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
uint16_t queue_pair_id,
struct bcmfs_qp_config *bcmfs_conf);
+/* stats functions */
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats);
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp);
+
#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
new file mode 100644
index 000000000..0f96915f7
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_req.h"
+
+uint8_t cryptodev_bcmfs_driver_id;
+
+static int bcmfs_sym_qp_release(struct rte_cryptodev *dev,
+ uint16_t queue_pair_id);
+
+static int
+bcmfs_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused struct rte_cryptodev_config *config)
+{
+ return 0;
+}
+
+static int
+bcmfs_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+static int
+bcmfs_sym_dev_close(struct rte_cryptodev *dev)
+{
+ int i, ret;
+
+ for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+ ret = bcmfs_sym_qp_release(dev, i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_info *dev_info)
+{
+ struct bcmfs_sym_dev_private *internals = dev->data->dev_private;
+ struct bcmfs_device *fsdev = internals->fsdev;
+
+ if (dev_info != NULL) {
+ dev_info->driver_id = cryptodev_bcmfs_driver_id;
+ dev_info->feature_flags = dev->feature_flags;
+ dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
+ /* No limit of number of sessions */
+ dev_info->sym.max_nb_sessions = 0;
+ }
+}
+
+static void
+bcmfs_sym_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+ struct bcmfs_qp_stats bcmfs_stats = {0};
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (stats == NULL || dev == NULL) {
+ BCMFS_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_get(fsdev->qps_in_use, fsdev->max_hw_qps, &bcmfs_stats);
+
+ stats->enqueued_count = bcmfs_stats.enqueued_count;
+ stats->dequeued_count = bcmfs_stats.dequeued_count;
+ stats->enqueue_err_count = bcmfs_stats.enqueue_err_count;
+ stats->dequeue_err_count = bcmfs_stats.dequeue_err_count;
+}
+
+static void
+bcmfs_sym_stats_reset(struct rte_cryptodev *dev)
+{
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (dev == NULL) {
+ BCMFS_LOG(ERR, "invalid cryptodev ptr %p", dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_reset(fsdev->qps_in_use, fsdev->max_hw_qps);
+}
+
+static int
+bcmfs_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+ struct bcmfs_sym_dev_private *bcmfs_private = dev->data->dev_private;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)
+ (dev->data->queue_pairs[queue_pair_id]);
+
+ BCMFS_LOG(DEBUG, "Release sym qp %u on device %d",
+ queue_pair_id, dev->data->dev_id);
+
+ rte_mempool_free(qp->sr_mp);
+
+ bcmfs_private->fsdev->qps_in_use[queue_pair_id] = NULL;
+
+ return bcmfs_qp_release((struct bcmfs_qp **)
+ &dev->data->queue_pairs[queue_pair_id]);
+}
+
+static void
+spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
+{
+ memset(sr, 0, sizeof(*sr));
+}
+
+static void
+req_pool_obj_init(__rte_unused struct rte_mempool *mp,
+ __rte_unused void *opaque, void *obj,
+ __rte_unused unsigned int obj_idx)
+{
+ spu_req_init(obj, rte_mempool_virt2iova(obj));
+}
+
+static struct rte_mempool *
+bcmfs_sym_req_pool_create(struct rte_cryptodev *cdev __rte_unused,
+ uint32_t nobjs, uint16_t qp_id,
+ int socket_id)
+{
+ char softreq_pool_name[RTE_RING_NAMESIZE];
+ struct rte_mempool *mp;
+
+ snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_%d",
+ "bcm_sym", qp_id);
+
+ mp = rte_mempool_create(softreq_pool_name,
+ RTE_ALIGN_MUL_CEIL(nobjs, 64),
+ sizeof(struct bcmfs_sym_request),
+ 64, 0, NULL, NULL, req_pool_obj_init, NULL,
+ socket_id, 0);
+ if (mp == NULL)
+ BCMFS_LOG(ERR, "Failed to create req pool, qid %d, err %d",
+ qp_id, rte_errno);
+
+ return mp;
+}
+
+static int
+bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
+ const struct rte_cryptodev_qp_conf *qp_conf,
+ int socket_id)
+{
+ int ret = 0;
+ struct bcmfs_qp *qp = NULL;
+ struct bcmfs_qp_config bcmfs_qp_conf;
+
+ struct bcmfs_qp **qp_addr =
+ (struct bcmfs_qp **)&cdev->data->queue_pairs[qp_id];
+ struct bcmfs_sym_dev_private *bcmfs_private = cdev->data->dev_private;
+ struct bcmfs_device *fsdev = bcmfs_private->fsdev;
+
+
+ /* If qp is already in use free ring memory and qp metadata. */
+ if (*qp_addr != NULL) {
+ ret = bcmfs_sym_qp_release(cdev, qp_id);
+ if (ret < 0)
+ return ret;
+ }
+
+ if (qp_id >= fsdev->max_hw_qps) {
+ BCMFS_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+ return -EINVAL;
+ }
+
+ bcmfs_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+ bcmfs_qp_conf.socket_id = socket_id;
+ bcmfs_qp_conf.max_descs_req = BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ;
+ bcmfs_qp_conf.iobase = BCMFS_QP_IOBASE_XLATE(fsdev->mmap_addr, qp_id);
+ bcmfs_qp_conf.ops = fsdev->sym_hw_qp_ops;
+
+ ret = bcmfs_qp_setup(qp_addr, qp_id, &bcmfs_qp_conf);
+ if (ret != 0)
+ return ret;
+
+ qp = (struct bcmfs_qp *)*qp_addr;
+
+ qp->sr_mp = bcmfs_sym_req_pool_create(cdev, qp_conf->nb_descriptors,
+ qp_id, socket_id);
+ if (qp->sr_mp == NULL)
+ return -ENOMEM;
+
+ /* store a link to the qp in the bcmfs_device */
+ bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;
+
+ cdev->data->queue_pairs[qp_id] = qp;
+ BCMFS_LOG(NOTICE, "queue %d setup done", qp_id);
+
+ return 0;
+}
+
+static struct rte_cryptodev_ops crypto_bcmfs_ops = {
+ /* Device related operations */
+ .dev_configure = bcmfs_sym_dev_config,
+ .dev_start = bcmfs_sym_dev_start,
+ .dev_stop = bcmfs_sym_dev_stop,
+ .dev_close = bcmfs_sym_dev_close,
+ .dev_infos_get = bcmfs_sym_dev_info_get,
+ /* Stats Collection */
+ .stats_get = bcmfs_sym_stats_get,
+ .stats_reset = bcmfs_sym_stats_reset,
+ /* Queue-Pair management */
+ .queue_pair_setup = bcmfs_sym_qp_setup,
+ .queue_pair_release = bcmfs_sym_qp_release,
+};
+
+/** Enqueue burst */
+static uint16_t
+bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i, j;
+ uint16_t enq = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
+
+ if (nb_ops == 0)
+ return 0;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ /* We do not process more than available space */
+ if (nb_ops > (qp->nb_descriptors - qp->nb_pending_requests))
+ nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
+
+ for (i = 0; i < nb_ops; i++) {
+ if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
+ goto enqueue_err;
+
+ /* save rte_crypto_op */
+ sreq->op = ops[i];
+
+ /* save context */
+ qp->infl_msgs[i] = &sreq->msgs;
+ qp->infl_msgs[i]->ctx = (void *)sreq;
+ }
+ /* Send burst request to hw QP */
+ enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
+
+ for (j = enq; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+
+enqueue_err:
+ for (j = 0; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+}
+
+static uint16_t
+bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i;
+ uint16_t deq = 0;
+ unsigned int pkts = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = queue_pair;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ deq = bcmfs_dequeue_op_burst(qp, (void **)qp->infl_msgs, nb_ops);
+ /* get rte_crypto_ops */
+ for (i = 0; i < deq; i++) {
+ sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+
+ ops[pkts++] = sreq->op;
+
+ rte_mempool_put(qp->sr_mp, sreq);
+ }
+
+ return pkts;
+}
+
+/*
+ * An rte_driver is needed in the registration of both the
+ * device and the driver with cryptodev.
+ */
+static const char bcmfs_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_BCMFS_SYM_PMD);
+static const struct rte_driver cryptodev_bcmfs_sym_driver = {
+ .name = bcmfs_sym_drv_name,
+ .alias = bcmfs_sym_drv_name
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .socket_id = rte_socket_id(),
+ .private_data_size = sizeof(struct bcmfs_sym_dev_private)
+ };
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct rte_cryptodev *cryptodev;
+ struct bcmfs_sym_dev_private *internals;
+
+ snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
+ fsdev->name, "sym");
+
+ /* Populate subset device to use in cryptodev device creation */
+ fsdev->sym_rte_dev.driver = &cryptodev_bcmfs_sym_driver;
+ fsdev->sym_rte_dev.numa_node = 0;
+ fsdev->sym_rte_dev.devargs = NULL;
+
+ cryptodev = rte_cryptodev_pmd_create(name,
+ &fsdev->sym_rte_dev,
+ &init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ fsdev->sym_rte_dev.name = cryptodev->data->name;
+ cryptodev->driver_id = cryptodev_bcmfs_driver_id;
+ cryptodev->dev_ops = &crypto_bcmfs_ops;
+
+ cryptodev->enqueue_burst = bcmfs_sym_pmd_enqueue_op_burst;
+ cryptodev->dequeue_burst = bcmfs_sym_pmd_dequeue_op_burst;
+
+ cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+
+ internals = cryptodev->data->dev_private;
+ internals->fsdev = fsdev;
+ fsdev->sym_dev = internals;
+
+ internals->sym_dev_id = cryptodev->data->dev_id;
+
+ BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
+ cryptodev->data->name, internals->sym_dev_id);
+ return 0;
+}
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev *cryptodev;
+
+ if (fsdev == NULL)
+ return -ENODEV;
+ if (fsdev->sym_dev == NULL)
+ return 0;
+
+ /* free crypto device */
+ cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id);
+ rte_cryptodev_pmd_destroy(cryptodev);
+ fsdev->sym_rte_dev.name = NULL;
+ fsdev->sym_dev = NULL;
+
+ return 0;
+}
+
+static struct cryptodev_driver bcmfs_crypto_drv;
+RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv,
+ cryptodev_bcmfs_sym_driver,
+ cryptodev_bcmfs_driver_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
new file mode 100644
index 000000000..65d704609
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_PMD_H_
+#define _BCMFS_SYM_PMD_H_
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_device.h"
+
+#define CRYPTODEV_NAME_BCMFS_SYM_PMD crypto_bcmfs
+
+#define BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ 16
+
+extern uint8_t cryptodev_bcmfs_driver_id;
+
+/** Private data structure for a BCMFS device.
+ * This BCMFS device offers only a symmetric crypto service;
+ * there can be one of these per bcmfs_device.
+ */
+struct bcmfs_sym_dev_private {
+ /* The bcmfs device hosting the service */
+ struct bcmfs_device *fsdev;
+ /* Device instance for this rte_cryptodev */
+ uint8_t sym_dev_id;
+ /* BCMFS device symmetric crypto capabilities */
+ const struct rte_cryptodev_capabilities *fsdev_capabilities;
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fdev);
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fdev);
+
+#endif /* _BCMFS_SYM_PMD_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
new file mode 100644
index 000000000..0f0b051f1
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_REQ_H_
+#define _BCMFS_SYM_REQ_H_
+
+#include "bcmfs_dev_msg.h"
+
+/*
+ * This structure holds the supporting data required to process an
+ * rte_crypto_op
+ */
+struct bcmfs_sym_request {
+ /* bcmfs qp message for h/w queues to process */
+ struct bcmfs_qp_message msgs;
+ /* crypto op */
+ struct rte_crypto_op *op;
+};
+
+#endif /* _BCMFS_SYM_REQ_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index cd58bd5e2..d9a3d73e9 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -11,5 +11,6 @@ sources = files(
'bcmfs_qp.c',
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
- 'hw/bcmfs_rm_common.c'
+ 'hw/bcmfs_rm_common.c',
+ 'bcmfs_sym_pmd.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v1 6/8] crypto/bcmfs: add session handling and capabilities
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add session handling and capabilities supported by crypto h/w
accelerator.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
doc/guides/cryptodevs/bcmfs.rst | 46 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 ++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 170 ++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 13 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 426 ++++++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 99 +++
drivers/crypto/bcmfs/meson.build | 4 +-
9 files changed, 1593 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index 752ce028a..2488b19f7 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -18,9 +18,55 @@ CONFIG_RTE_LIBRTE_PMD_BCMFS setting is set to `y` in config/common_base file.
* ``CONFIG_RTE_LIBRTE_PMD_BCMFS=y``
+Features
+~~~~~~~~
+
+The BCMFS SYM PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_3DES_CTR``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CTR``
+* ``RTE_CRYPTO_CIPHER_AES192_CTR``
+* ``RTE_CRYPTO_CIPHER_AES256_CTR``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_XCBC_MAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_GMAC``
+* ``RTE_CRYPTO_AUTH_AES_CMAC``
+
+Supported AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+* ``RTE_CRYPTO_AEAD_AES_CCM``
+
Initialization
--------------
The BCMFS crypto PMD depends upon the devices present in the path
/sys/bus/platform/devices/fs<version>/<dev_name> on the platform.
Each cryptodev PMD instance can be attached to one of the nodes
present in the mentioned path.
+
+Limitations
+~~~~~~~~~~~
+
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+* AES CCM is not supported on Broadcom's SoCs with a FlexSparc4 unit.
diff --git a/doc/guides/cryptodevs/features/bcmfs.ini b/doc/guides/cryptodevs/features/bcmfs.ini
new file mode 100644
index 000000000..82d2c639d
--- /dev/null
+++ b/doc/guides/cryptodevs/features/bcmfs.ini
@@ -0,0 +1,56 @@
+;
+; Supported features of the 'bcmfs' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto = Y
+Sym operation chaining = Y
+HW Accelerated = Y
+Protocol offload = Y
+In Place SGL = Y
+
+;
+; Supported crypto algorithms of the 'bcmfs' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+AES CTR (128) = Y
+AES CTR (192) = Y
+AES CTR (256) = Y
+AES XTS (128) = Y
+AES XTS (256) = Y
+3DES CBC = Y
+DES CBC = Y
+;
+; Supported authentication algorithms of the 'bcmfs' crypto driver.
+;
+[Auth]
+MD5 HMAC = Y
+SHA1 = Y
+SHA1 HMAC = Y
+SHA224 = Y
+SHA224 HMAC = Y
+SHA256 = Y
+SHA256 HMAC = Y
+SHA384 = Y
+SHA384 HMAC = Y
+SHA512 = Y
+SHA512 HMAC = Y
+AES GMAC = Y
+AES CMAC (128) = Y
+AES CBC = Y
+AES XCBC = Y
+
+;
+; Supported AEAD algorithms of the 'bcmfs' crypto driver.
+;
+[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
+AES CCM (128) = Y
+AES CCM (192) = Y
+AES CCM (256) = Y
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
new file mode 100644
index 000000000..bb8fa9f81
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
@@ -0,0 +1,764 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_sym_capabilities.h"
+
+static const struct rte_cryptodev_capabilities bcmfs_sym_capabilities[] = {
+ {
+ /* SHA1 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224,
+ .block_size = 144,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256,
+ .block_size = 136,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384,
+ .block_size = 104,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512,
+ .block_size = 72,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224_HMAC,
+ .block_size = 144,
+ .key_size = {
+ .min = 1,
+ .max = 144,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256_HMAC,
+ .block_size = 136,
+ .key_size = {
+ .min = 1,
+ .max = 136,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384_HMAC,
+ .block_size = 104,
+ .key_size = {
+ .min = 1,
+ .max = 104,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512_HMAC,
+ .block_size = 72,
+ .key_size = {
+ .min = 1,
+ .max = 72,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES XCBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES GMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_ECB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CTR */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CTR,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES XTS */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_XTS,
+ .block_size = 16,
+ .key_size = {
+ .min = 32,
+ .max = 64,
+ .increment = 32
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_CCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 16,
+ .increment = 2
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 7,
+ .max = 13,
+ .increment = 1
+ },
+ }, }
+ }, }
+ },
+
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+const struct rte_cryptodev_capabilities *
+bcmfs_sym_get_capabilities(void)
+{
+ return bcmfs_sym_capabilities;
+}
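The table returned here is terminated by RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST(), so consumers scan until they hit the sentinel entry. A minimal self-contained mock of that lookup pattern (local types, not the DPDK ones, which is roughly what rte_cryptodev_sym_capability_get() does internally):

```c
#include <assert.h>
#include <stddef.h>

/* Local mock of a sentinel-terminated capability table. */
enum op_type { OP_UNDEFINED = 0, OP_SYMMETRIC };
enum algo { ALGO_NONE = 0, ALGO_SHA1, ALGO_AES_CBC };

struct capability {
	enum op_type op;
	enum algo algo;
	int key_min, key_max;
};

static const struct capability caps[] = {
	{ OP_SYMMETRIC, ALGO_SHA1,    0,  0 },
	{ OP_SYMMETRIC, ALGO_AES_CBC, 16, 32 },
	{ OP_UNDEFINED, ALGO_NONE,    0,  0 },	/* end-of-list sentinel */
};

/* Scan until the sentinel entry; return NULL when the algo is absent. */
static const struct capability *
capability_get(enum algo algo)
{
	const struct capability *c;

	for (c = caps; c->op != OP_UNDEFINED; c++)
		if (c->algo == algo)
			return c;
	return NULL;
}
```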
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
new file mode 100644
index 000000000..3ff61b7d2
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_CAPABILITIES_H_
+#define _BCMFS_SYM_CAPABILITIES_H_
+
+#include <rte_cryptodev.h>
+
+/* Get the capabilities list for the device */
+const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void);
+
+#endif /* _BCMFS_SYM_CAPABILITIES_H_ */
+
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
new file mode 100644
index 000000000..b5657a9bc
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_DEFS_H_
+#define _BCMFS_SYM_DEFS_H_
+
+/*
+ * The maximum hash block size currently supported is 144 bytes
+ * (SHA3-224), which also bounds the maximum key size.
+ */
+#define BCMFS_MAX_KEY_SIZE 144
+#define BCMFS_MAX_IV_SIZE 16
+#define BCMFS_MAX_DIGEST_SIZE 64
+
+/** Symmetric Cipher Direction */
+enum bcmfs_crypto_cipher_op {
+ /** Encrypt cipher operation */
+ BCMFS_CRYPTO_CIPHER_OP_ENCRYPT,
+
+ /** Decrypt cipher operation */
+ BCMFS_CRYPTO_CIPHER_OP_DECRYPT,
+};
+
+/** Symmetric Cipher Algorithms */
+enum bcmfs_crypto_cipher_algorithm {
+ /** NULL cipher algorithm. No mode applies to the NULL algorithm. */
+ BCMFS_CRYPTO_CIPHER_NONE = 0,
+
+ /** DES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_DES_CBC,
+
+ /** DES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_DES_ECB,
+
+ /** Triple DES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_3DES_CBC,
+
+ /** Triple DES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_3DES_ECB,
+
+ /** AES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_AES_CBC,
+
+ /** AES algorithm in CCM mode. */
+ BCMFS_CRYPTO_CIPHER_AES_CCM,
+
+ /** AES algorithm in Counter mode */
+ BCMFS_CRYPTO_CIPHER_AES_CTR,
+
+ /** AES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_AES_ECB,
+
+ /** AES algorithm in GCM mode. */
+ BCMFS_CRYPTO_CIPHER_AES_GCM,
+
+ /** AES algorithm in XTS mode */
+ BCMFS_CRYPTO_CIPHER_AES_XTS,
+
+ /** AES algorithm in OFB mode */
+ BCMFS_CRYPTO_CIPHER_AES_OFB,
+};
+
+/** Symmetric Authentication Algorithms */
+enum bcmfs_crypto_auth_algorithm {
+ /** NULL hash algorithm. */
+ BCMFS_CRYPTO_AUTH_NONE = 0,
+
+ /** MD5 algorithm */
+ BCMFS_CRYPTO_AUTH_MD5,
+
+ /** MD5-HMAC algorithm */
+ BCMFS_CRYPTO_AUTH_MD5_HMAC,
+
+ /** SHA1 algorithm */
+ BCMFS_CRYPTO_AUTH_SHA1,
+
+ /** SHA1-HMAC algorithm */
+ BCMFS_CRYPTO_AUTH_SHA1_HMAC,
+
+ /** 224 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA224,
+
+ /** 224 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA224_HMAC,
+
+ /** 256 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA256,
+
+ /** 256 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA256_HMAC,
+
+ /** 384 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA384,
+
+ /** 384 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA384_HMAC,
+
+ /** 512 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA512,
+
+ /** 512 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA512_HMAC,
+
+ /** 224 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_224,
+
+ /** 224 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_224_HMAC,
+
+ /** 256 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_256,
+
+ /** 256 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_256_HMAC,
+
+ /** 384 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_384,
+
+ /** 384 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_384_HMAC,
+
+ /** 512 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_512,
+
+ /** 512 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_512_HMAC,
+
+ /** AES XCBC MAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_XCBC_MAC,
+
+ /** AES CMAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_CMAC,
+
+ /** AES CBC-MAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_CBC_MAC,
+
+ /** AES GMAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_GMAC,
+
+ /** AES algorithm in GCM mode. */
+ BCMFS_CRYPTO_AUTH_AES_GCM,
+
+ /** AES algorithm in CCM mode. */
+ BCMFS_CRYPTO_AUTH_AES_CCM,
+};
+
+/** Symmetric Authentication Operations */
+enum bcmfs_crypto_auth_op {
+ /** Verify authentication digest */
+ BCMFS_CRYPTO_AUTH_OP_VERIFY,
+
+ /** Generate authentication digest */
+ BCMFS_CRYPTO_AUTH_OP_GENERATE,
+};
+
+enum bcmfs_sym_crypto_class {
+ /** Cipher algorithm */
+ BCMFS_CRYPTO_CIPHER,
+
+ /** Hash algorithm */
+ BCMFS_CRYPTO_HASH,
+
+ /** Authenticated Encryption with Associated Data algorithm */
+ BCMFS_CRYPTO_AEAD,
+};
+
+#endif /* _BCMFS_SYM_DEFS_H_ */
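The SHA3 block sizes used throughout the capability table (144, 136, 104, 72 bytes) follow from the Keccak sponge rate, (1600 - 2 * digest_bits) / 8, which is also why BCMFS_MAX_KEY_SIZE is 144 (the SHA3-224 rate). A quick check of that relation:

```c
#include <assert.h>

/* SHA3 block (rate) in bytes: (1600 - 2 * digest_bits) / 8.
 * These values match the block_size fields in the capability table
 * and motivate BCMFS_MAX_KEY_SIZE == 144. */
static int sha3_block_bytes(int digest_bits)
{
	return (1600 - 2 * digest_bits) / 8;
}
```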
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 0f96915f7..381ca8ea4 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -14,6 +14,8 @@
#include "bcmfs_qp.h"
#include "bcmfs_sym_pmd.h"
#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+#include "bcmfs_sym_capabilities.h"
uint8_t cryptodev_bcmfs_driver_id;
@@ -65,6 +67,7 @@ bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
/* No limit of number of sessions */
dev_info->sym.max_nb_sessions = 0;
+ dev_info->capabilities = bcmfs_sym_get_capabilities();
}
}
@@ -228,6 +231,10 @@ static struct rte_cryptodev_ops crypto_bcmfs_ops = {
/* Queue-Pair management */
.queue_pair_setup = bcmfs_sym_qp_setup,
.queue_pair_release = bcmfs_sym_qp_release,
+ /* Crypto session related operations */
+ .sym_session_get_size = bcmfs_sym_session_get_private_size,
+ .sym_session_configure = bcmfs_sym_session_configure,
+ .sym_session_clear = bcmfs_sym_session_clear
};
/** Enqueue burst */
@@ -239,6 +246,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
int i, j;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
+ struct bcmfs_sym_session *sess;
struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
if (nb_ops == 0)
@@ -252,6 +260,10 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
for (i = 0; i < nb_ops; i++) {
+ sess = bcmfs_sym_get_session(ops[i]);
+ if (unlikely(sess == NULL))
+ goto enqueue_err;
+
if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
goto enqueue_err;
@@ -356,6 +368,7 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
fsdev->sym_dev = internals;
internals->sym_dev_id = cryptodev->data->dev_id;
+ internals->fsdev_capabilities = bcmfs_sym_get_capabilities();
BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
cryptodev->data->name, internals->sym_dev_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
new file mode 100644
index 000000000..3d1fce629
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -0,0 +1,426 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_crypto.h>
+#include <rte_crypto_sym.h>
+#include <rte_log.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_session.h"
+
+/** Determine the crypto chain order from an xform chain */
+static enum bcmfs_sym_chain_order
+crypto_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+ enum bcmfs_sym_chain_order res = BCMFS_SYM_CHAIN_NOT_SUPPORTED;
+
+
+ if (xform != NULL) {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ res = BCMFS_SYM_CHAIN_AEAD;
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_AUTH;
+ else if (xform->next->type ==
+ RTE_CRYPTO_SYM_XFORM_CIPHER)
+ res = BCMFS_SYM_CHAIN_AUTH_CIPHER;
+ }
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_CIPHER;
+ else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ res = BCMFS_SYM_CHAIN_CIPHER_AUTH;
+ }
+ }
+
+ return res;
+}
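The classification above reduces to a small decision on the first xform and its successor. A self-contained sketch of the same logic with simplified stand-in types (not the DPDK ones), useful for seeing which chains the session setup will accept:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the rte_crypto_sym_xform types. */
enum xform_type { XFORM_AUTH, XFORM_CIPHER, XFORM_AEAD };
enum chain_order { CHAIN_NOT_SUPPORTED, CHAIN_AEAD, CHAIN_ONLY_AUTH,
		   CHAIN_ONLY_CIPHER, CHAIN_CIPHER_AUTH, CHAIN_AUTH_CIPHER };

struct xform {
	enum xform_type type;
	struct xform *next;
};

/* Mirror of the chain-order walk: AEAD stands alone; AUTH or CIPHER
 * may stand alone or be chained with the other. */
static enum chain_order
get_chain_order(const struct xform *xf)
{
	if (xf == NULL)
		return CHAIN_NOT_SUPPORTED;
	if (xf->type == XFORM_AEAD)
		return CHAIN_AEAD;
	if (xf->type == XFORM_AUTH) {
		if (xf->next == NULL)
			return CHAIN_ONLY_AUTH;
		return xf->next->type == XFORM_CIPHER ?
		       CHAIN_AUTH_CIPHER : CHAIN_NOT_SUPPORTED;
	}
	/* cipher first */
	if (xf->next == NULL)
		return CHAIN_ONLY_CIPHER;
	return xf->next->type == XFORM_AUTH ?
	       CHAIN_CIPHER_AUTH : CHAIN_NOT_SUPPORTED;
}
```

For example, a cipher xform whose `next` points at an auth xform classifies as cipher-then-auth (the encrypt-then-MAC layout).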
+
+/* Get session cipher key from input cipher key */
+static void
+get_key(const uint8_t *input_key, int keylen, uint8_t *session_key)
+{
+ memcpy(session_key, input_key, keylen);
+}
+
+/* Set session cipher parameters */
+static int
+crypto_set_session_cipher_parameters
+ (struct bcmfs_sym_session *sess,
+ const struct rte_crypto_cipher_xform *cipher_xform)
+{
+ int rc = 0;
+
+ /* Select cipher direction */
+ sess->cipher.direction = cipher_xform->op;
+ sess->cipher.key.length = cipher_xform->key.length;
+ sess->cipher.iv.offset = cipher_xform->iv.offset;
+ sess->cipher.iv.length = cipher_xform->iv.length;
+
+ /* Select cipher algo */
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_3DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_3DES_ECB;
+ break;
+ case RTE_CRYPTO_CIPHER_DES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_ECB;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_XTS:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_XTS;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid cipher algorithm\n");
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_NONE;
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(cipher_xform->key.data,
+ sess->cipher.key.length,
+ sess->cipher.key.data);
+
+ return rc;
+}
+
+/* Set session auth parameters */
+static int
+crypto_set_session_auth_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_auth_xform
+ *auth_xform)
+{
+ int rc = 0;
+
+ /* Select auth generate/verify */
+ sess->auth.operation = auth_xform->op ?
+ BCMFS_CRYPTO_AUTH_OP_GENERATE :
+ BCMFS_CRYPTO_AUTH_OP_VERIFY;
+ sess->auth.key.length = auth_xform->key.length;
+ sess->auth.digest_length = auth_xform->digest_length;
+ sess->auth.iv.length = auth_xform->iv.length;
+ sess->auth.iv.offset = auth_xform->iv.offset;
+
+ /* Select auth algo */
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_MD5:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_MD5;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_512;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_MD5_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA1_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA224_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_224_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_XCBC_MAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_GMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CBC_MAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CMAC;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid auth algorithm\n");
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(auth_xform->key.data,
+ auth_xform->key.length,
+ sess->auth.key.data);
+
+ return rc;
+}
+
+/* Set session aead parameters */
+static int
+crypto_set_session_aead_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+
+ sess->cipher.direction = xform->aead.op;
+ sess->cipher.iv.offset = xform->aead.iv.offset;
+ sess->cipher.iv.length = xform->aead.iv.length;
+ sess->aead.aad_length = xform->aead.aad_length;
+ sess->cipher.key.length = xform->aead.key.length;
+ sess->auth.digest_length = xform->aead.digest_length;
+
+ /* Select aead algo */
+ switch (xform->aead.algo) {
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CCM;
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_AES_GCM:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_GCM;
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_GCM;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid aead algorithm\n");
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(xform->aead.key.data,
+ xform->aead.key.length,
+ sess->cipher.key.data);
+
+ return rc;
+}
+
+static struct rte_crypto_auth_xform *
+crypto_get_auth_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ return &xform->auth;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+crypto_get_cipher_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return &xform->cipher;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+
+/** Parse crypto xform chain and set private session parameters */
+static int
+crypto_set_session_parameters(struct bcmfs_sym_session *sess,
+ struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+ struct rte_crypto_cipher_xform *cipher_xform =
+ crypto_get_cipher_xform(xform);
+ struct rte_crypto_auth_xform *auth_xform =
+ crypto_get_auth_xform(xform);
+
+ sess->chain_order = crypto_get_chain_order(xform);
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ sess->cipher_first = false;
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ sess->cipher_first = true;
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ if (crypto_set_session_aead_parameters(sess,
+ xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid aead");
+ rc = -EINVAL;
+ }
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid chain order\n");
+ rc = -EINVAL;
+ break;
+ }
+
+error:
+ return rc;
+}
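The dispatch above keys entirely on the chain order derived from the xform list by crypto_get_chain_order(), which is not shown in this hunk. As a rough standalone sketch of that classification (all type and helper names here are illustrative stand-ins, only the chain-order names mirror the driver's enum):

```c
#include <assert.h>
#include <stddef.h>

enum xform_type { XFORM_CIPHER, XFORM_AUTH, XFORM_AEAD };

enum chain_order {
	CHAIN_ONLY_CIPHER,
	CHAIN_ONLY_AUTH,
	CHAIN_CIPHER_AUTH,
	CHAIN_AUTH_CIPHER,
	CHAIN_AEAD,
	CHAIN_NOT_SUPPORTED
};

struct xform {
	enum xform_type type;
	struct xform *next;
};

/* Classify a one- or two-element xform chain the way the driver's
 * session setup expects: single cipher, single auth, cipher-then-auth,
 * auth-then-cipher, or AEAD; anything else is unsupported.
 */
static enum chain_order
get_chain_order(const struct xform *xf)
{
	if (xf == NULL)
		return CHAIN_NOT_SUPPORTED;

	if (xf->next == NULL) {
		switch (xf->type) {
		case XFORM_CIPHER:
			return CHAIN_ONLY_CIPHER;
		case XFORM_AUTH:
			return CHAIN_ONLY_AUTH;
		case XFORM_AEAD:
			return CHAIN_AEAD;
		}
	} else if (xf->type == XFORM_CIPHER && xf->next->type == XFORM_AUTH) {
		return CHAIN_CIPHER_AUTH;
	} else if (xf->type == XFORM_AUTH && xf->next->type == XFORM_CIPHER) {
		return CHAIN_AUTH_CIPHER;
	}

	return CHAIN_NOT_SUPPORTED;
}
```

Note how the chained cases determine both the session's chain_order and the cipher_first flag used later by the request builder.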
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op)
+{
+ struct bcmfs_sym_session *sess = NULL;
+
+ if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
+ BCMFS_DP_LOG(ERR, "operation op(%p) is sessionless", op);
+ } else if (likely(op->sym->session != NULL)) {
+ /* get existing session */
+ sess = (struct bcmfs_sym_session *)
+ get_sym_session_private_data(op->sym->session,
+ cryptodev_bcmfs_driver_id);
+ }
+
+ if (sess == NULL)
+ op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+
+ return sess;
+}
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ int ret;
+
+ if (unlikely(sess == NULL)) {
+ BCMFS_DP_LOG(ERR, "Invalid session struct");
+ return -EINVAL;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ BCMFS_DP_LOG(ERR,
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ ret = crypto_set_session_parameters(sess_private_data, xform);
+
+ if (ret != 0) {
+ BCMFS_DP_LOG(ERR, "Failed to configure session parameters");
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sym_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
+}
+
+/* Clear the memory of session so it doesn't leave key material behind */
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t index = dev->driver_id;
+ void *sess_priv = get_sym_session_private_data(sess, index);
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp;
+
+ memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
+ sess_mp = rte_mempool_from_obj(sess_priv);
+
+ set_sym_session_private_data(sess, index, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+}
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct bcmfs_sym_session);
+}
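bcmfs_sym_session_clear() deliberately memsets the private data before returning the object to the mempool, because the pool recycles the same memory for future sessions and key material must not leak across them. A toy sketch of the idea (all names here are hypothetical stand-ins, not DPDK APIs):

```c
#include <assert.h>
#include <string.h>

struct toy_session {
	unsigned char key[32];
	size_t key_len;
};

/* Wipe the whole session object, key bytes included, before the
 * backing memory is handed back to a shared pool.
 */
static void
toy_session_clear(struct toy_session *s)
{
	memset(s, 0, sizeof(*s));
}
```

In hardened code a wipe that the compiler cannot elide (e.g. explicit_bzero on platforms that provide it) is often preferred over plain memset; the driver's pattern of clear-then-put is the essential part.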
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
new file mode 100644
index 000000000..43deedcf8
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_SESSION_H_
+#define _BCMFS_SYM_SESSION_H_
+
+#include <stdbool.h>
+#include <rte_crypto.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* BCMFS_SYM operation order mode enumerator */
+enum bcmfs_sym_chain_order {
+ BCMFS_SYM_CHAIN_ONLY_CIPHER,
+ BCMFS_SYM_CHAIN_ONLY_AUTH,
+ BCMFS_SYM_CHAIN_CIPHER_AUTH,
+ BCMFS_SYM_CHAIN_AUTH_CIPHER,
+ BCMFS_SYM_CHAIN_AEAD,
+ BCMFS_SYM_CHAIN_NOT_SUPPORTED
+};
+
+/* BCMFS_SYM crypto private session structure */
+struct bcmfs_sym_session {
+ enum bcmfs_sym_chain_order chain_order;
+
+ /* Cipher Parameters */
+ struct {
+ enum bcmfs_crypto_cipher_op direction;
+ /* cipher operation direction */
+ enum bcmfs_crypto_cipher_algorithm algo;
+ /* cipher algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ /* key data */
+ size_t length;
+ /* key length in bytes */
+ } key;
+
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+ } cipher;
+
+ /* Authentication Parameters */
+ struct {
+ enum bcmfs_crypto_auth_op operation;
+ /* auth operation generate or verify */
+ enum bcmfs_crypto_auth_algorithm algo;
+ /* auth algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ /* key data */
+ size_t length;
+ /* key length in bytes */
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+ } auth;
+
+ /* aead Parameters */
+ struct {
+ uint16_t aad_length;
+ } aead;
+ bool cipher_first;
+} __rte_cache_aligned;
+
+int
+bcmfs_process_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool);
+
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess);
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op);
+
+#endif /* _BCMFS_SYM_SESSION_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index d9a3d73e9..2e86c733e 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -12,5 +12,7 @@ sources = files(
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
'hw/bcmfs_rm_common.c',
- 'bcmfs_sym_pmd.c'
+ 'bcmfs_sym_pmd.c',
+ 'bcmfs_sym_capabilities.c',
+ 'bcmfs_sym_session.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v1 7/8] crypto/bcmfs: add crypto h/w module
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (5 preceding siblings ...)
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
@ 2020-08-12 6:31 ` Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add crypto h/w module to process crypto op. Crypto op is processed via
sym_engine module before submitting the crypto request to h/w queues.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 +++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 26 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 40 +
drivers/crypto/bcmfs/meson.build | 4 +-
7 files changed, 1498 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
diff --git a/drivers/crypto/bcmfs/bcmfs_sym.c b/drivers/crypto/bcmfs/bcmfs_sym.c
new file mode 100644
index 000000000..8f9415b5e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym.c
@@ -0,0 +1,316 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+
+#include <rte_byteorder.h>
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_engine.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+
+/** Process cipher operation */
+static int
+process_crypto_cipher_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv, key;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+ fsattr_pa(&dst) = rte_pktmbuf_iova(mbuf_dst);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&key) = sess->cipher.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->cipher.key.length;
+
+ rc = bcmfs_crypto_build_cipher_req(req, sess->cipher.algo,
+ sess->cipher.direction, &src,
+ &dst, &key, &iv);
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process auth operation */
+static int
+process_crypto_auth_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, mac, key;
+
+ fsattr_sz(&src) = op->sym->auth.data.length;
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ op->sym->auth.data.offset);
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+
+ if (!sess->auth.operation) {
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+ } else {
+ fsattr_va(&dst) = op->sym->auth.digest.data;
+ fsattr_pa(&dst) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&dst) = sess->auth.digest_length;
+ }
+
+ fsattr_va(&key) = sess->auth.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->auth.key.length;
+
+ /* AES-GMAC uses AES-GCM-128 authenticator */
+ if (sess->auth.algo == BCMFS_CRYPTO_AUTH_AES_GMAC) {
+ struct fsattr iv;
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->auth.iv.offset);
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->auth.iv.length;
+
+ rc = bcmfs_crypto_build_aead_request(req,
+ BCMFS_CRYPTO_CIPHER_NONE,
+ 0,
+ BCMFS_CRYPTO_AUTH_AES_GMAC,
+ sess->auth.operation,
+ &src, NULL, NULL, &key,
+ &iv, NULL,
+ sess->auth.operation ?
+ (&dst) : &(mac),
+ 0);
+ } else {
+ rc = bcmfs_crypto_build_auth_req(req, sess->auth.algo,
+ sess->auth.operation,
+ &src,
+ (sess->auth.operation) ? (&dst) : NULL,
+ (sess->auth.operation) ? NULL : (&mac),
+ &key);
+ }
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process combined/chained mode operation */
+static int
+process_crypto_combined_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0, aad_size = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->cipher.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ aad_size = sym_op->auth.data.length - sym_op->cipher.data.length;
+
+ if (aad_size > 0) {
+ fsattr_sz(&aad) = aad_size;
+ fsattr_va(&aad) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->auth.data.offset);
+ fsattr_pa(&aad) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->auth.data.offset);
+ }
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->cipher.algo,
+ sess->cipher.direction,
+ sess->auth.algo,
+ sess->auth.operation,
+ &src, &dst, &cipher_key,
+ &auth_key, &iv,
+ (aad_size > 0) ? (&aad) : NULL,
+ &mac, sess->cipher_first);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
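In the chained path above, the AAD length is simply the part of the authenticated region that is not covered by the cipher region: aad_size = auth.data.length - cipher.data.length. A minimal standalone restatement of that arithmetic (the struct and helper names are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>

struct region {
	uint32_t offset;
	uint32_t length;
};

/* Bytes authenticated but not encrypted are treated as AAD; a
 * non-positive result means there is no separate AAD to pass on.
 */
static int32_t
aad_size(const struct region *auth, const struct region *cipher)
{
	return (int32_t)auth->length - (int32_t)cipher->length;
}
```

This matches the driver's guard: the aad attribute is only filled in and handed to the request builder when the computed size is greater than zero.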
+
+/** Process AEAD operation */
+static int
+process_crypto_aead_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+ enum bcmfs_crypto_cipher_op cipher_op;
+ enum bcmfs_crypto_auth_op auth_op;
+
+ if (sess->cipher.direction) {
+ auth_op = BCMFS_CRYPTO_AUTH_OP_VERIFY;
+ cipher_op = BCMFS_CRYPTO_CIPHER_OP_DECRYPT;
+ } else {
+ auth_op = BCMFS_CRYPTO_AUTH_OP_GENERATE;
+ cipher_op = BCMFS_CRYPTO_CIPHER_OP_ENCRYPT;
+ }
+
+ fsattr_sz(&src) = sym_op->aead.data.length;
+ fsattr_sz(&dst) = sym_op->aead.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->aead.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->aead.digest.data;
+ fsattr_pa(&mac) = op->sym->aead.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ fsattr_va(&aad) = op->sym->aead.aad.data;
+ fsattr_pa(&aad) = op->sym->aead.aad.phys_addr;
+ fsattr_sz(&aad) = sess->aead.aad_length;
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->cipher.algo,
+ cipher_op, sess->auth.algo,
+ auth_op, &src, &dst, &cipher_key,
+ &auth_key, &iv, &aad, &mac,
+ sess->cipher.direction ? 0 : 1);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process crypto operation for mbuf */
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ struct rte_mbuf *msrc, *mdst;
+ int rc = 0;
+
+ msrc = op->sym->m_src;
+ mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ rc = process_crypto_cipher_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ rc = process_crypto_auth_op(op, msrc, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ rc = process_crypto_combined_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ rc = process_crypto_aead_op(op, msrc, mdst, sess, req);
+ break;
+ default:
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ break;
+ }
+
+ return rc;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
index b5657a9bc..8824521dd 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_defs.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -15,6 +15,18 @@
#define BCMFS_MAX_IV_SIZE 16
#define BCMFS_MAX_DIGEST_SIZE 64
+struct bcmfs_sym_session;
+struct bcmfs_sym_request;
+
+/** Crypto Request processing successful. */
+#define BCMFS_SYM_RESPONSE_SUCCESS (0)
+/** Crypto Request processing protocol failure. */
+#define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1)
+/** Crypto Request processing completion failure. */
+#define BCMFS_SYM_RESPONSE_COMPL_ERROR (2)
+/** Crypto Request processing hash tag check error. */
+#define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3)
+
/** Symmetric Cipher Direction */
enum bcmfs_crypto_cipher_op {
/** Encrypt cipher operation */
@@ -167,4 +179,8 @@ enum bcmfs_sym_crypto_class {
BCMFS_CRYPTO_AEAD,
};
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
#endif /* _BCMFS_SYM_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
new file mode 100644
index 000000000..b8cf3eab9
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
@@ -0,0 +1,994 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_engine.h"
+
+enum spu2_cipher_type {
+ SPU2_CIPHER_TYPE_NONE = 0x0,
+ SPU2_CIPHER_TYPE_AES128 = 0x1,
+ SPU2_CIPHER_TYPE_AES192 = 0x2,
+ SPU2_CIPHER_TYPE_AES256 = 0x3,
+ SPU2_CIPHER_TYPE_DES = 0x4,
+ SPU2_CIPHER_TYPE_3DES = 0x5,
+ SPU2_CIPHER_TYPE_LAST
+};
+
+enum spu2_cipher_mode {
+ SPU2_CIPHER_MODE_ECB = 0x0,
+ SPU2_CIPHER_MODE_CBC = 0x1,
+ SPU2_CIPHER_MODE_CTR = 0x2,
+ SPU2_CIPHER_MODE_CFB = 0x3,
+ SPU2_CIPHER_MODE_OFB = 0x4,
+ SPU2_CIPHER_MODE_XTS = 0x5,
+ SPU2_CIPHER_MODE_CCM = 0x6,
+ SPU2_CIPHER_MODE_GCM = 0x7,
+ SPU2_CIPHER_MODE_LAST
+};
+
+enum spu2_hash_type {
+ SPU2_HASH_TYPE_NONE = 0x0,
+ SPU2_HASH_TYPE_AES128 = 0x1,
+ SPU2_HASH_TYPE_AES192 = 0x2,
+ SPU2_HASH_TYPE_AES256 = 0x3,
+ SPU2_HASH_TYPE_MD5 = 0x6,
+ SPU2_HASH_TYPE_SHA1 = 0x7,
+ SPU2_HASH_TYPE_SHA224 = 0x8,
+ SPU2_HASH_TYPE_SHA256 = 0x9,
+ SPU2_HASH_TYPE_SHA384 = 0xa,
+ SPU2_HASH_TYPE_SHA512 = 0xb,
+ SPU2_HASH_TYPE_SHA512_224 = 0xc,
+ SPU2_HASH_TYPE_SHA512_256 = 0xd,
+ SPU2_HASH_TYPE_SHA3_224 = 0xe,
+ SPU2_HASH_TYPE_SHA3_256 = 0xf,
+ SPU2_HASH_TYPE_SHA3_384 = 0x10,
+ SPU2_HASH_TYPE_SHA3_512 = 0x11,
+ SPU2_HASH_TYPE_LAST
+};
+
+enum spu2_hash_mode {
+ SPU2_HASH_MODE_CMAC = 0x0,
+ SPU2_HASH_MODE_CBC_MAC = 0x1,
+ SPU2_HASH_MODE_XCBC_MAC = 0x2,
+ SPU2_HASH_MODE_HMAC = 0x3,
+ SPU2_HASH_MODE_RABIN = 0x4,
+ SPU2_HASH_MODE_CCM = 0x5,
+ SPU2_HASH_MODE_GCM = 0x6,
+ SPU2_HASH_MODE_RESERVED = 0x7,
+ SPU2_HASH_MODE_LAST
+};
+
+enum spu2_proto_sel {
+ SPU2_PROTO_RESV = 0,
+ SPU2_MACSEC_SECTAG8_ECB = 1,
+ SPU2_MACSEC_SECTAG8_SCB = 2,
+ SPU2_MACSEC_SECTAG16 = 3,
+ SPU2_MACSEC_SECTAG16_8_XPN = 4,
+ SPU2_IPSEC = 5,
+ SPU2_IPSEC_ESN = 6,
+ SPU2_TLS_CIPHER = 7,
+ SPU2_TLS_AEAD = 8,
+ SPU2_DTLS_CIPHER = 9,
+ SPU2_DTLS_AEAD = 10
+};
+
+/* SPU2 response size */
+#define SPU2_STATUS_LEN 2
+
+/* Metadata settings in response */
+enum spu2_ret_md_opts {
+ SPU2_RET_NO_MD = 0, /* return no metadata */
+ SPU2_RET_FMD_OMD = 1, /* return both FMD and OMD */
+ SPU2_RET_FMD_ONLY = 2, /* return only FMD */
+ SPU2_RET_FMD_OMD_IV = 3, /* return FMD and OMD with just IVs */
+};
+
+/* FMD ctrl0 field masks */
+#define SPU2_CIPH_ENCRYPT_EN 0x1 /* 0: decrypt, 1: encrypt */
+#define SPU2_CIPH_TYPE_SHIFT 4
+#define SPU2_CIPH_MODE 0xF00 /* one of spu2_cipher_mode */
+#define SPU2_CIPH_MODE_SHIFT 8
+#define SPU2_CFB_MASK 0x7000 /* cipher feedback mask */
+#define SPU2_CFB_MASK_SHIFT 12
+#define SPU2_PROTO_SEL 0xF00000 /* MACsec, IPsec, TLS... */
+#define SPU2_PROTO_SEL_SHIFT 20
+#define SPU2_HASH_FIRST 0x1000000 /* 1: hash input is input pkt
+ * data
+ */
+#define SPU2_CHK_TAG 0x2000000 /* 1: check digest provided */
+#define SPU2_HASH_TYPE 0x1F0000000 /* one of spu2_hash_type */
+#define SPU2_HASH_TYPE_SHIFT 28
+#define SPU2_HASH_MODE 0xF000000000 /* one of spu2_hash_mode */
+#define SPU2_HASH_MODE_SHIFT 36
+#define SPU2_CIPH_PAD_EN 0x100000000000 /* 1: Add pad to end of payload for
+ * enc
+ */
+#define SPU2_CIPH_PAD 0xFF000000000000 /* cipher pad value */
+#define SPU2_CIPH_PAD_SHIFT 48
+
+/* FMD ctrl1 field masks */
+#define SPU2_TAG_LOC 0x1 /* 1: end of payload, 0: undef */
+#define SPU2_HAS_FR_DATA 0x2 /* 1: msg has frame data */
+#define SPU2_HAS_AAD1 0x4 /* 1: msg has AAD1 field */
+#define SPU2_HAS_NAAD 0x8 /* 1: msg has NAAD field */
+#define SPU2_HAS_AAD2 0x10 /* 1: msg has AAD2 field */
+#define SPU2_HAS_ESN 0x20 /* 1: msg has ESN field */
+#define SPU2_HASH_KEY_LEN 0xFF00 /* len of hash key in bytes.
+ * HMAC only.
+ */
+#define SPU2_HASH_KEY_LEN_SHIFT 8
+#define SPU2_CIPH_KEY_LEN 0xFF00000 /* len of cipher key in bytes */
+#define SPU2_CIPH_KEY_LEN_SHIFT 20
+#define SPU2_GENIV 0x10000000 /* 1: hw generates IV */
+#define SPU2_HASH_IV 0x20000000 /* 1: IV incl in hash */
+#define SPU2_RET_IV 0x40000000 /* 1: return IV in output msg
+ * b4 payload
+ */
+#define SPU2_RET_IV_LEN 0xF00000000 /* length in bytes of IV returned.
+ * 0 = 16 bytes
+ */
+#define SPU2_RET_IV_LEN_SHIFT 32
+#define SPU2_IV_OFFSET 0xF000000000 /* gen IV offset */
+#define SPU2_IV_OFFSET_SHIFT 36
+#define SPU2_IV_LEN 0x1F0000000000 /* length of input IV in bytes */
+#define SPU2_IV_LEN_SHIFT 40
+#define SPU2_HASH_TAG_LEN 0x7F000000000000 /* hash tag length in bytes */
+#define SPU2_HASH_TAG_LEN_SHIFT 48
+#define SPU2_RETURN_MD 0x300000000000000 /* return metadata */
+#define SPU2_RETURN_MD_SHIFT 56
+#define SPU2_RETURN_FD 0x400000000000000
+#define SPU2_RETURN_AAD1 0x800000000000000
+#define SPU2_RETURN_NAAD 0x1000000000000000
+#define SPU2_RETURN_AAD2 0x2000000000000000
+#define SPU2_RETURN_PAY 0x4000000000000000 /* return payload */
+
+/* FMD ctrl2 field masks */
+#define SPU2_AAD1_OFFSET 0xFFF /* byte offset of AAD1 field */
+#define SPU2_AAD1_LEN 0xFF000 /* length of AAD1 in bytes */
+#define SPU2_AAD1_LEN_SHIFT 12
+#define SPU2_AAD2_OFFSET 0xFFF00000 /* byte offset of AAD2 field */
+#define SPU2_AAD2_OFFSET_SHIFT 20
+#define SPU2_PL_OFFSET 0xFFFFFFFF00000000 /* payload offset from AAD2 */
+#define SPU2_PL_OFFSET_SHIFT 32
+
+/* FMD ctrl3 field masks */
+#define SPU2_PL_LEN 0xFFFFFFFF /* payload length in bytes */
+#define SPU2_TLS_LEN 0xFFFF00000000 /* TLS encrypt: cipher len
+ * TLS decrypt: compressed len
+ */
+#define SPU2_TLS_LEN_SHIFT 32
+
+/*
+ * Max value that can be represented in the Payload Length field of the
+ * ctrl3 word of FMD.
+ */
+#define SPU2_MAX_PAYLOAD SPU2_PL_LEN
+
+#define SPU2_VAL_NONE 0
+
+/* CCM B_0 field definitions, common for SPU-M and SPU2 */
+#define CCM_B0_ADATA 0x40
+#define CCM_B0_ADATA_SHIFT 6
+#define CCM_B0_M_PRIME 0x38
+#define CCM_B0_M_PRIME_SHIFT 3
+#define CCM_B0_L_PRIME 0x07
+#define CCM_B0_L_PRIME_SHIFT 0
+#define CCM_ESP_L_VALUE 4
+
+static int
+spu2_cipher_type_xlate(enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ enum spu2_cipher_type *spu2_type,
+ struct fsattr *key)
+{
+ int ret = 0;
+ int key_size = fsattr_sz(key);
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_XTS)
+ key_size = key_size / 2;
+
+ switch (key_size) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
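The translation above derives the SPU2 AES type purely from the key length in bytes, with the one twist that AES-XTS carries two concatenated keys, so its length is halved before the lookup. A self-contained mimic of that mapping (constant and function names are illustrative; the 16/24/32-byte values are the standard AES-128/192/256 key sizes):

```c
#include <assert.h>
#include <stddef.h>

enum aes_type { AES_NONE = 0, AES128, AES192, AES256 };

/* Map a key length to an AES width; for XTS the supplied key is
 * key1 || key2, so each half determines the width.
 */
static int
aes_type_from_key_len(size_t key_len, int is_xts, enum aes_type *out)
{
	if (is_xts)
		key_len /= 2;

	switch (key_len) {
	case 16: *out = AES128; return 0;
	case 24: *out = AES192; return 0;
	case 32: *out = AES256; return 0;
	default: return -1;
	}
}
```

So a 64-byte XTS key selects the AES-256 engine, while an unexpected length is rejected the same way the driver returns -EINVAL.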
+
+static int
+spu2_hash_xlate(enum bcmfs_crypto_auth_algorithm auth_alg,
+ struct fsattr *key,
+ enum spu2_hash_type *spu2_type,
+ enum spu2_hash_mode *spu2_mode)
+{
+ *spu2_mode = 0;
+
+ switch (auth_alg) {
+ case BCMFS_CRYPTO_AUTH_NONE:
+ *spu2_type = SPU2_HASH_TYPE_NONE;
+ break;
+ case BCMFS_CRYPTO_AUTH_MD5:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ break;
+ case BCMFS_CRYPTO_AUTH_MD5_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA1:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA1_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA224:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA256:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA384:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA512:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_224:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_256:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_384:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_512:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_XCBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_XCBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CMAC:
+ *spu2_mode = SPU2_HASH_MODE_CMAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_GMAC:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_CBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_GCM:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CCM:
+ *spu2_mode = SPU2_HASH_MODE_CCM;
+ break;
+ }
+
+ return 0;
+}
+
+static int
+spu2_cipher_xlate(enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ struct fsattr *key,
+ enum spu2_cipher_type *spu2_type,
+ enum spu2_cipher_mode *spu2_mode)
+{
+ int ret = 0;
+
+ switch (cipher_alg) {
+ case BCMFS_CRYPTO_CIPHER_NONE:
+ *spu2_type = SPU2_CIPHER_TYPE_NONE;
+ break;
+ case BCMFS_CRYPTO_CIPHER_DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_3DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_3DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CTR:
+ *spu2_mode = SPU2_CIPHER_MODE_CTR;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CCM:
+ *spu2_mode = SPU2_CIPHER_MODE_CCM;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_GCM:
+ *spu2_mode = SPU2_CIPHER_MODE_GCM;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_XTS:
+ *spu2_mode = SPU2_CIPHER_MODE_XTS;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_OFB:
+ *spu2_mode = SPU2_CIPHER_MODE_OFB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ }
+
+ return ret;
+}
+
+static void
+spu2_fmd_ctrl0_write(struct spu2_fmd *fmd,
+ bool is_inbound, bool auth_first,
+ enum spu2_proto_sel protocol,
+ enum spu2_cipher_type cipher_type,
+ enum spu2_cipher_mode cipher_mode,
+ enum spu2_hash_type auth_type,
+ enum spu2_hash_mode auth_mode)
+{
+ uint64_t ctrl0 = 0;
+
+ if (cipher_type != SPU2_CIPHER_TYPE_NONE && !is_inbound)
+ ctrl0 |= SPU2_CIPH_ENCRYPT_EN;
+
+ ctrl0 |= ((uint64_t)cipher_type << SPU2_CIPH_TYPE_SHIFT) |
+ ((uint64_t)cipher_mode << SPU2_CIPH_MODE_SHIFT);
+
+ if (protocol != SPU2_PROTO_RESV)
+ ctrl0 |= (uint64_t)protocol << SPU2_PROTO_SEL_SHIFT;
+
+ if (auth_first)
+ ctrl0 |= SPU2_HASH_FIRST;
+
+ if (is_inbound && auth_type != SPU2_HASH_TYPE_NONE)
+ ctrl0 |= SPU2_CHK_TAG;
+
+ ctrl0 |= (((uint64_t)auth_type << SPU2_HASH_TYPE_SHIFT) |
+ ((uint64_t)auth_mode << SPU2_HASH_MODE_SHIFT));
+
+ fmd->ctrl0 = ctrl0;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl0:", &fmd->ctrl0, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl1_write(struct spu2_fmd *fmd, bool is_inbound,
+ uint64_t assoc_size, uint64_t auth_key_len,
+ uint64_t cipher_key_len, bool gen_iv, bool hash_iv,
+ bool return_iv, uint64_t ret_iv_len,
+ uint64_t ret_iv_offset, uint64_t cipher_iv_len,
+ uint64_t digest_size, bool return_payload, bool return_md)
+{
+ uint64_t ctrl1 = 0;
+
+ if (is_inbound && digest_size != 0)
+ ctrl1 |= SPU2_TAG_LOC;
+
+ if (assoc_size != 0)
+ ctrl1 |= SPU2_HAS_AAD2;
+
+ if (auth_key_len != 0)
+ ctrl1 |= ((auth_key_len << SPU2_HASH_KEY_LEN_SHIFT) &
+ SPU2_HASH_KEY_LEN);
+
+ if (cipher_key_len != 0)
+ ctrl1 |= ((cipher_key_len << SPU2_CIPH_KEY_LEN_SHIFT) &
+ SPU2_CIPH_KEY_LEN);
+
+ if (gen_iv)
+ ctrl1 |= SPU2_GENIV;
+
+ if (hash_iv)
+ ctrl1 |= SPU2_HASH_IV;
+
+ if (return_iv) {
+ ctrl1 |= SPU2_RET_IV;
+ ctrl1 |= ret_iv_len << SPU2_RET_IV_LEN_SHIFT;
+ ctrl1 |= ret_iv_offset << SPU2_IV_OFFSET_SHIFT;
+ }
+
+ ctrl1 |= ((cipher_iv_len << SPU2_IV_LEN_SHIFT) & SPU2_IV_LEN);
+
+ if (digest_size != 0) {
+ ctrl1 |= ((digest_size << SPU2_HASH_TAG_LEN_SHIFT) &
+ SPU2_HASH_TAG_LEN);
+ }
+
+ /*
+ * Ask for the output packet to include the FMD, but there is no
+ * need to get keys and IVs back in the OMD.
+ */
+ if (return_md)
+ ctrl1 |= ((uint64_t)SPU2_RET_FMD_ONLY << SPU2_RETURN_MD_SHIFT);
+ else
+ ctrl1 |= ((uint64_t)SPU2_RET_NO_MD << SPU2_RETURN_MD_SHIFT);
+
+ /* Crypto API does not get assoc data back. So no need for AAD2. */
+
+ if (return_payload)
+ ctrl1 |= SPU2_RETURN_PAY;
+
+ fmd->ctrl1 = ctrl1;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl1:", &fmd->ctrl1, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl2_write(struct spu2_fmd *fmd, uint64_t cipher_offset,
+ uint64_t auth_key_len __rte_unused,
+ uint64_t auth_iv_len __rte_unused,
+ uint64_t cipher_key_len __rte_unused,
+ uint64_t cipher_iv_len __rte_unused)
+{
+ uint64_t aad1_offset;
+ uint64_t aad2_offset;
+ uint16_t aad1_len = 0;
+ uint64_t payload_offset;
+
+ /* AAD1 offset is from the start of the FD; FD length is always 0. */
+ aad1_offset = 0;
+
+ aad2_offset = aad1_offset;
+ payload_offset = cipher_offset;
+ fmd->ctrl2 = aad1_offset |
+ (aad1_len << SPU2_AAD1_LEN_SHIFT) |
+ (aad2_offset << SPU2_AAD2_OFFSET_SHIFT) |
+ (payload_offset << SPU2_PL_OFFSET_SHIFT);
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl2:", &fmd->ctrl2, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl3_write(struct spu2_fmd *fmd, uint64_t payload_len)
+{
+ fmd->ctrl3 = payload_len & SPU2_PL_LEN;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl3:", &fmd->ctrl3, sizeof(uint64_t));
+#endif
+}
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_auth_algorithm a_alg,
+ enum bcmfs_crypto_auth_op auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *auth_key)
+{
+ int ret;
+ uint64_t dst_size;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ enum spu2_hash_mode spu2_auth_mode;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ? fsattr_sz(auth_key) : 0;
+ bool is_inbound = (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ /* at least one of dst and mac must be non-NULL */
+ if (dst == NULL && mac == NULL)
+ return -EINVAL;
+
+ dst_size = (auth_op == BCMFS_CRYPTO_AUTH_OP_GENERATE) ?
+ fsattr_sz(dst) : fsattr_sz(mac);
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(a_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, SPU2_VAL_NONE,
+ auth_ksize, SPU2_VAL_NONE, false,
+ false, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE,
+ dst_size, SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key, fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ /*
+ * For an authentication verify operation, feed the input MAC
+ * data to the SPU2 engine.
+ */
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY && mac != NULL) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(mac);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(mac);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains the actual output from SPU2 and
+ * the status packet, so dsts_count below is always 2.
+ */
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_GENERATE) {
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+ } else {
+ /*
+ * For an authentication verify operation, provide a dummy
+ * location for the SPU2 engine to write the hash, since SPU2
+ * generates the hash even when only verifying.
+ */
+ sreq->msgs.dsts_addr[0] = sreq->dptr;
+ sreq->msgs.dsts_len[0] = fsattr_sz(mac);
+ }
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_cipher_algorithm calgo,
+ enum bcmfs_crypto_cipher_op cipher_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *iv)
+{
+ int ret = 0;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ unsigned int xts_keylen;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ bool is_inbound = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_DECRYPT);
+
+ if (src == NULL || dst == NULL || iv == NULL)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(calgo, cipher_key,
+ &spu2_ciph_type, &spu2_ciph_mode);
+ if (ret)
+ return -EINVAL;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, spu2_ciph_type, spu2_ciph_mode,
+ SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl1_write(fmd, SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(cipher_key), false, false,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(iv), SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE);
+
+ /* Nothing for FMD2 */
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ if (calgo == BCMFS_CRYPTO_CIPHER_AES_XTS) {
+ xts_keylen = fsattr_sz(cipher_key) / 2;
+ memcpy(sreq->cipher_key,
+ (uint8_t *)fsattr_va(cipher_key) + xts_keylen,
+ xts_keylen);
+ memcpy(sreq->cipher_key + xts_keylen,
+ fsattr_va(cipher_key), xts_keylen);
+ } else {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+ }
+
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv,
+ fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(iv);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains the actual output from SPU2 and
+ * the status packet, so dsts_count below is always 2.
+ */
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+static void
+bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf,
+ unsigned int *ivlen, bool is_esp)
+{
+ int L; /* size of length field, in bytes */
+
+ /*
+ * In RFC4309 mode, L is fixed at 4 bytes; otherwise, the first
+ * IV byte carries (L-1) in its bottom 3 bits, per RFC 3610.
+ */
+ if (is_esp)
+ L = CCM_ESP_L_VALUE;
+ else
+ L = ((ivbuf[0] & CCM_B0_L_PRIME) >>
+ CCM_B0_L_PRIME_SHIFT) + 1;
+
+ /* SPU2 wants neither the flags byte nor the L length bytes; strip them */
+ *ivlen -= (1 + L);
+ memmove(ivbuf, &ivbuf[1], *ivlen);
+}
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ enum bcmfs_crypto_cipher_op cipher_op,
+ enum bcmfs_crypto_auth_algorithm auth_alg,
+ enum bcmfs_crypto_auth_op auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first)
+{
+ int ret = 0;
+ int src_index = 0;
+ int dst_index = 0;
+ bool auth_first = false;
+ struct spu2_fmd *fmd;
+ unsigned int payload_len;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ?
+ fsattr_sz(auth_key) : 0;
+ uint64_t cipher_ksize = (cipher_key != NULL) ?
+ fsattr_sz(cipher_key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ bool is_inbound = (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(auth_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(cipher_alg, cipher_key, &spu2_ciph_type,
+ &spu2_ciph_mode);
+ if (ret) {
+ BCMFS_DP_LOG(ERR, "cipher xlate error");
+ return -EINVAL;
+ }
+
+ auth_first = cipher_first ? 0 : 1;
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_GCM) {
+ spu2_auth_type = spu2_ciph_type;
+ /*
+ * SPU2 needs 12 bytes of IV in total,
+ * i.e. an 8-byte IV (random number) plus 4 bytes of salt.
+ */
+ if (fsattr_sz(iv) > 12)
+ iv_size = 12;
+
+ /*
+ * On SPU2, AES-GCM runs cipher first on encrypt and auth
+ * first on decrypt.
+ */
+
+ auth_first = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ 0 : 1;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_CCM) {
+ spu2_auth_type = spu2_ciph_type;
+ if (iv != NULL) {
+ memcpy(sreq->iv, fsattr_va(iv),
+ fsattr_sz(iv));
+ iv_size = fsattr_sz(iv);
+ bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false);
+ }
+
+ /* opposite for ccm (auth 1st on encrypt) */
+ auth_first = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ 1 : 0;
+ }
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, auth_ksize,
+ cipher_ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, auth_ksize, 0,
+ cipher_ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key,
+ fsattr_va(auth_key), fsattr_sz(auth_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key),
+ fsattr_sz(cipher_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY) {
+ /*
+ * On decryption the SPU2 engine still generates the
+ * digest, but the application does not need it, so
+ * program a dummy location to capture the digest data.
+ */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ sreq->dptr;
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ fsattr_pa(digest);
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
new file mode 100644
index 000000000..29cfb4dc2
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_ENGINE_H_
+#define _BCMFS_SYM_ENGINE_H_
+
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* structure to hold an element's attributes */
+struct fsattr {
+ void *va;
+ uint64_t pa;
+ uint64_t sz;
+};
+
+#define fsattr_va(__ptr) ((__ptr)->va)
+#define fsattr_pa(__ptr) ((__ptr)->pa)
+#define fsattr_sz(__ptr) ((__ptr)->sz)
+
+/*
+ * Macros for Crypto h/w constraints
+ */
+
+#define BCMFS_CRYPTO_AES_BLOCK_SIZE 16
+#define BCMFS_CRYPTO_AES_MIN_KEY_SIZE 16
+#define BCMFS_CRYPTO_AES_MAX_KEY_SIZE 32
+
+#define BCMFS_CRYPTO_DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_DES_KEY_SIZE 8
+
+#define BCMFS_CRYPTO_3DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_3DES_KEY_SIZE (3 * 8)
+
+#define BCMFS_CRYPTO_MD5_DIGEST_SIZE 16
+#define BCMFS_CRYPTO_MD5_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA1_DIGEST_SIZE 20
+#define BCMFS_CRYPTO_SHA1_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA224_DIGEST_SIZE 28
+#define BCMFS_CRYPTO_SHA224_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA256_DIGEST_SIZE 32
+#define BCMFS_CRYPTO_SHA256_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA384_DIGEST_SIZE 48
+#define BCMFS_CRYPTO_SHA384_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA512_DIGEST_SIZE 64
+#define BCMFS_CRYPTO_SHA512_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE (224 / 8)
+#define BCMFS_CRYPTO_SHA3_224_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE (256 / 8)
+#define BCMFS_CRYPTO_SHA3_256_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE (384 / 8)
+#define BCMFS_CRYPTO_SHA3_384_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE (512 / 8)
+#define BCMFS_CRYPTO_SHA3_512_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE)
+
+enum bcmfs_crypto_aes_cipher_key {
+ BCMFS_CRYPTO_AES128 = 16,
+ BCMFS_CRYPTO_AES192 = 24,
+ BCMFS_CRYPTO_AES256 = 32,
+};
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_cipher_algorithm c_algo,
+ enum bcmfs_crypto_cipher_op cop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv);
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_auth_algorithm a_algo,
+ enum bcmfs_crypto_auth_op aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *key);
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_cipher_algorithm c_algo,
+ enum bcmfs_crypto_cipher_op cop,
+ enum bcmfs_crypto_auth_algorithm a_algo,
+ enum bcmfs_crypto_auth_op aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first);
+
+#endif /* _BCMFS_SYM_ENGINE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 381ca8ea4..568797b4f 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -132,6 +132,12 @@ static void
spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
{
memset(sr, 0, sizeof(*sr));
+ sr->fptr = iova;
+ sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key);
+ sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key);
+ sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv);
+ sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest);
+ sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp);
}
static void
@@ -244,6 +250,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
uint16_t nb_ops)
{
int i, j;
+ int retval;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
struct bcmfs_sym_session *sess;
@@ -273,6 +280,11 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
/* save context */
qp->infl_msgs[i] = &sreq->msgs;
qp->infl_msgs[i]->ctx = (void *)sreq;
+
+ /* pre-process the request for crypto h/w acceleration */
+ retval = bcmfs_process_sym_crypto_op(ops[i], sess, sreq);
+ if (unlikely(retval < 0))
+ goto enqueue_err;
}
/* Send burst request to hw QP */
enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
@@ -289,6 +301,17 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
return enq;
}
+static void bcmfs_sym_set_request_status(struct rte_crypto_op *op,
+ struct bcmfs_sym_request *out)
+{
+ if (*out->resp == BCMFS_SYM_RESPONSE_SUCCESS)
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ else if (*out->resp == BCMFS_SYM_RESPONSE_HASH_TAG_ERROR)
+ op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+ else
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static uint16_t
bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
struct rte_crypto_op **ops,
@@ -308,6 +331,9 @@ bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
for (i = 0; i < deq; i++) {
sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+ /* set the status based on the response from the crypto h/w */
+ bcmfs_sym_set_request_status(sreq->op, sreq);
+
ops[pkts++] = sreq->op;
rte_mempool_put(qp->sr_mp, sreq);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
index 0f0b051f1..e53c50adc 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_req.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -6,13 +6,53 @@
#ifndef _BCMFS_SYM_REQ_H_
#define _BCMFS_SYM_REQ_H_
+#include <rte_cryptodev.h>
+
#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+
+/* Fixed SPU2 Metadata */
+struct spu2_fmd {
+ uint64_t ctrl0;
+ uint64_t ctrl1;
+ uint64_t ctrl2;
+ uint64_t ctrl3;
+};
/*
* This structure holds the supporting data required to process a
* rte_crypto_op
*/
struct bcmfs_sym_request {
+ /* spu2 engine related data */
+ struct spu2_fmd fmd;
+ /* cipher key */
+ uint8_t cipher_key[BCMFS_MAX_KEY_SIZE];
+ /* auth key */
+ uint8_t auth_key[BCMFS_MAX_KEY_SIZE];
+ /* iv key */
+ uint8_t iv[BCMFS_MAX_IV_SIZE];
+ /* digest data output from crypto h/w */
+ uint8_t digest[BCMFS_MAX_DIGEST_SIZE];
+ /* 2-byte response from crypto h/w */
+ uint8_t resp[2];
+ /*
+ * IOVAs for the members above, in the same order
+ */
+ /* iova for fmd */
+ rte_iova_t fptr;
+ /* iova for cipher key */
+ rte_iova_t cptr;
+ /* iova for auth key */
+ rte_iova_t aptr;
+ /* iova for iv key */
+ rte_iova_t iptr;
+ /* iova for digest */
+ rte_iova_t dptr;
+ /* iova for response */
+ rte_iova_t rptr;
+
/* bcmfs qp message for h/w queues to process */
struct bcmfs_qp_message msgs;
/* crypto op */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 2e86c733e..7aa0f05db 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -14,5 +14,7 @@ sources = files(
'hw/bcmfs_rm_common.c',
'bcmfs_sym_pmd.c',
'bcmfs_sym_capabilities.c',
- 'bcmfs_sym_session.c'
+ 'bcmfs_sym_session.c',
+ 'bcmfs_sym.c',
+ 'bcmfs_sym_engine.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v1 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (6 preceding siblings ...)
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
@ 2020-08-12 6:31 ` Vikas Gupta
2020-08-12 13:44 ` Dybkowski, AdamX
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
8 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-08-12 6:31 UTC (permalink / raw)
To: dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add test suites for the algorithms supported by the bcmfs crypto PMD.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
app/test/test_cryptodev.c | 261 ++++++++++++++++++++++++++++++++++++++
app/test/test_cryptodev.h | 1 +
2 files changed, 262 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..6e7d8471c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -12681,6 +12681,250 @@ static struct unit_test_suite cryptodev_nitrox_testsuite = {
}
};
+static struct unit_test_suite cryptodev_bcmfs_testsuite = {
+ .suite_name = "Crypto BCMFS Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_device_configure_invalid_dev_id),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_device_configure_invalid_queue_pair_ids),
+
+ TEST_CASE_ST(ut_setup, ut_teardown, test_AES_cipheronly_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_AES_chain_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_3DES_cipheronly_all),
+ TEST_CASE_ST(ut_setup, ut_teardown, test_3DES_chain_all),
+
+ /** AES GCM Authenticated Encryption */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_test_case_8),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_J0_authenticated_encryption_test_case_1),
+
+ /** AES GCM Authenticated Decryption */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_7),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_test_case_8),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_J0_authenticated_decryption_test_case_1),
+
+ /** AES GCM Authenticated Encryption 192 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_192_7),
+
+ /** AES GCM Authenticated Decryption 192 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_192_7),
+
+ /** AES GCM Authenticated Encryption 256 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_256_7),
+
+ /** AES GCM Authenticated Decryption 256 bits key */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_5),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_6),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_256_7),
+
+ /** AES GCM Authenticated Encryption big aad size */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_aad_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_test_case_aad_2),
+
+ /** AES GCM Authenticated Decryption big aad size */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_aad_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_test_case_aad_2),
+
+ /** Out of place tests */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_encryption_oop_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_authenticated_decryption_oop_test_case_1),
+
+ /** AES GMAC Authentication */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_3),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_test_case_4),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GMAC_authentication_verify_test_case_4),
+
+ /** Negative tests */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ authentication_verify_HMAC_SHA1_fail_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ authentication_verify_HMAC_SHA1_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_encryption_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_iv_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_in_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_out_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_len_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_aad_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_AES_GCM_auth_decryption_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ authentication_verify_AES128_GMAC_fail_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ authentication_verify_AES128_GMAC_fail_tag_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decryption_AES128CBC_HMAC_SHA1_fail_data_corrupt),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ auth_decryption_AES128CBC_HMAC_SHA1_fail_tag_corrupt),
+
+ /** HMAC_MD5 Authentication */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_generate_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_verify_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_generate_case_2),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_MD5_HMAC_verify_case_2),
+
+ /** Mixed CIPHER + HASH algorithms */
+ /** AUTH AES CMAC + CIPHER AES CTR */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_aes_cmac_aes_ctr_digest_enc_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_aes_cmac_aes_ctr_digest_enc_test_case_1_oop),
+
+ /** AUTH NULL + CIPHER AES CTR */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_auth_null_cipher_aes_ctr_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_verify_auth_null_cipher_aes_ctr_test_case_1),
+
+ /** AUTH AES CMAC + CIPHER NULL */
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_auth_aes_cmac_cipher_null_test_case_1),
+ TEST_CASE_ST(ut_setup, ut_teardown,
+ test_verify_auth_aes_cmac_cipher_null_test_case_1),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
static int
test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
{
@@ -13041,6 +13285,22 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_cryptodev_bcmfs(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_BCMFS_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "BCMFS PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_BCMFS is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
+ return unit_test_suite_runner(&cryptodev_bcmfs_testsuite);
+}
+
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
@@ -13063,3 +13323,4 @@ REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
+REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..c58126368 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -70,6 +70,7 @@
#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
/**
* Write (spread) data from buffer to mbuf data
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH v1 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-08-12 13:44 ` Dybkowski, AdamX
0 siblings, 0 replies; 75+ messages in thread
From: Dybkowski, AdamX @ 2020-08-12 13:44 UTC (permalink / raw)
To: Vikas Gupta, dev, akhil.goyal
Cc: ajit.khaparde, vikram.prakash, Raveendra Padasalagi
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Vikas Gupta
> Sent: Wednesday, 12 August, 2020 08:31
> To: dev@dpdk.org; akhil.goyal@nxp.com
> Cc: ajit.khaparde@broadcom.com; vikram.prakash@broadcom.com; Vikas
> Gupta <vikas.gupta@broadcom.com>; Raveendra Padasalagi
> <raveendra.padasalagi@broadcom.com>
> Subject: [dpdk-dev] [PATCH v1 8/8] crypto/bcmfs: add crypto pmd into
> cryptodev test
>
> Add test suites for supported algorithms by bcmfs crypto pmd
>
> Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
> Signed-off-by: Raveendra Padasalagi
> <raveendra.padasalagi@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
> app/test/test_cryptodev.c | 261
[Adam] The test suite set was refactored in recent months to use one big list of tests that are run or skipped depending on capability checks. I strongly suggest doing the same for your new BCMFS PMD. Have a look at e.g. the QAT PMD test suite in the same file: it has one function to properly initialize the run (check the device id) and then uses a large common set of individual tests (some of which are skipped because they require specific capabilities).
Adam Dybkowski
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (7 preceding siblings ...)
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
` (9 more replies)
8 siblings, 10 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta
Hi,
This patchset contains support for Crypto offload on Broadcom’s
Stingray/Stingray2 SoCs having FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used in the patchset.
The patchset progressively adds major modules as below.
a) Detection of platform-device based on the known registered platforms and attaching with VFIO.
b) Creation of Cryptodevice.
c) Addition of session handling.
d) Add Cryptodevice into test Cryptodev framework.
The patchset has been tested on the above-mentioned SoCs.
Regards,
Vikas
Changes from v0->v1:
Updated the ABI version in file .../crypto/bcmfs/rte_pmd_bcmfs_version.map
Changes from v1->v2:
- Fix compilation errors and coding style warnings.
- Use the global crypto test suite suggested by Adam Dybkowski
Vikas Gupta (8):
crypto/bcmfs: add BCMFS driver
crypto/bcmfs: add vfio support
crypto/bcmfs: add apis for queue pair management
crypto/bcmfs: add hw queue pair operations
crypto/bcmfs: create a symmetric cryptodev
crypto/bcmfs: add session handling and capabilities
crypto/bcmfs: add crypto h/w module
crypto/bcmfs: add crypto pmd into cryptodev test
MAINTAINERS | 7 +
app/test/test_cryptodev.c | 17 +
app/test/test_cryptodev.h | 1 +
config/common_base | 5 +
doc/guides/cryptodevs/bcmfs.rst | 72 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 +
doc/guides/cryptodevs/index.rst | 1 +
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 331 ++++++
drivers/crypto/bcmfs/bcmfs_device.h | 76 ++
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 38 +
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +
drivers/crypto/bcmfs/bcmfs_qp.c | 383 +++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 142 +++
drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 186 ++++
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 426 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 62 ++
drivers/crypto/bcmfs/bcmfs_sym_session.c | 424 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 99 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 742 +++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 ++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 46 +
drivers/crypto/bcmfs/meson.build | 20 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 3 +-
mk/rte.app.mk | 1 +
36 files changed, 6355 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-09-28 18:49 ` Akhil Goyal
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support Vikas Gupta
` (8 subsequent siblings)
9 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add the Broadcom FlexSparc (FS) device creation driver, which registers as a
vdev and creates a device. Add APIs for logs, supporting documentation and
a MAINTAINERS file entry.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
MAINTAINERS | 7 +
config/common_base | 5 +
doc/guides/cryptodevs/bcmfs.rst | 26 ++
doc/guides/cryptodevs/index.rst | 1 +
drivers/crypto/bcmfs/bcmfs_device.c | 256 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_device.h | 40 +++
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +++
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +++
drivers/crypto/bcmfs/meson.build | 10 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 3 +-
mk/rte.app.mk | 1 +
12 files changed, 423 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 3cd402b34..7c2d7ff1b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1099,6 +1099,13 @@ F: drivers/crypto/zuc/
F: doc/guides/cryptodevs/zuc.rst
F: doc/guides/cryptodevs/features/zuc.ini
+Broadcom FlexSparc
+M: Vikas Gupta <vikas.gupta@broadcom.com>
+M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
+M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+F: drivers/crypto/bcmfs/
+F: doc/guides/cryptodevs/bcmfs.rst
+F: doc/guides/cryptodevs/features/bcmfs.ini
Compression Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index f7a8824f5..21daadcdd 100644
--- a/config/common_base
+++ b/config/common_base
@@ -705,6 +705,11 @@ CONFIG_RTE_LIBRTE_PMD_MVSAM_CRYPTO=n
#
CONFIG_RTE_LIBRTE_PMD_NITROX=y
+#
+# Compile PMD for Broadcom crypto device
+#
+CONFIG_RTE_LIBRTE_PMD_BCMFS=y
+
#
# Compile generic security library
#
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
new file mode 100644
index 000000000..752ce028a
--- /dev/null
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -0,0 +1,26 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2020 Broadcom
+
+Broadcom FlexSparc Crypto Poll Mode Driver
+==========================================
+
+The FlexSparc crypto poll mode driver provides support for offloading
+cryptographic operations to the Broadcom SoCs having FlexSparc4/FlexSparc5 unit.
+Detailed information about SoCs can be found in
+
+* https://www.broadcom.com/
+
+Installation
+------------
+
+For compiling the Broadcom FlexSparc crypto PMD, check that the
+CONFIG_RTE_LIBRTE_PMD_BCMFS setting is set to `y` in the config/common_base file.
+
+* ``CONFIG_RTE_LIBRTE_PMD_BCMFS=y``
+
+Initialization
+--------------
+The BCMFS crypto PMD depends upon the devices present in the path
+/sys/bus/platform/devices/fs<version>/<dev_name> on the platform.
+Each cryptodev PMD instance can be attached to the nodes present
+in the mentioned path.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a67ed5a28..5d7e028bd 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -29,3 +29,4 @@ Crypto Device Drivers
qat
virtio
zuc
+ bcmfs
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
new file mode 100644
index 000000000..47c776de6
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <dirent.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+
+struct bcmfs_device_attr {
+ const char name[BCMFS_MAX_PATH_LEN];
+ const char suffix[BCMFS_DEV_NAME_LEN];
+ const enum bcmfs_device_type type;
+ const uint32_t offset;
+ const uint32_t version;
+};
+
+/* BCMFS supported devices */
+static struct bcmfs_device_attr dev_table[] = {
+ {
+ .name = "fs4",
+ .suffix = "crypto_mbox",
+ .type = BCMFS_SYM_FS4,
+ .offset = 0,
+ .version = 0x76303031
+ },
+ {
+ .name = "fs5",
+ .suffix = "mbox",
+ .type = BCMFS_SYM_FS5,
+ .offset = 0,
+ .version = 0x76303032
+ },
+ {
+ /* sentinel */
+ }
+};
+
+TAILQ_HEAD(fsdev_list, bcmfs_device);
+static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
+
+static struct bcmfs_device *
+fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
+ char *dirpath,
+ char *devname,
+ enum bcmfs_device_type dev_type __rte_unused)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = calloc(1, sizeof(*fsdev));
+ if (!fsdev)
+ return NULL;
+
+ if (strlen(dirpath) >= sizeof(fsdev->dirname)) {
+ BCMFS_LOG(ERR, "dir path name is too long");
+ goto cleanup;
+ }
+
+ if (strlen(devname) >= sizeof(fsdev->name)) {
+ BCMFS_LOG(ERR, "devname is too long");
+ goto cleanup;
+ }
+
+ strcpy(fsdev->dirname, dirpath);
+ strcpy(fsdev->name, devname);
+
+ fsdev->vdev = vdev;
+
+ TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
+
+ return fsdev;
+
+cleanup:
+ free(fsdev);
+
+ return NULL;
+}
+
+static struct bcmfs_device *
+find_fsdev(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ TAILQ_FOREACH(fsdev, &fsdev_list, next)
+ if (fsdev->vdev == vdev)
+ return fsdev;
+
+ return NULL;
+}
+
+static void
+fsdev_release(struct bcmfs_device *fsdev)
+{
+ if (fsdev == NULL)
+ return;
+
+ TAILQ_REMOVE(&fsdev_list, fsdev, next);
+ free(fsdev);
+}
+
+static int
+cmprator(const void *a, const void *b)
+{
+ const unsigned int x = *(const unsigned int *)a;
+ const unsigned int y = *(const unsigned int *)b;
+
+ /* avoid wrap-around from unsigned subtraction */
+ return (x > y) - (x < y);
+}
+
+static int
+fsdev_find_all_devs(const char *path, const char *search,
+ uint32_t *devs)
+{
+ DIR *dir;
+ struct dirent *entry;
+ int count = 0;
+ char addr[BCMFS_MAX_NODES][BCMFS_MAX_PATH_LEN];
+ int i;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return 0;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (strstr(entry->d_name, search)) {
+ strlcpy(addr[count], entry->d_name,
+ BCMFS_MAX_PATH_LEN);
+ count++;
+ }
+ }
+
+ closedir(dir);
+
+ for (i = 0 ; i < count; i++)
+ devs[i] = (uint32_t)strtoul(addr[i], NULL, 16);
+ /* sort the devices based on IO addresses */
+ qsort(devs, count, sizeof(uint32_t), cmprator);
+
+ return count;
+}
+
+static bool
+fsdev_find_sub_dir(char *path, const char *search, char *output)
+{
+ DIR *dir;
+ struct dirent *entry;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return false;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (!strcmp(entry->d_name, search)) {
+ strlcpy(output, entry->d_name, BCMFS_MAX_PATH_LEN);
+ closedir(dir);
+ return true;
+ }
+ }
+
+ closedir(dir);
+
+ return false;
+}
+
+
+static int
+bcmfs_vdev_probe(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+ char top_dirpath[BCMFS_MAX_PATH_LEN];
+ char sub_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirname[BCMFS_MAX_PATH_LEN];
+ uint32_t fsdev_dev[BCMFS_MAX_NODES];
+ enum bcmfs_device_type dtype;
+ int i = 0;
+ int dev_idx;
+ int count = 0;
+ bool found = false;
+
+ sprintf(top_dirpath, "%s", SYSFS_BCM_PLTFORM_DEVICES);
+ while (strlen(dev_table[i].name)) {
+ found = fsdev_find_sub_dir(top_dirpath,
+ dev_table[i].name,
+ sub_dirpath);
+ if (found)
+ break;
+ i++;
+ }
+ if (!found) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ dev_idx = i;
+ dtype = dev_table[i].type;
+
+ snprintf(out_dirpath, sizeof(out_dirpath), "%s/%s",
+ top_dirpath, sub_dirpath);
+ count = fsdev_find_all_devs(out_dirpath,
+ dev_table[dev_idx].suffix,
+ fsdev_dev);
+ if (!count) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ i = 0;
+ while (count) {
+ /* format the device name present in the path */
+ snprintf(out_dirname, sizeof(out_dirname), "%x.%s",
+ fsdev_dev[i], dev_table[dev_idx].suffix);
+ fsdev = fsdev_allocate_one_dev(vdev, out_dirpath,
+ out_dirname, dtype);
+ if (!fsdev) {
+ count--;
+ i++;
+ continue;
+ }
+ break;
+ }
+ if (fsdev == NULL) {
+ BCMFS_LOG(ERR, "All supported devs busy");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int
+bcmfs_vdev_remove(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = find_fsdev(vdev);
+ if (fsdev == NULL)
+ return -ENODEV;
+
+ fsdev_release(fsdev);
+ return 0;
+}
+
+/* Register with vdev */
+static struct rte_vdev_driver rte_bcmfs_pmd = {
+ .probe = bcmfs_vdev_probe,
+ .remove = bcmfs_vdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(bcmfs_pmd,
+ rte_bcmfs_pmd);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
new file mode 100644
index 000000000..cc64a8df2
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_H_
+#define _BCMFS_DEV_H_
+
+#include <sys/queue.h>
+
+#include <rte_bus_vdev.h>
+
+#include "bcmfs_logs.h"
+
+/* max number of dev nodes */
+#define BCMFS_MAX_NODES 4
+#define BCMFS_MAX_PATH_LEN 512
+#define BCMFS_DEV_NAME_LEN 64
+
+/* Path for BCM-Platform device directory */
+#define SYSFS_BCM_PLTFORM_DEVICES "/sys/bus/platform/devices"
+
+/* Supported devices */
+enum bcmfs_device_type {
+ BCMFS_SYM_FS4,
+ BCMFS_SYM_FS5,
+ BCMFS_UNKNOWN
+};
+
+struct bcmfs_device {
+ TAILQ_ENTRY(bcmfs_device) next;
+ /* Directory path for vfio */
+ char dirname[BCMFS_MAX_PATH_LEN];
+ /* BCMFS device name */
+ char name[BCMFS_DEV_NAME_LEN];
+ /* Parent vdev */
+ struct rte_vdev_device *vdev;
+};
+
+#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.c b/drivers/crypto/bcmfs/bcmfs_logs.c
new file mode 100644
index 000000000..86f4ff3b5
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_hexdump.h>
+
+#include "bcmfs_logs.h"
+
+int bcmfs_conf_logtype;
+int bcmfs_dp_logtype;
+
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *title,
+ const void *buf, unsigned int len)
+{
+ if (level > rte_log_get_global_level())
+ return 0;
+ if (level > (uint32_t)(rte_log_get_level(logtype)))
+ return 0;
+
+ rte_hexdump(rte_log_get_stream(), title, buf, len);
+ return 0;
+}
+
+RTE_INIT(bcmfs_device_init_log)
+{
+ /* Configuration and general logs */
+ bcmfs_conf_logtype = rte_log_register("pmd.bcmfs_config");
+ if (bcmfs_conf_logtype >= 0)
+ rte_log_set_level(bcmfs_conf_logtype, RTE_LOG_NOTICE);
+
+ /* data-path logs */
+ bcmfs_dp_logtype = rte_log_register("pmd.bcmfs_fp");
+ if (bcmfs_dp_logtype >= 0)
+ rte_log_set_level(bcmfs_dp_logtype, RTE_LOG_NOTICE);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.h b/drivers/crypto/bcmfs/bcmfs_logs.h
new file mode 100644
index 000000000..c03a49b75
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_LOGS_H_
+#define _BCMFS_LOGS_H_
+
+#include <rte_log.h>
+
+extern int bcmfs_conf_logtype;
+extern int bcmfs_dp_logtype;
+
+#define BCMFS_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_conf_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_dp_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_HEXDUMP_LOG(level, title, buf, len) \
+ bcmfs_hexdump_log(RTE_LOG_ ## level, bcmfs_dp_logtype, title, buf, len)
+
+/**
+ * bcmfs_hexdump_log() - Dump out memory in a special hex dump format.
+ *
+ * The message will be sent to the stream used by the rte_log infrastructure.
+ */
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *heading,
+ const void *buf, unsigned int len);
+
+#endif /* _BCMFS_LOGS_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
new file mode 100644
index 000000000..a4bdd8ee5
--- /dev/null
+++ b/drivers/crypto/bcmfs/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Broadcom
+# All rights reserved.
+#
+
+deps += ['eal', 'bus_vdev']
+sources = files(
+ 'bcmfs_logs.c',
+ 'bcmfs_device.c'
+ )
diff --git a/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
new file mode 100644
index 000000000..299ae632d
--- /dev/null
+++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
@@ -0,0 +1,3 @@
+DPDK_21.0 {
+ local: *;
+};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index a2423507a..8e06d0533 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -23,7 +23,8 @@ drivers = ['aesni_gcm',
'scheduler',
'snow3g',
'virtio',
- 'zuc']
+ 'zuc',
+ 'bcmfs']
std_deps = ['cryptodev'] # cryptodev pulls in all other needed deps
config_flag_fmt = 'RTE_LIBRTE_@0@_PMD'
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 0ce8cf541..5e268f8c0 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -308,6 +308,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_SECURITY),y)
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CAAM_JR) += -lrte_pmd_caam_jr
endif # CONFIG_RTE_LIBRTE_SECURITY
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += -lrte_pmd_virtio_crypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BCMFS) += -lrte_pmd_bcmfs
endif # CONFIG_RTE_LIBRTE_CRYPTODEV
ifeq ($(CONFIG_RTE_LIBRTE_COMPRESSDEV),y)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-09-28 19:00 ` Akhil Goyal
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
` (7 subsequent siblings)
9 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add VFIO support for the device.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 5 ++
drivers/crypto/bcmfs/bcmfs_device.h | 6 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +++++
drivers/crypto/bcmfs/meson.build | 3 +-
5 files changed, 137 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 47c776de6..3b5cc9e98 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -11,6 +11,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -71,6 +72,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
fsdev->vdev = vdev;
+ /* attach to VFIO */
+ if (bcmfs_attach_vfio(fsdev))
+ goto cleanup;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index cc64a8df2..c41cc0031 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -35,6 +35,12 @@ struct bcmfs_device {
char name[BCMFS_DEV_NAME_LEN];
/* Parent vdev */
struct rte_vdev_device *vdev;
+ /* vfio handle */
+ int vfio_dev_fd;
+ /* mapped address */
+ uint8_t *mmap_addr;
+ /* mapped size */
+ uint32_t mmap_size;
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
new file mode 100644
index 000000000..dc2def580
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <errno.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+
+#include <rte_vfio.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
+
+#ifdef VFIO_PRESENT
+static int
+vfio_map_dev_obj(const char *path, const char *dev_obj,
+ uint32_t *size, void **addr, int *dev_fd)
+{
+ int32_t ret;
+ struct vfio_group_status status = { .argsz = sizeof(status) };
+
+ struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
+ struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
+
+ ret = rte_vfio_setup_device(path, dev_obj, dev_fd, &d_info);
+ if (ret) {
+ BCMFS_LOG(ERR, "VFIO Setting for device failed");
+ return ret;
+ }
+
+ /* get the device region info */
+ ret = ioctl(*dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "Error in VFIO getting REGION_INFO");
+ goto map_failed;
+ }
+
+ *addr = mmap(NULL, reg_info.size,
+ PROT_WRITE | PROT_READ, MAP_SHARED,
+ *dev_fd, reg_info.offset);
+ if (*addr == MAP_FAILED) {
+ BCMFS_LOG(ERR, "Error mapping region (errno = %d)", errno);
+ ret = errno;
+ goto map_failed;
+ }
+ *size = reg_info.size;
+
+ return 0;
+
+map_failed:
+ rte_vfio_release_device(path, dev_obj, *dev_fd);
+
+ return ret;
+}
+
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+ int vfio_dev_fd;
+ void *v_addr = NULL;
+ uint32_t size = 0;
+
+ ret = vfio_map_dev_obj(dev->dirname, dev->name,
+ &size, &v_addr, &vfio_dev_fd);
+ if (ret)
+ return -1;
+
+ dev->mmap_size = size;
+ dev->mmap_addr = v_addr;
+ dev->vfio_dev_fd = vfio_dev_fd;
+
+ return 0;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+
+ if (dev == NULL)
+ return;
+
+ /* unmap the addr */
+ munmap(dev->mmap_addr, dev->mmap_size);
+ /* release the device */
+ ret = rte_vfio_release_device(dev->dirname, dev->name,
+ dev->vfio_dev_fd);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "cannot release device");
+ return;
+ }
+}
+#else
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev __rte_unused)
+{
+ return -1;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev __rte_unused)
+{
+}
+#endif
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h
new file mode 100644
index 000000000..d0fdf6483
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_VFIO_H_
+#define _BCMFS_VFIO_H_
+
+/* Attach the bcmfs device to vfio */
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev);
+
+/* Release the bcmfs device from vfio */
+void
+bcmfs_release_vfio(struct bcmfs_device *dev);
+
+#endif /* _BCMFS_VFIO_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index a4bdd8ee5..fd39eba20 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -6,5 +6,6 @@
deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
- 'bcmfs_device.c'
+ 'bcmfs_device.c',
+ 'bcmfs_vfio.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-09-28 19:29 ` Akhil Goyal
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
` (6 subsequent siblings)
9 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair management APIs, which will be used by the crypto device to
manage h/w queues. A bcmfs device structure owns multiple queue pairs
based on the mapped address range allocated to it.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 4 +
drivers/crypto/bcmfs/bcmfs_device.h | 5 +
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 38 +++
drivers/crypto/bcmfs/bcmfs_qp.c | 345 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 122 ++++++++++
drivers/crypto/bcmfs/meson.build | 3 +-
6 files changed, 516 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
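The commit message says the number of queue pairs follows from the mapped address space. A minimal sketch of that derivation, using the 64 KB per-queue IO window this patch introduces in bcmfs_qp.h (the helper name is illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Per-queue IO window, as defined in bcmfs_qp.h. */
#define BCMFS_HW_QUEUE_IO_ADDR_LEN (64 * 1024)

/* Hypothetical helper showing how fsdev->max_hw_qps is derived from the
 * size of the VFIO mapping: one h/w queue pair per 64 KB IO window.
 */
static uint16_t bcmfs_count_hw_qps(uint32_t mmap_size)
{
	return (uint16_t)(mmap_size / BCMFS_HW_QUEUE_IO_ADDR_LEN);
}
```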
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 3b5cc9e98..b475c2933 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -11,6 +11,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
@@ -76,6 +77,9 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
if (bcmfs_attach_vfio(fsdev))
goto cleanup;
+ /* Maximum number of QPs supported */
+ fsdev->max_hw_qps = fsdev->mmap_size / BCMFS_HW_QUEUE_IO_ADDR_LEN;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index c41cc0031..a47537332 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -11,6 +11,7 @@
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
/* max number of dev nodes */
#define BCMFS_MAX_NODES 4
@@ -41,6 +42,10 @@ struct bcmfs_device {
uint8_t *mmap_addr;
/* mapped size */
uint32_t mmap_size;
+ /* max number of h/w queue pairs detected */
+ uint16_t max_hw_qps;
+ /* current qpairs in use */
+ struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
new file mode 100644
index 000000000..ecb0c09ba
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_DEFS_H_
+#define _BCMFS_RM_DEFS_H_
+
+#include <rte_atomic.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* 32-bit MMIO register write */
+#define FS_MMIO_WRITE32(value, addr) rte_write32_relaxed((value), (addr))
+
+/* 32-bit MMIO register read */
+#define FS_MMIO_READ32(addr) rte_read32_relaxed((addr))
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#define FS_RING_REGS_SIZE 0x10000
+#define FS_RING_DESC_SIZE 8
+#define FS_RING_BD_ALIGN_ORDER 12
+#define FS_RING_BD_DESC_PER_REQ 32
+#define FS_RING_CMPL_ALIGN_ORDER 13
+#define FS_RING_CMPL_SIZE (1024 * FS_RING_DESC_SIZE)
+#define FS_RING_MAX_REQ_COUNT 1024
+#define FS_RING_PAGE_SHFT 12
+#define FS_RING_PAGE_SIZE BIT(FS_RING_PAGE_SHFT)
+
+/* Minimum and maximum number of requests supported */
+#define FS_RM_MAX_REQS 1024
+#define FS_RM_MIN_REQS 32
+
+#endif /* _BCMFS_RM_DEFS_H_ */
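The constants above fix the ring geometry: 8-byte descriptors, 4 KB ring pages, BD rings aligned to a 4 KB boundary and completion rings to 8 KB. A small sketch, under the same values, of the power-of-two alignment test that bcmfs_qp.c later applies to the memzone IOVA (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Values mirrored from bcmfs_hw_defs.h. */
#define BIT(nr)                  (1UL << (nr))
#define FS_RING_DESC_SIZE        8
#define FS_RING_BD_ALIGN_ORDER   12
#define FS_RING_CMPL_ALIGN_ORDER 13
#define FS_RING_CMPL_SIZE        (1024 * FS_RING_DESC_SIZE)

/* A physical address is aligned iff its low 'order' bits are all zero. */
static int ring_addr_aligned(uint64_t phys_addr, unsigned int order)
{
	return (phys_addr & (BIT(order) - 1)) == 0;
}
```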
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
new file mode 100644
index 000000000..864e7bb74
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include <rte_atomic.h>
+#include <rte_bitmap.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_hw_defs.h"
+
+/* TX or submission queue name */
+static const char *txq_name = "tx";
+/* Completion or receive queue name */
+static const char *cmplq_name = "cmpl";
+
+/* Helper function */
+static int
+bcmfs_qp_check_queue_alignment(uint64_t phys_addr,
+ uint32_t align)
+{
+ if (((align - 1) & phys_addr) != 0)
+ return -EINVAL;
+ return 0;
+}
+
+static void
+bcmfs_queue_delete(struct bcmfs_queue *queue,
+ uint16_t queue_pair_id)
+{
+ const struct rte_memzone *mz;
+ int status = 0;
+
+ if (queue == NULL) {
+ BCMFS_LOG(DEBUG, "Invalid queue");
+ return;
+ }
+ BCMFS_LOG(DEBUG, "Free ring %d type %d, memzone: %s",
+ queue_pair_id, queue->q_type, queue->memz_name);
+
+ mz = rte_memzone_lookup(queue->memz_name);
+ if (mz != NULL) {
+ /* Write an unused pattern to the queue memory. */
+ memset(queue->base_addr, 0x9B, queue->queue_size);
+ status = rte_memzone_free(mz);
+ if (status != 0)
+ BCMFS_LOG(ERR, "Error %d on freeing queue %s",
+ status, queue->memz_name);
+ } else {
+ BCMFS_LOG(DEBUG, "queue %s doesn't exist",
+ queue->memz_name);
+ }
+}
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+ int socket_id, unsigned int align)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(queue_name);
+ if (mz != NULL) {
+ if (((size_t)queue_size <= mz->len) &&
+ (socket_id == SOCKET_ID_ANY ||
+ socket_id == mz->socket_id)) {
+ BCMFS_LOG(DEBUG, "re-use memzone already "
+ "allocated for %s", queue_name);
+ return mz;
+ }
+
+ BCMFS_LOG(ERR, "Incompatible memzone already "
+ "allocated %s, size %u, socket %d. "
+ "Requested size %u, socket %u",
+ queue_name, (uint32_t)mz->len,
+ mz->socket_id, queue_size, socket_id);
+ return NULL;
+ }
+
+ BCMFS_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+ queue_name, queue_size, socket_id);
+ return rte_memzone_reserve_aligned(queue_name, queue_size,
+ socket_id, RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+static int
+bcmfs_queue_create(struct bcmfs_queue *queue,
+ struct bcmfs_qp_config *qp_conf,
+ uint16_t queue_pair_id,
+ enum bcmfs_queue_type qtype)
+{
+ const struct rte_memzone *qp_mz;
+ char q_name[16];
+ unsigned int align;
+ uint32_t queue_size_bytes;
+ int ret;
+
+ if (qtype == BCMFS_RM_TXQ) {
+ strlcpy(q_name, txq_name, sizeof(q_name));
+ align = 1U << FS_RING_BD_ALIGN_ORDER;
+ queue_size_bytes = qp_conf->nb_descriptors *
+ qp_conf->max_descs_req * FS_RING_DESC_SIZE;
+ /* make queue size a multiple of 4K pages */
+ queue_size_bytes = RTE_ALIGN_MUL_CEIL(queue_size_bytes,
+ FS_RING_PAGE_SIZE);
+ } else if (qtype == BCMFS_RM_CPLQ) {
+ strlcpy(q_name, cmplq_name, sizeof(q_name));
+ align = 1U << FS_RING_CMPL_ALIGN_ORDER;
+
+ /*
+ * Memory size for cmpl + MSI
+ * For MSI allocate here itself and so we allocate twice
+ */
+ queue_size_bytes = 2 * FS_RING_CMPL_SIZE;
+ } else {
+ BCMFS_LOG(ERR, "Invalid queue selection");
+ return -EINVAL;
+ }
+
+ queue->q_type = qtype;
+
+ /*
+ * Allocate a memzone for the queue - create a unique name.
+ */
+ snprintf(queue->memz_name, sizeof(queue->memz_name),
+ "%s_%d_%s_%d_%s", "bcmfs", qtype, "qp_mem",
+ queue_pair_id, q_name);
+ qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+ 0, align);
+ if (qp_mz == NULL) {
+ BCMFS_LOG(ERR, "Failed to allocate ring memzone");
+ return -ENOMEM;
+ }
+
+ if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
+ BCMFS_LOG(ERR, "Invalid alignment on queue create "
+ "0x%" PRIx64 "\n",
+ qp_mz->iova);
+ ret = -EFAULT;
+ goto queue_create_err;
+ }
+
+ queue->base_addr = (char *)qp_mz->addr;
+ queue->base_phys_addr = qp_mz->iova;
+ queue->queue_size = queue_size_bytes;
+
+ return 0;
+
+queue_create_err:
+ rte_memzone_free(qp_mz);
+
+ return ret;
+}
+
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr)
+{
+ struct bcmfs_qp *qp = *qp_addr;
+
+ if (qp == NULL) {
+ BCMFS_LOG(DEBUG, "qp already freed");
+ return 0;
+ }
+
+ /* Don't free memory if there are still responses to be processed */
+ if ((qp->stats.enqueued_count - qp->stats.dequeued_count) == 0) {
+ /* Stop the h/w ring */
+ qp->ops->stopq(qp);
+ /* Delete the queue pairs */
+ bcmfs_queue_delete(&qp->tx_q, qp->qpair_id);
+ bcmfs_queue_delete(&qp->cmpl_q, qp->qpair_id);
+ } else {
+ return -EAGAIN;
+ }
+
+ rte_bitmap_reset(qp->ctx_bmp);
+ rte_free(qp->ctx_bmp_mem);
+ rte_free(qp->ctx_pool);
+
+ rte_free(qp);
+ *qp_addr = NULL;
+
+ return 0;
+}
+
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *qp_conf)
+{
+ struct bcmfs_qp *qp;
+ uint32_t bmp_size;
+ uint32_t nb_descriptors = qp_conf->nb_descriptors;
+ uint16_t i;
+ int rc;
+
+ if (nb_descriptors < FS_RM_MIN_REQS) {
+ BCMFS_LOG(ERR, "Can't create qp for %u descriptors",
+ nb_descriptors);
+ return -EINVAL;
+ }
+
+ if (nb_descriptors > FS_RM_MAX_REQS)
+ nb_descriptors = FS_RM_MAX_REQS;
+
+ if (qp_conf->iobase == NULL) {
+ BCMFS_LOG(ERR, "IO config space null");
+ return -EINVAL;
+ }
+
+ qp = rte_zmalloc_socket("BCM FS PMD qp metadata",
+ sizeof(*qp), RTE_CACHE_LINE_SIZE,
+ qp_conf->socket_id);
+ if (qp == NULL) {
+ BCMFS_LOG(ERR, "Failed to alloc mem for qp struct");
+ return -ENOMEM;
+ }
+
+ qp->qpair_id = queue_pair_id;
+ qp->ioreg = qp_conf->iobase;
+ qp->nb_descriptors = nb_descriptors;
+
+ qp->stats.enqueued_count = 0;
+ qp->stats.dequeued_count = 0;
+
+ rc = bcmfs_queue_create(&qp->tx_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_TXQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Tx queue create failed queue_pair_id %u",
+ queue_pair_id);
+ goto create_err;
+ }
+
+ rc = bcmfs_queue_create(&qp->cmpl_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_CPLQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Cmpl queue create failed queue_pair_id= %u",
+ queue_pair_id);
+ goto q_create_err;
+ }
+
+ /* ctx saving bitmap */
+ bmp_size = rte_bitmap_get_memory_footprint(nb_descriptors);
+
+ /* Allocate memory for bitmap */
+ qp->ctx_bmp_mem = rte_zmalloc("ctx_bmp_mem", bmp_size,
+ RTE_CACHE_LINE_SIZE);
+ if (qp->ctx_bmp_mem == NULL) {
+ rc = -ENOMEM;
+ goto qp_create_err;
+ }
+
+ /* Initialize pool resource bitmap array */
+ qp->ctx_bmp = rte_bitmap_init(nb_descriptors, qp->ctx_bmp_mem,
+ bmp_size);
+ if (qp->ctx_bmp == NULL) {
+ rc = -EINVAL;
+ goto bmap_mem_free;
+ }
+
+ /* Mark all contexts available */
+ for (i = 0; i < nb_descriptors; i++)
+ rte_bitmap_set(qp->ctx_bmp, i);
+
+ /* Allocate memory for context */
+ qp->ctx_pool = rte_zmalloc("qp_ctx_pool",
+ sizeof(unsigned long) *
+ nb_descriptors, 0);
+ if (qp->ctx_pool == NULL) {
+ BCMFS_LOG(ERR, "ctx pool allocation failed");
+ rc = -ENOMEM;
+ goto bmap_free;
+ }
+
+ /* Start h/w ring */
+ qp->ops->startq(qp);
+
+ *qp_addr = qp;
+
+ return 0;
+
+bmap_free:
+ rte_bitmap_reset(qp->ctx_bmp);
+bmap_mem_free:
+ rte_free(qp->ctx_bmp_mem);
+qp_create_err:
+ bcmfs_queue_delete(&qp->cmpl_q, queue_pair_id);
+q_create_err:
+ bcmfs_queue_delete(&qp->tx_q, queue_pair_id);
+create_err:
+ rte_free(qp);
+
+ return rc;
+}
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ register uint32_t nb_ops_sent = 0;
+ uint16_t nb_ops_possible = nb_ops;
+ int ret;
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ while (nb_ops_sent != nb_ops_possible) {
+ ret = tmp_qp->ops->enq_one_req(qp, *ops);
+ if (ret != 0) {
+ tmp_qp->stats.enqueue_err_count++;
+ /* This message cannot be enqueued */
+ if (nb_ops_sent == 0)
+ return 0;
+ goto ring_db;
+ }
+
+ ops++;
+ nb_ops_sent++;
+ }
+
+ring_db:
+ tmp_qp->stats.enqueued_count += nb_ops_sent;
+ tmp_qp->ops->ring_db(tmp_qp);
+
+ return nb_ops_sent;
+}
+
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ uint32_t deq = tmp_qp->ops->dequeue(tmp_qp, ops, nb_ops);
+
+ tmp_qp->stats.dequeued_count += deq;
+
+ return deq;
+}
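bcmfs_queue_create() above sizes the TX ring as nb_descriptors * max_descs_req descriptors of 8 bytes, rounded up to a whole number of 4 KB ring pages; the completion ring is a fixed 2 * FS_RING_CMPL_SIZE. A sketch of the TX sizing arithmetic, with RTE_ALIGN_MUL_CEIL reproduced as plain integer math (the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define FS_RING_DESC_SIZE 8
#define FS_RING_PAGE_SIZE 4096

/* TX ring size: one 8-byte descriptor slot for every h/w descriptor of
 * every request, rounded up to whole 4K pages (as RTE_ALIGN_MUL_CEIL does).
 */
static uint32_t bcmfs_txq_size(uint16_t nb_descriptors, uint16_t max_descs_req)
{
	uint32_t sz = (uint32_t)nb_descriptors * max_descs_req *
		      FS_RING_DESC_SIZE;

	return ((sz + FS_RING_PAGE_SIZE - 1) / FS_RING_PAGE_SIZE) *
		FS_RING_PAGE_SIZE;
}
```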
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
new file mode 100644
index 000000000..027d7a50c
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_QP_H_
+#define _BCMFS_QP_H_
+
+#include <rte_memzone.h>
+
+/* Maximum number of h/w queues supported by device */
+#define BCMFS_MAX_HW_QUEUES 32
+
+/* H/W queue IO address space len */
+#define BCMFS_HW_QUEUE_IO_ADDR_LEN (64 * 1024)
+
+/* Maximum size of device ops name */
+#define BCMFS_HW_OPS_NAMESIZE 32
+
+enum bcmfs_queue_type {
+ /* TX or submission queue */
+ BCMFS_RM_TXQ,
+ /* Completion or receive queue */
+ BCMFS_RM_CPLQ
+};
+
+struct bcmfs_qp_stats {
+ /* Count of all operations enqueued */
+ uint64_t enqueued_count;
+ /* Count of all operations dequeued */
+ uint64_t dequeued_count;
+ /* Total error count on operations enqueued */
+ uint64_t enqueue_err_count;
+ /* Total error count on operations dequeued */
+ uint64_t dequeue_err_count;
+};
+
+struct bcmfs_qp_config {
+ /* Socket to allocate memory on */
+ int socket_id;
+ /* Mapped iobase for qp */
+ void *iobase;
+ /* nb_descriptors or requests a h/w queue can accommodate */
+ uint16_t nb_descriptors;
+ /* Maximum number of h/w descriptors needed by a request */
+ uint16_t max_descs_req;
+};
+
+struct bcmfs_queue {
+ /* Base virt address */
+ void *base_addr;
+ /* Base iova */
+ rte_iova_t base_phys_addr;
+ /* Queue type */
+ enum bcmfs_queue_type q_type;
+ /* Queue size based on nb_descriptors and max_descs_reqs */
+ uint32_t queue_size;
+ union {
+ /* s/w pointer for tx h/w queue */
+ uint32_t tx_write_ptr;
+ /* s/w pointer for completion h/w queue */
+ uint32_t cmpl_read_ptr;
+ };
+ /* Memzone name */
+ char memz_name[RTE_MEMZONE_NAMESIZE];
+};
+
+struct bcmfs_qp {
+ /* Queue-pair ID */
+ uint16_t qpair_id;
+ /* Mapped IO address */
+ void *ioreg;
+ /* A TX queue */
+ struct bcmfs_queue tx_q;
+ /* A Completion queue */
+ struct bcmfs_queue cmpl_q;
+ /* Number of requests the queue can accommodate */
+ uint32_t nb_descriptors;
+ /* Number of pending requests and enqueued to h/w queue */
+ uint16_t nb_pending_requests;
+ /* A pool which acts as a hash for <request-ID, virt address> pairs */
+ unsigned long *ctx_pool;
+ /* virt address for mem allocated for bitmap */
+ void *ctx_bmp_mem;
+ /* Bitmap */
+ struct rte_bitmap *ctx_bmp;
+ /* Associated stats */
+ struct bcmfs_qp_stats stats;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
+
+} __rte_cache_aligned;
+
+/* Structure defining h/w queue pair operations */
+struct bcmfs_hw_queue_pair_ops {
+ /* ops name */
+ char name[BCMFS_HW_OPS_NAMESIZE];
+ /* Enqueue an object */
+ int (*enq_one_req)(struct bcmfs_qp *qp, void *obj);
+ /* Ring doorbell */
+ void (*ring_db)(struct bcmfs_qp *qp);
+ /* Dequeue objects */
+ uint16_t (*dequeue)(struct bcmfs_qp *qp, void **obj,
+ uint16_t nb_ops);
+ /* Start the h/w queue */
+ int (*startq)(struct bcmfs_qp *qp);
+ /* Stop the h/w queue */
+ void (*stopq)(struct bcmfs_qp *qp);
+};
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr);
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *bcmfs_conf);
+
+#endif /* _BCMFS_QP_H_ */
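bcmfs_qp_setup() seeds a rte_bitmap with one set bit per descriptor and hands out request IDs from it, while ctx_pool maps each ID back to a virtual address. A condensed sketch of that allocate/release pattern, using a plain 64-bit word in place of rte_bitmap (names and sizes are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define NB_DESCRIPTORS 64

static uint64_t ctx_bmp = UINT64_MAX;      /* all request IDs free */
static uintptr_t ctx_pool[NB_DESCRIPTORS]; /* request ID -> virt address */

/* Claim the lowest free request ID and record the caller's context;
 * returns -1 when the ring is full.
 */
static int ctx_alloc(uintptr_t virt)
{
	for (int i = 0; i < NB_DESCRIPTORS; i++) {
		if (ctx_bmp & (1ULL << i)) {
			ctx_bmp &= ~(1ULL << i); /* mark in use */
			ctx_pool[i] = virt;
			return i;
		}
	}
	return -1;
}

/* Return the ID to the pool and recover the stored context. */
static uintptr_t ctx_release(int id)
{
	ctx_bmp |= 1ULL << id;
	return ctx_pool[id];
}
```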
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index fd39eba20..7e2bcbf14 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -7,5 +7,6 @@ deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
- 'bcmfs_vfio.c'
+ 'bcmfs_vfio.c',
+ 'bcmfs_qp.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v2 4/8] crypto/bcmfs: add hw queue pair operations
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (2 preceding siblings ...)
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
` (5 subsequent siblings)
9 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair operations exported by supported devices.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 51 ++
drivers/crypto/bcmfs/bcmfs_device.h | 16 +
drivers/crypto/bcmfs/bcmfs_qp.c | 1 +
drivers/crypto/bcmfs/bcmfs_qp.h | 4 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 742 ++++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 +++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 46 ++
drivers/crypto/bcmfs/meson.build | 5 +-
10 files changed, 1652 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
diff --git a/drivers/crypto/bcmfs/bcmfs_dev_msg.h b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
new file mode 100644
index 000000000..5b50bde35
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_MSG_H_
+#define _BCMFS_DEV_MSG_H_
+
+#define MAX_SRC_ADDR_BUFFERS 8
+#define MAX_DST_ADDR_BUFFERS 3
+
+struct bcmfs_qp_message {
+ /** Physical address of each source */
+ uint64_t srcs_addr[MAX_SRC_ADDR_BUFFERS];
+ /** Length of each source */
+ uint32_t srcs_len[MAX_SRC_ADDR_BUFFERS];
+ /** Total number of sources */
+ unsigned int srcs_count;
+ /** Physical address of each destination */
+ uint64_t dsts_addr[MAX_DST_ADDR_BUFFERS];
+ /** Length of each destination */
+ uint32_t dsts_len[MAX_DST_ADDR_BUFFERS];
+ /** Total number of destinations */
+ unsigned int dsts_count;
+
+ void *ctx;
+};
+
+#endif /* _BCMFS_DEV_MSG_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index b475c2933..bd2d64acf 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -43,6 +43,47 @@ static struct bcmfs_device_attr dev_table[] = {
}
};
+struct bcmfs_hw_queue_pair_ops_table bcmfs_hw_queue_pair_ops_table = {
+ .tl = RTE_SPINLOCK_INITIALIZER,
+ .num_ops = 0
+};
+
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *h)
+{
+ struct bcmfs_hw_queue_pair_ops *ops;
+ int16_t ops_index;
+
+ rte_spinlock_lock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ if (h->enq_one_req == NULL || h->dequeue == NULL ||
+ h->ring_db == NULL || h->startq == NULL || h->stopq == NULL) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR,
+ "Missing callback while registering device ops");
+ return -EINVAL;
+ }
+
+ if (strlen(h->name) >= sizeof(ops->name) - 1) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR, "%s(): fs device_ops <%s>: name too long",
+ __func__, h->name);
+ return -EEXIST;
+ }
+
+ ops_index = bcmfs_hw_queue_pair_ops_table.num_ops++;
+ ops = &bcmfs_hw_queue_pair_ops_table.qp_ops[ops_index];
+ strlcpy(ops->name, h->name, sizeof(ops->name));
+ ops->enq_one_req = h->enq_one_req;
+ ops->dequeue = h->dequeue;
+ ops->ring_db = h->ring_db;
+ ops->startq = h->startq;
+ ops->stopq = h->stopq;
+
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ return ops_index;
+}
+
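bcmfs_hw_queue_pair_register_ops() above stores each h/w variant's callbacks in a fixed-size, spinlock-protected table and returns the slot index. A lock-free sketch of the same table discipline, including the name-length guard (table size and names are illustrative):

```c
#include <assert.h>
#include <string.h>

#define MAX_NODES    4
#define OPS_NAMESIZE 32

static char ops_names[MAX_NODES][OPS_NAMESIZE];
static unsigned int num_ops;

/* Register a named ops entry; returns its table index, or -1 when the
 * name is too long or the table is full.
 */
static int register_qp_ops(const char *name)
{
	if (strlen(name) >= OPS_NAMESIZE - 1 || num_ops >= MAX_NODES)
		return -1;

	unsigned int idx = num_ops++;
	strcpy(ops_names[idx], name);
	return (int)idx;
}
```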
TAILQ_HEAD(fsdev_list, bcmfs_device);
static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
@@ -53,6 +94,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
enum bcmfs_device_type dev_type __rte_unused)
{
struct bcmfs_device *fsdev;
+ uint32_t i;
fsdev = calloc(1, sizeof(*fsdev));
if (!fsdev)
@@ -68,6 +110,15 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
goto cleanup;
}
+ /* check if registered ops name is present in directory path */
+ for (i = 0; i < bcmfs_hw_queue_pair_ops_table.num_ops; i++)
+ if (strstr(dirpath,
+ bcmfs_hw_queue_pair_ops_table.qp_ops[i].name))
+ fsdev->sym_hw_qp_ops =
+ &bcmfs_hw_queue_pair_ops_table.qp_ops[i];
+ if (!fsdev->sym_hw_qp_ops)
+ goto cleanup;
+
strcpy(fsdev->dirname, dirpath);
strcpy(fsdev->name, devname);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index a47537332..9e40c5d74 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -8,6 +8,7 @@
#include <sys/queue.h>
+#include <rte_spinlock.h>
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
@@ -28,6 +29,19 @@ enum bcmfs_device_type {
BCMFS_UNKNOWN
};
+/* A table to store registered queue pair operations */
+struct bcmfs_hw_queue_pair_ops_table {
+ rte_spinlock_t tl;
+ /* Number of used ops structs in the table. */
+ uint32_t num_ops;
+ /* Storage for all possible ops structs. */
+ struct bcmfs_hw_queue_pair_ops qp_ops[BCMFS_MAX_NODES];
+};
+
+/* HW queue pair ops register function */
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops
+ *qp_ops);
+
struct bcmfs_device {
TAILQ_ENTRY(bcmfs_device) next;
/* Directory path for vfio */
@@ -46,6 +60,8 @@ struct bcmfs_device {
uint16_t max_hw_qps;
/* current qpairs in use */
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
+ /* queue pair ops exported by symmetric crypto hw */
+ struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index 864e7bb74..ec1327b78 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -227,6 +227,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
qp->qpair_id = queue_pair_id;
qp->ioreg = qp_conf->iobase;
qp->nb_descriptors = nb_descriptors;
+ qp->ops = qp_conf->ops;
qp->stats.enqueued_count = 0;
qp->stats.dequeued_count = 0;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 027d7a50c..e4b0c3f2f 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -44,6 +44,8 @@ struct bcmfs_qp_config {
uint16_t nb_descriptors;
/* Maximum number of h/w descriptors needed by a request */
uint16_t max_descs_req;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
};
struct bcmfs_queue {
@@ -61,6 +63,8 @@ struct bcmfs_queue {
/* s/w pointer for completion h/w queue*/
uint32_t cmpl_read_ptr;
};
+ /* number of in-flight descriptors accumulated before next doorbell ring */
+ uint16_t descs_inflight;
/* Memzone name */
char memz_name[RTE_MEMZONE_NAMESIZE];
};
diff --git a/drivers/crypto/bcmfs/hw/bcmfs4_rm.c b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
new file mode 100644
index 000000000..82b1cf9c5
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
@@ -0,0 +1,742 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_rm_common.h"
+
+/* FS4 configuration */
+#define RING_BD_TOGGLE_INVALID(offset) \
+ (((offset) >> FS_RING_BD_ALIGN_ORDER) & 0x1)
+#define RING_BD_TOGGLE_VALID(offset) \
+ (!RING_BD_TOGGLE_INVALID(offset))
+
+#define RING_VER_MAGIC 0x76303031
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDR 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << FS_RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_CMPL_ALIGN_ORDER) & 0x7ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* ====== Broadcom FS4-RM ring descriptor defines ===== */
+
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 58
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 58
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 58
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs4_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return (type == NPTR_TYPE) ? true : false;
+}
+
+static uint64_t
+bcmfs4_next_table_desc(uint32_t toggle, uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NPTR_TOGGLE_SHIFT, NPTR_TOGGLE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_null_desc(uint32_t toggle)
+{
+ return (rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NULL_TOGGLE_SHIFT, NULL_TOGGLE_MASK));
+}
+
+static void
+bcmfs4_flip_header_toggle(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+
+ if (desc & ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT))
+ desc &= ~((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+ else
+ desc |= ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+
+ rm_write_desc(desc_ptr, desc);
+}
+
+static uint64_t
+bcmfs4_header_desc(uint32_t toggle, uint32_t startpkt,
+ uint32_t endpkt, uint32_t bdcount,
+ uint32_t flags, uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, HEADER_TOGGLE_SHIFT, HEADER_TOGGLE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT,
+ HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static void
+bcmfs4_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, uint32_t *toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _toggle, _startpkt, _endpkt, _bdcount;
+
+ /*
+ * Each request or packet start with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is represented by BDCOUNT field
+ * of HEADER descriptor. The max value of BDCOUNT field is 31 which
+ * means we can only have 31 non-HEADER descriptors following one
+ * HEADER descriptor.
+ *
+ * In general use, number of non-HEADER descriptors can easily go
+ * beyond 31. To tackle this situation, we have packet (or request)
+ * extension bits (STARTPKT and ENDPKT) in the HEADER descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of request
+ * (or packet) will have STARTPKT=1 and ENDPKT=0. The intermediate
+ * HEADER descriptors will have STARTPKT=0 and ENDPKT=0. The last
+ * HEADER descriptor will have STARTPKT=0 and ENDPKT=1. Also, the
+ * TOGGLE bit of the first HEADER will be set to invalid state to
+ * ensure that FlexDMA engine does not start fetching descriptors
+ * till all descriptors are enqueued. The user of this function
+ * will flip the TOGGLE bit of first HEADER after all descriptors
+ * are enqueued.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _toggle = (nhpos == 0) ? !(*toggle) : (*toggle);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs4_header_desc(_toggle, _startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+}
+
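Per the comment in bcmfs4_enqueue_desc() above, each run of up to 31 non-HEADER descriptors gets its own HEADER descriptor, chained through the STARTPKT/ENDPKT extension bits. A one-line sketch of the resulting HEADER count for nhcnt non-HEADER descriptors (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define HEADER_BDCOUNT_MAX 31

/* One HEADER per run of (up to) 31 non-HEADER descriptors:
 * ceil(nhcnt / 31), matching the "nhpos % 31 == 0" check in the
 * enqueue loop.
 */
static uint32_t bcmfs4_nb_headers(uint32_t nhcnt)
{
	return (nhcnt + HEADER_BDCOUNT_MAX - 1) / HEADER_BDCOUNT_MAX;
}
```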
+static uint64_t
+bcmfs4_src_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_msrc_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_dst_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_mdst_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs4_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static uint32_t
+estimate_nonheader_desc_count(struct bcmfs_qp_message *msg)
+{
+ uint32_t cnt = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+
+ while (src < msg->srcs_count ||
+ dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ cnt++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+ while (dst_target && dst < msg->dsts_count) {
+ cnt++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++;
+ }
+ }
+
+ return cnt;
+}
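The loop above walks sources and destinations in lockstep: each source buffer contributes one SRC/MSRC descriptor, and each destination buffer consumed while draining it contributes one DST/MDST descriptor. A minimal standalone sketch of the same counting logic, using hypothetical buffer lengths and plain C (not part of the driver):

```c
#include <assert.h>
#include <limits.h>

/* Re-statement of the counting loop above: one descriptor per source
 * buffer, plus one per destination buffer it drains into. */
static unsigned int
count_descs(const unsigned int *src_len, unsigned int nsrc,
            const unsigned int *dst_len, unsigned int ndst)
{
    unsigned int cnt = 0, src = 0, dst = 0, target = 0;

    while (src < nsrc || dst < ndst) {
        if (src < nsrc) {
            cnt++;                  /* SRC/MSRC descriptor */
            target = src_len[src++];
        } else {
            target = UINT_MAX;      /* drain any remaining DSTs */
        }
        while (target && dst < ndst) {
            cnt++;                  /* DST/MDST descriptor */
            target = (dst_len[dst] < target) ?
                     target - dst_len[dst] : 0;
            dst++;
        }
    }
    return cnt;
}
```

For example, one 64-byte source split across two 32-byte destinations yields three non-header descriptors.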
+
+static void *
+bcmfs4_enqueue_msg(struct bcmfs_qp_message *msg,
+ uint32_t nhcnt, uint32_t reqid,
+ void *desc_ptr, uint32_t toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhpos = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+ void *orig_desc_ptr = desc_ptr;
+
+ if (!desc_ptr || !start_desc || !end_desc)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ while (src < msg->srcs_count || dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ if (msg->srcs_len[src] & 0xf) {
+ d = bcmfs4_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ } else {
+ d = bcmfs4_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+
+ while (dst_target && (dst < msg->dsts_count)) {
+ if (msg->dsts_len[dst] & 0xf) {
+ d = bcmfs4_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ } else {
+ d = bcmfs4_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++; /* for next buffer */
+ }
+ }
+
+ /* Null descriptor with invalid toggle bit */
+ rm_write_desc(desc_ptr, bcmfs4_null_desc(!toggle));
+
+ /* Ensure that descriptors have been written to memory */
+ rte_smp_wmb();
+
+ bcmfs4_flip_header_toggle(orig_desc_ptr);
+
+ return desc_ptr;
+}
+
+static int
+bcmfs4_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ int reqid;
+ void *next;
+ uint32_t nhcnt;
+ int ret = 0;
+ uint32_t pos = 0;
+ uint64_t slab = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs4_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /*
+ * Required descriptors = non-header descriptors +
+ * header descriptors + one null descriptor.
+ */
+ nhcnt = estimate_nonheader_desc_count(msg);
+
+ /* Write descriptors to ring */
+ next = bcmfs4_enqueue_msg(msg, nhcnt, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ RING_BD_TOGGLE_VALID(txq->tx_write_ptr),
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
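The request-id allocation above scans a bitmap for a set bit (a free slot), clears it, and later sets it again on completion. The same find-lowest-set-bit pattern on a single 64-bit slab, as a simplified sketch without `rte_bitmap` (illustrative only, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>

/* Allocate the lowest free slot from a 64-entry pool, where a set bit
 * means "free" (mirroring the rte_bitmap_scan()/rte_bitmap_clear()
 * usage above). Returns -1 when the pool is exhausted. */
static int alloc_reqid(uint64_t *slab)
{
    if (*slab == 0)
        return -1;                      /* pool exhausted */
    int reqid = __builtin_ctzll(*slab); /* index of lowest set bit */
    *slab &= ~(1ULL << reqid);          /* mark slot busy */
    return reqid;
}

/* Release a slot for recycling, as rte_bitmap_set() does above. */
static void free_reqid(uint64_t *slab, int reqid)
{
    *slab |= 1ULL << reqid;
}
```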
+
+static void
+bcmfs4_ring_doorbell_qp(struct bcmfs_qp *qp __rte_unused)
+{
+ /* no doorbell method supported */
+}
+
+static uint16_t
+bcmfs4_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that
+ * at most all outstanding completions are processed.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get current completion read and write offset
+ * Note: We should read completion write pointer at least once
+ * after we get a MSI interrupt because HW maintains internal
+ * MSI status which will allow next MSI interrupt only after
+ * completion write pointer is read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ rte_smp_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Determine message pointer based on reqid */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs4_start_qp(struct bcmfs_qp *qp)
+{
+ int timeout;
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs4_next_table_desc(RING_BD_TOGGLE_VALID(off),
+ next_addr);
+ else
+ d = bcmfs4_null_desc(RING_BD_TOGGLE_INVALID(off));
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test mid-run (Ctrl+C), all subsequent
+ * runs will fail because the SW cmpl_read_offset and the HW
+ * cmpl_write_offset will point at different completion BDs. To
+ * handle this, flush all rings at startup rather than in the
+ * shutdown function.
+ * A ring flush resets the HW cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, SW has to read
+ * cmpl_write_offset, otherwise the DME/AE will not come
+ * out of the flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ val = BD_START_ADDR_VALUE(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_BD_START_ADDR);
+
+ /* BD write pointer will be same as HW write pointer */
+ tx_queue->tx_write_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_BD_WRITE_PTR);
+ tx_queue->tx_write_ptr *= FS_RING_DESC_SIZE;
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Program completion start address */
+ val = CMPL_START_ADDR_VALUE(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CMPL_START_ADDR);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /*
+ * Configure per-ring MSI registers with a dummy location;
+ * 1024 * FS_RING_DESC_SIZE bytes past the completion base
+ * physical address are reserved for MSI.
+ */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id,
+ (uint8_t *)qp->ioreg + RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs4_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs4_qp_ops = {
+ .name = "fs4",
+ .enq_one_req = bcmfs4_enqueue_single_request_qp,
+ .ring_db = bcmfs4_ring_doorbell_qp,
+ .dequeue = bcmfs4_dequeue_qp,
+ .startq = bcmfs4_start_qp,
+ .stopq = bcmfs4_shutdown_qp,
+};
+
+RTE_INIT(bcmfs4_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs4_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
new file mode 100644
index 000000000..00ea7a1b3
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
@@ -0,0 +1,677 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_qp.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_device.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Ring version */
+#define RING_VER_MAGIC 0x76303032
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDRESS_LSB 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR_LSB 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+#define RING_BD_START_ADDRESS_MSB 0x078
+#define RING_CMPL_START_ADDR_MSB 0x07c
+#define RING_DOORBELL_BD_WRITE_COUNT 0x074
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_CMPL_ALIGN_ORDER) & 0x07ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 59
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 59
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 59
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs5_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return (type == NPTR_TYPE) ? true : false;
+}
+
+static uint64_t
+bcmfs5_next_table_desc(uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_null_desc(void)
+{
+ return rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+}
+
+static uint64_t
+bcmfs5_header_desc(uint32_t startpkt, uint32_t endpkt,
+ uint32_t bdcount, uint32_t flags,
+ uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT, HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static int
+bcmfs5_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, void *start_desc,
+ void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _startpkt, _endpkt, _bdcount;
+ int is_nxt_page = 0;
+
+ /*
+ * Each request or packet starts with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is represented by the BDCOUNT field
+ * of the HEADER descriptor. The maximum value of the BDCOUNT field
+ * is 31, which means we can have at most 31 non-HEADER descriptors
+ * following one HEADER descriptor.
+ *
+ * In general use, number of non-HEADER descriptors can easily go
+ * beyond 31. To tackle this situation, we have packet (or request)
+ * extension bits (STARTPKT and ENDPKT) in the HEADER descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of request
+ * (or packet) will have STARTPKT=1 and ENDPKT=0. The intermediate
+ * HEADER descriptors will have STARTPKT=0 and ENDPKT=0. The last
+ * HEADER descriptor will have STARTPKT=0 and ENDPKT=1.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs5_header_desc(_startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+
+ return is_nxt_page;
+}
+
+static uint64_t
+bcmfs5_src_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_msrc_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_dst_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_mdst_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs5_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static void *
+bcmfs5_enqueue_msg(struct bcmfs_queue *txq,
+ struct bcmfs_qp_message *msg,
+ uint32_t reqid, void *desc_ptr,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ unsigned int src, dst;
+ uint32_t nhpos = 0;
+ int nxt_page = 0;
+ uint32_t nhcnt = msg->srcs_count + msg->dsts_count;
+
+ if (desc_ptr == NULL || start_desc == NULL || end_desc == NULL)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ for (src = 0; src < msg->srcs_count; src++) {
+ if (msg->srcs_len[src] & 0xf)
+ d = bcmfs5_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ else
+ d = bcmfs5_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ for (dst = 0; dst < msg->dsts_count; dst++) {
+ if (msg->dsts_len[dst] & 0xf)
+ d = bcmfs5_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ else
+ d = bcmfs5_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ txq->descs_inflight += nhcnt + 1;
+
+ return desc_ptr;
+}
+
+static int
+bcmfs5_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ void *next;
+ int reqid;
+ int ret = 0;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs5_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /* Write descriptors to ring */
+ next = bcmfs5_enqueue_msg(txq, msg, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void bcmfs5_write_doorbell(struct bcmfs_qp *qp)
+{
+ struct bcmfs_queue *txq = &qp->tx_q;
+
+ /* sync memory before ringing the doorbell */
+ rte_wmb();
+
+ FS_MMIO_WRITE32(txq->descs_inflight,
+ (uint8_t *)qp->ioreg + RING_DOORBELL_BD_WRITE_COUNT);
+
+ /* reset the count */
+ txq->descs_inflight = 0;
+}
+
+static uint16_t
+bcmfs5_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that
+ * at most all outstanding completions are processed.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get current completion read and write offset
+ *
+ * Note: We should read completion write pointer at least once
+ * after we get a MSI interrupt because HW maintains internal
+ * MSI status which will allow next MSI interrupt only after
+ * completion write pointer is read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* read the ring cmpl write ptr before cmpl read offset */
+ rte_smp_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Retrieve context */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs5_start_qp(struct bcmfs_qp *qp)
+{
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ int timeout;
+ uint32_t bd_high, bd_low, cmpl_high, cmpl_low;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs5_next_table_desc(next_addr);
+ else
+ d = bcmfs5_null_desc();
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test mid-run (Ctrl+C), all subsequent
+ * runs will fail because the SW cmpl_read_offset and the HW
+ * cmpl_write_offset will point at different completion BDs. To
+ * handle this, flush all rings at startup rather than in the
+ * shutdown function.
+ * A ring flush resets the HW cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, SW has to read
+ * cmpl_write_offset, otherwise the DME/AE will not come
+ * out of the flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ bd_low = lower_32_bits(tx_queue->base_phys_addr);
+ bd_high = upper_32_bits(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(bd_low, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_LSB);
+ FS_MMIO_WRITE32(bd_high, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_MSB);
+
+ tx_queue->tx_write_ptr = 0;
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ /* Program completion start address */
+ cmpl_low = lower_32_bits(cmpl_queue->base_phys_addr);
+ cmpl_high = upper_32_bits(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(cmpl_low, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_LSB);
+ FS_MMIO_WRITE32(cmpl_high, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_MSB);
+
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id, (uint8_t *)qp->ioreg +
+ RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs5_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs5_qp_ops = {
+ .name = "fs5",
+ .enq_one_req = bcmfs5_enqueue_single_request_qp,
+ .ring_db = bcmfs5_write_doorbell,
+ .dequeue = bcmfs5_dequeue_qp,
+ .startq = bcmfs5_start_qp,
+ .stopq = bcmfs5_shutdown_qp,
+};
+
+RTE_INIT(bcmfs5_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs5_qp_ops);
+}
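The packet-extension scheme described in the `bcmfs5_enqueue_desc()` comment means a request with `nhcnt` non-header descriptors needs one HEADER descriptor per `HEADER_BDCOUNT_MAX` (31) non-header descriptors, since a header is emitted whenever `nhpos % 31 == 0`. A small sketch of that chunking arithmetic (illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

#define HDR_BDCOUNT_MAX 31u /* 5-bit BDCOUNT field, max value 31 */

/* A HEADER descriptor precedes every run of up to 31 non-header
 * descriptors, so nhcnt non-header descriptors need
 * ceil(nhcnt / 31) headers. */
static uint32_t header_count(uint32_t nhcnt)
{
    return (nhcnt + HDR_BDCOUNT_MAX - 1) / HDR_BDCOUNT_MAX;
}
```

This is why `bcmfs5_enqueue_msg()` accounts for `nhcnt + 1` descriptors plus any next-table descriptors skipped along the way when updating `descs_inflight`.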
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
new file mode 100644
index 000000000..9445d28f9
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Completion descriptor format */
+#define FS_CMPL_OPAQUE_SHIFT 0
+#define FS_CMPL_OPAQUE_MASK 0xffff
+#define FS_CMPL_ENGINE_STATUS_SHIFT 16
+#define FS_CMPL_ENGINE_STATUS_MASK 0xffff
+#define FS_CMPL_DME_STATUS_SHIFT 32
+#define FS_CMPL_DME_STATUS_MASK 0xffff
+#define FS_CMPL_RM_STATUS_SHIFT 48
+#define FS_CMPL_RM_STATUS_MASK 0xffff
+/* Completion RM status code */
+#define FS_RM_STATUS_CODE_SHIFT 0
+#define FS_RM_STATUS_CODE_MASK 0x3ff
+#define FS_RM_STATUS_CODE_GOOD 0x0
+#define FS_RM_STATUS_CODE_AE_TIMEOUT 0x3ff
+
+/* Completion DME status code */
+#define FS_DME_STATUS_MEM_COR_ERR BIT(0)
+#define FS_DME_STATUS_MEM_UCOR_ERR BIT(1)
+#define FS_DME_STATUS_FIFO_UNDRFLOW BIT(2)
+#define FS_DME_STATUS_FIFO_OVERFLOW BIT(3)
+#define FS_DME_STATUS_RRESP_ERR BIT(4)
+#define FS_DME_STATUS_BRESP_ERR BIT(5)
+#define FS_DME_STATUS_ERROR_MASK (FS_DME_STATUS_MEM_COR_ERR | \
+ FS_DME_STATUS_MEM_UCOR_ERR | \
+ FS_DME_STATUS_FIFO_UNDRFLOW | \
+ FS_DME_STATUS_FIFO_OVERFLOW | \
+ FS_DME_STATUS_RRESP_ERR | \
+ FS_DME_STATUS_BRESP_ERR)
+
+/* APIs related to ring manager descriptors */
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask)
+{
+	return (val & mask) << shift;
+}
+
+uint64_t
+rm_read_desc(void *desc_ptr)
+{
+ return le64_to_cpu(*((uint64_t *)desc_ptr));
+}
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc)
+{
+ *((uint64_t *)desc_ptr) = cpu_to_le64(desc);
+}
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc)
+{
+ return (uint32_t)(cmpl_desc & FS_CMPL_OPAQUE_MASK);
+}
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc)
+{
+ uint32_t status;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_DME_STATUS_SHIFT,
+ FS_CMPL_DME_STATUS_MASK);
+ if (status & FS_DME_STATUS_ERROR_MASK)
+ return -EIO;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_RM_STATUS_SHIFT,
+ FS_CMPL_RM_STATUS_MASK);
+ status &= FS_RM_STATUS_CODE_MASK;
+ if (status == FS_RM_STATUS_CODE_AE_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
new file mode 100644
index 000000000..5cbafa0da
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_COMMON_H_
+#define _BCMFS_RM_COMMON_H_
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* Descriptor helper macros */
+#define FS_DESC_DEC(d, s, m) (((d) >> (s)) & (m))
+
+#define FS_RING_BD_ALIGN_CHECK(addr) \
+ (!((addr) & ((0x1 << FS_RING_BD_ALIGN_ORDER) - 1)))
+
+#define cpu_to_le64 rte_cpu_to_le_64
+#define cpu_to_le32 rte_cpu_to_le_32
+#define cpu_to_le16 rte_cpu_to_le_16
+
+#define le64_to_cpu rte_le_to_cpu_64
+#define le32_to_cpu rte_le_to_cpu_32
+#define le16_to_cpu rte_le_to_cpu_16
+
+#define lower_32_bits(x) ((uint32_t)(x))
+#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16))
+
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask);
+uint64_t
+rm_read_desc(void *desc_ptr);
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc);
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc);
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc);
+
+#endif /* _BCMFS_RM_COMMON_H_ */
+
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 7e2bcbf14..cd58bd5e2 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -8,5 +8,8 @@ sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
'bcmfs_vfio.c',
- 'bcmfs_qp.c'
+ 'bcmfs_qp.c',
+ 'hw/bcmfs4_rm.c',
+ 'hw/bcmfs5_rm.c',
+ 'hw/bcmfs_rm_common.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 5/8] crypto/bcmfs: create a symmetric cryptodev
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (3 preceding siblings ...)
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
` (4 subsequent siblings)
9 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Create a symmetric crypto device and register the supported cryptodev ops.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 15 ++
drivers/crypto/bcmfs/bcmfs_device.h | 9 +
drivers/crypto/bcmfs/bcmfs_qp.c | 37 +++
drivers/crypto/bcmfs/bcmfs_qp.h | 16 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 387 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +++
drivers/crypto/bcmfs/bcmfs_sym_req.h | 22 ++
drivers/crypto/bcmfs/meson.build | 3 +-
8 files changed, 526 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index bd2d64acf..c9263ec28 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -13,6 +13,7 @@
#include "bcmfs_logs.h"
#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
+#include "bcmfs_sym_pmd.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -239,6 +240,7 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
char out_dirname[BCMFS_MAX_PATH_LEN];
uint32_t fsdev_dev[BCMFS_MAX_NODES];
enum bcmfs_device_type dtype;
+ int err;
int i = 0;
int dev_idx;
int count = 0;
@@ -290,7 +292,20 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
return -ENODEV;
}
+ err = bcmfs_sym_dev_create(fsdev);
+ if (err) {
+ BCMFS_LOG(WARNING,
+ "Failed to create BCMFS SYM PMD for device %s",
+ fsdev->name);
+ goto pmd_create_fail;
+ }
+
return 0;
+
+pmd_create_fail:
+ fsdev_release(fsdev);
+
+ return err;
}
static int
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 9e40c5d74..e8a9c4091 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -62,6 +62,15 @@ struct bcmfs_device {
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
/* queue pair ops exported by symmetric crypto hw */
struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
+ /* a cryptodevice attached to bcmfs device */
+ struct rte_cryptodev *cdev;
+ /* a rte_device to register with cryptodev */
+ struct rte_device sym_rte_dev;
+ /* private info to keep with cryptodev */
+ struct bcmfs_sym_dev_private *sym_dev;
};
+/* stats exported by device */
+
+
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index ec1327b78..cb5ff6c61 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -344,3 +344,40 @@ bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
return deq;
}
+
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats)
+{
+ int i;
+
+ if (stats == NULL) {
+ BCMFS_LOG(ERR, "invalid param: stats %p",
+ stats);
+ return;
+ }
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+
+ stats->enqueued_count += qp[i]->stats.enqueued_count;
+ stats->dequeued_count += qp[i]->stats.dequeued_count;
+ stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+ stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+ }
+}
+
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp)
+{
+ int i;
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+ memset(&qp[i]->stats, 0, sizeof(qp[i]->stats));
+ }
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index e4b0c3f2f..fec58ca71 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -24,6 +24,13 @@ enum bcmfs_queue_type {
BCMFS_RM_CPLQ
};
+#define BCMFS_QP_IOBASE_XLATE(base, idx) \
+ ((base) + ((idx) * BCMFS_HW_QUEUE_IO_ADDR_LEN))
+
+/* Max pkts for preprocessing before submitting to h/w qp */
+#define BCMFS_MAX_REQS_BUFF 64
+
+/* qp stats */
struct bcmfs_qp_stats {
/* Count of all operations enqueued */
uint64_t enqueued_count;
@@ -92,6 +99,10 @@ struct bcmfs_qp {
struct bcmfs_qp_stats stats;
/* h/w ops associated with qp */
struct bcmfs_hw_queue_pair_ops *ops;
+ /* bcmfs requests pool */
+ struct rte_mempool *sr_mp;
+ /* a temporary buffer to keep message pointers */
+ struct bcmfs_qp_message *infl_msgs[BCMFS_MAX_REQS_BUFF];
} __rte_cache_aligned;
@@ -123,4 +134,9 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
uint16_t queue_pair_id,
struct bcmfs_qp_config *bcmfs_conf);
+/* stats functions */
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats);
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp);
+
#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
new file mode 100644
index 000000000..0f96915f7
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_req.h"
+
+uint8_t cryptodev_bcmfs_driver_id;
+
+static int bcmfs_sym_qp_release(struct rte_cryptodev *dev,
+ uint16_t queue_pair_id);
+
+static int
+bcmfs_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused struct rte_cryptodev_config *config)
+{
+ return 0;
+}
+
+static int
+bcmfs_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+static int
+bcmfs_sym_dev_close(struct rte_cryptodev *dev)
+{
+ int i, ret;
+
+ for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+ ret = bcmfs_sym_qp_release(dev, i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_info *dev_info)
+{
+ struct bcmfs_sym_dev_private *internals = dev->data->dev_private;
+ struct bcmfs_device *fsdev = internals->fsdev;
+
+ if (dev_info != NULL) {
+ dev_info->driver_id = cryptodev_bcmfs_driver_id;
+ dev_info->feature_flags = dev->feature_flags;
+ dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
+ /* No limit on the number of sessions */
+ dev_info->sym.max_nb_sessions = 0;
+ }
+}
+
+static void
+bcmfs_sym_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+ struct bcmfs_qp_stats bcmfs_stats = {0};
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (stats == NULL || dev == NULL) {
+ BCMFS_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_get(fsdev->qps_in_use, fsdev->max_hw_qps, &bcmfs_stats);
+
+ stats->enqueued_count = bcmfs_stats.enqueued_count;
+ stats->dequeued_count = bcmfs_stats.dequeued_count;
+ stats->enqueue_err_count = bcmfs_stats.enqueue_err_count;
+ stats->dequeue_err_count = bcmfs_stats.dequeue_err_count;
+}
+
+static void
+bcmfs_sym_stats_reset(struct rte_cryptodev *dev)
+{
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (dev == NULL) {
+ BCMFS_LOG(ERR, "invalid cryptodev ptr %p", dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_reset(fsdev->qps_in_use, fsdev->max_hw_qps);
+}
+
+static int
+bcmfs_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+ struct bcmfs_sym_dev_private *bcmfs_private = dev->data->dev_private;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)
+ (dev->data->queue_pairs[queue_pair_id]);
+
+ BCMFS_LOG(DEBUG, "Release sym qp %u on device %d",
+ queue_pair_id, dev->data->dev_id);
+
+ rte_mempool_free(qp->sr_mp);
+
+ bcmfs_private->fsdev->qps_in_use[queue_pair_id] = NULL;
+
+ return bcmfs_qp_release((struct bcmfs_qp **)
+ &dev->data->queue_pairs[queue_pair_id]);
+}
+
+static void
+spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
+{
+ memset(sr, 0, sizeof(*sr));
+}
+
+static void
+req_pool_obj_init(__rte_unused struct rte_mempool *mp,
+ __rte_unused void *opaque, void *obj,
+ __rte_unused unsigned int obj_idx)
+{
+ spu_req_init(obj, rte_mempool_virt2iova(obj));
+}
+
+static struct rte_mempool *
+bcmfs_sym_req_pool_create(struct rte_cryptodev *cdev __rte_unused,
+ uint32_t nobjs, uint16_t qp_id,
+ int socket_id)
+{
+ char softreq_pool_name[RTE_RING_NAMESIZE];
+ struct rte_mempool *mp;
+
+ snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_%d",
+ "bcm_sym", qp_id);
+
+ mp = rte_mempool_create(softreq_pool_name,
+ RTE_ALIGN_MUL_CEIL(nobjs, 64),
+ sizeof(struct bcmfs_sym_request),
+ 64, 0, NULL, NULL, req_pool_obj_init, NULL,
+ socket_id, 0);
+ if (mp == NULL)
+ BCMFS_LOG(ERR, "Failed to create req pool, qid %d, err %d",
+ qp_id, rte_errno);
+
+ return mp;
+}
+
+static int
+bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
+ const struct rte_cryptodev_qp_conf *qp_conf,
+ int socket_id)
+{
+ int ret = 0;
+ struct bcmfs_qp *qp = NULL;
+ struct bcmfs_qp_config bcmfs_qp_conf;
+
+ struct bcmfs_qp **qp_addr =
+ (struct bcmfs_qp **)&cdev->data->queue_pairs[qp_id];
+ struct bcmfs_sym_dev_private *bcmfs_private = cdev->data->dev_private;
+ struct bcmfs_device *fsdev = bcmfs_private->fsdev;
+
+
+ /* If qp is already in use free ring memory and qp metadata. */
+ if (*qp_addr != NULL) {
+ ret = bcmfs_sym_qp_release(cdev, qp_id);
+ if (ret < 0)
+ return ret;
+ }
+
+ if (qp_id >= fsdev->max_hw_qps) {
+ BCMFS_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+ return -EINVAL;
+ }
+
+ bcmfs_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+ bcmfs_qp_conf.socket_id = socket_id;
+ bcmfs_qp_conf.max_descs_req = BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ;
+ bcmfs_qp_conf.iobase = BCMFS_QP_IOBASE_XLATE(fsdev->mmap_addr, qp_id);
+ bcmfs_qp_conf.ops = fsdev->sym_hw_qp_ops;
+
+ ret = bcmfs_qp_setup(qp_addr, qp_id, &bcmfs_qp_conf);
+ if (ret != 0)
+ return ret;
+
+ qp = (struct bcmfs_qp *)*qp_addr;
+
+ qp->sr_mp = bcmfs_sym_req_pool_create(cdev, qp_conf->nb_descriptors,
+ qp_id, socket_id);
+ if (qp->sr_mp == NULL)
+ return -ENOMEM;
+
+ /* store a link to the qp in the bcmfs_device */
+ bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;
+
+ cdev->data->queue_pairs[qp_id] = qp;
+ BCMFS_LOG(NOTICE, "queue %d setup done", qp_id);
+
+ return 0;
+}
+
+static struct rte_cryptodev_ops crypto_bcmfs_ops = {
+ /* Device related operations */
+ .dev_configure = bcmfs_sym_dev_config,
+ .dev_start = bcmfs_sym_dev_start,
+ .dev_stop = bcmfs_sym_dev_stop,
+ .dev_close = bcmfs_sym_dev_close,
+ .dev_infos_get = bcmfs_sym_dev_info_get,
+ /* Stats Collection */
+ .stats_get = bcmfs_sym_stats_get,
+ .stats_reset = bcmfs_sym_stats_reset,
+ /* Queue-Pair management */
+ .queue_pair_setup = bcmfs_sym_qp_setup,
+ .queue_pair_release = bcmfs_sym_qp_release,
+};
+
+/** Enqueue burst */
+static uint16_t
+bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i, j;
+ uint16_t enq = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
+
+ if (nb_ops == 0)
+ return 0;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ /* We do not process more than available space */
+ if (nb_ops > (qp->nb_descriptors - qp->nb_pending_requests))
+ nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
+
+ for (i = 0; i < nb_ops; i++) {
+ if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
+ goto enqueue_err;
+
+ /* save rte_crypto_op */
+ sreq->op = ops[i];
+
+ /* save context */
+ qp->infl_msgs[i] = &sreq->msgs;
+ qp->infl_msgs[i]->ctx = (void *)sreq;
+ }
+ /* Send burst request to hw QP */
+ enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
+
+ for (j = enq; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+
+enqueue_err:
+ for (j = 0; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+}
+
+static uint16_t
+bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i;
+ uint16_t deq = 0;
+ unsigned int pkts = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = queue_pair;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ deq = bcmfs_dequeue_op_burst(qp, (void **)qp->infl_msgs, nb_ops);
+ /* get rte_crypto_ops */
+ for (i = 0; i < deq; i++) {
+ sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+
+ ops[pkts++] = sreq->op;
+
+ rte_mempool_put(qp->sr_mp, sreq);
+ }
+
+ return pkts;
+}
+
+/*
+ * An rte_driver is needed in the registration of both the
+ * device and the driver with cryptodev.
+ */
+static const char bcmfs_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_BCMFS_SYM_PMD);
+static const struct rte_driver cryptodev_bcmfs_sym_driver = {
+ .name = bcmfs_sym_drv_name,
+ .alias = bcmfs_sym_drv_name
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .socket_id = rte_socket_id(),
+ .private_data_size = sizeof(struct bcmfs_sym_dev_private)
+ };
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct rte_cryptodev *cryptodev;
+ struct bcmfs_sym_dev_private *internals;
+
+ snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
+ fsdev->name, "sym");
+
+ /* Populate subset device to use in cryptodev device creation */
+ fsdev->sym_rte_dev.driver = &cryptodev_bcmfs_sym_driver;
+ fsdev->sym_rte_dev.numa_node = 0;
+ fsdev->sym_rte_dev.devargs = NULL;
+
+ cryptodev = rte_cryptodev_pmd_create(name,
+ &fsdev->sym_rte_dev,
+ &init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ fsdev->sym_rte_dev.name = cryptodev->data->name;
+ cryptodev->driver_id = cryptodev_bcmfs_driver_id;
+ cryptodev->dev_ops = &crypto_bcmfs_ops;
+
+ cryptodev->enqueue_burst = bcmfs_sym_pmd_enqueue_op_burst;
+ cryptodev->dequeue_burst = bcmfs_sym_pmd_dequeue_op_burst;
+
+ cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+
+ internals = cryptodev->data->dev_private;
+ internals->fsdev = fsdev;
+ fsdev->sym_dev = internals;
+
+ internals->sym_dev_id = cryptodev->data->dev_id;
+
+ BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
+ cryptodev->data->name, internals->sym_dev_id);
+ return 0;
+}
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev *cryptodev;
+
+ if (fsdev == NULL)
+ return -ENODEV;
+ if (fsdev->sym_dev == NULL)
+ return 0;
+
+ /* free crypto device */
+ cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id);
+ rte_cryptodev_pmd_destroy(cryptodev);
+ fsdev->sym_rte_dev.name = NULL;
+ fsdev->sym_dev = NULL;
+
+ return 0;
+}
+
+static struct cryptodev_driver bcmfs_crypto_drv;
+RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv,
+ cryptodev_bcmfs_sym_driver,
+ cryptodev_bcmfs_driver_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
new file mode 100644
index 000000000..65d704609
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_PMD_H_
+#define _BCMFS_SYM_PMD_H_
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_device.h"
+
+#define CRYPTODEV_NAME_BCMFS_SYM_PMD crypto_bcmfs
+
+#define BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ 16
+
+extern uint8_t cryptodev_bcmfs_driver_id;
+
+/** private data structure for a BCMFS device.
+ * This BCMFS device offers only a symmetric crypto service;
+ * there can be one of these on each bcmfs_pci_device (VF).
+ */
+struct bcmfs_sym_dev_private {
+ /* The bcmfs device hosting the service */
+ struct bcmfs_device *fsdev;
+ /* Device instance for this rte_cryptodev */
+ uint8_t sym_dev_id;
+ /* BCMFS device symmetric crypto capabilities */
+ const struct rte_cryptodev_capabilities *fsdev_capabilities;
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fdev);
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fdev);
+
+#endif /* _BCMFS_SYM_PMD_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
new file mode 100644
index 000000000..0f0b051f1
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_REQ_H_
+#define _BCMFS_SYM_REQ_H_
+
+#include "bcmfs_dev_msg.h"
+
+/*
+ * This structure holds the supporting data required to process an
+ * rte_crypto_op.
+ */
+struct bcmfs_sym_request {
+ /* bcmfs qp message for h/w queues to process */
+ struct bcmfs_qp_message msgs;
+ /* crypto op */
+ struct rte_crypto_op *op;
+};
+
+#endif /* _BCMFS_SYM_REQ_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index cd58bd5e2..d9a3d73e9 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -11,5 +11,6 @@ sources = files(
'bcmfs_qp.c',
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
- 'hw/bcmfs_rm_common.c'
+ 'hw/bcmfs_rm_common.c',
+ 'bcmfs_sym_pmd.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 6/8] crypto/bcmfs: add session handling and capabilities
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (4 preceding siblings ...)
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-09-28 19:46 ` Akhil Goyal
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
` (3 subsequent siblings)
9 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add session handling and the capabilities supported by the crypto h/w
accelerator.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
doc/guides/cryptodevs/bcmfs.rst | 46 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 ++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 170 ++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 13 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 424 ++++++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 99 +++
drivers/crypto/bcmfs/meson.build | 4 +-
9 files changed, 1591 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index 752ce028a..2488b19f7 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -18,9 +18,55 @@ CONFIG_RTE_LIBRTE_PMD_BCMFS setting is set to `y` in config/common_base file.
* ``CONFIG_RTE_LIBRTE_PMD_BCMFS=y``
+Features
+~~~~~~~~
+
+The BCMFS SYM PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_3DES_CTR``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CTR``
+* ``RTE_CRYPTO_CIPHER_AES192_CTR``
+* ``RTE_CRYPTO_CIPHER_AES256_CTR``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_XCBC_MAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_GMAC``
+* ``RTE_CRYPTO_AUTH_AES_CMAC``
+
+Supported AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+* ``RTE_CRYPTO_AEAD_AES_CCM``
+
Initialization
--------------
The BCMFS crypto PMD depends upon the devices present in the path
/sys/bus/platform/devices/fs<version>/<dev_name> on the platform.
Each cryptodev PMD instance can be attached to the nodes present
in the mentioned path.
+
+Limitations
+~~~~~~~~~~~
+
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+* CCM is not supported on Broadcom's SoCs with the FlexSparc4 unit.
diff --git a/doc/guides/cryptodevs/features/bcmfs.ini b/doc/guides/cryptodevs/features/bcmfs.ini
new file mode 100644
index 000000000..82d2c639d
--- /dev/null
+++ b/doc/guides/cryptodevs/features/bcmfs.ini
@@ -0,0 +1,56 @@
+;
+; Supported features of the 'bcmfs' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto = Y
+Sym operation chaining = Y
+HW Accelerated = Y
+Protocol offload = Y
+In Place SGL = Y
+
+;
+; Supported crypto algorithms of the 'bcmfs' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+AES CTR (128) = Y
+AES CTR (192) = Y
+AES CTR (256) = Y
+AES XTS (128) = Y
+AES XTS (256) = Y
+3DES CBC = Y
+DES CBC = Y
+;
+; Supported authentication algorithms of the 'bcmfs' crypto driver.
+;
+[Auth]
+MD5 HMAC = Y
+SHA1 = Y
+SHA1 HMAC = Y
+SHA224 = Y
+SHA224 HMAC = Y
+SHA256 = Y
+SHA256 HMAC = Y
+SHA384 = Y
+SHA384 HMAC = Y
+SHA512 = Y
+SHA512 HMAC = Y
+AES GMAC = Y
+AES CMAC (128) = Y
+AES CBC = Y
+AES XCBC = Y
+
+;
+; Supported AEAD algorithms of the 'bcmfs' crypto driver.
+;
+[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
+AES CCM (128) = Y
+AES CCM (192) = Y
+AES CCM (256) = Y
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
new file mode 100644
index 000000000..dee88ed4a
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
@@ -0,0 +1,764 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_sym_capabilities.h"
+
+static const struct rte_cryptodev_capabilities bcmfs_sym_capabilities[] = {
+ {
+ /* SHA1 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224,
+ .block_size = 144,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256,
+ .block_size = 136,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384,
+ .block_size = 104,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512,
+ .block_size = 72,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224_HMAC,
+ .block_size = 144,
+ .key_size = {
+ .min = 1,
+ .max = 144,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256_HMAC,
+ .block_size = 136,
+ .key_size = {
+ .min = 1,
+ .max = 136,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384_HMAC,
+ .block_size = 104,
+ .key_size = {
+ .min = 1,
+ .max = 104,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512_HMAC,
+ .block_size = 72,
+ .key_size = {
+ .min = 1,
+ .max = 72,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES XCBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES GMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_ECB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CTR */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CTR,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES XTS */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_XTS,
+ .block_size = 16,
+ .key_size = {
+ .min = 32,
+ .max = 64,
+ .increment = 32
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_CCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 16,
+ .increment = 2
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 7,
+ .max = 13,
+ .increment = 1
+ },
+ }, }
+ }, }
+ },
+
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+const struct rte_cryptodev_capabilities *
+bcmfs_sym_get_capabilities(void)
+{
+ return bcmfs_sym_capabilities;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
new file mode 100644
index 000000000..3ff61b7d2
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_CAPABILITIES_H_
+#define _BCMFS_SYM_CAPABILITIES_H_
+
+/*
+ * Get the list of symmetric crypto capabilities
+ * supported by the device.
+ */
+const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void);
+
+#endif /* _BCMFS_SYM_CAPABILITIES_H_ */
+
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
new file mode 100644
index 000000000..d94446d35
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_DEFS_H_
+#define _BCMFS_SYM_DEFS_H_
+
+/*
+ * Maximum block size among the supported hash
+ * algorithms; SHA3-224 currently has the largest
+ * block size, 144 bytes.
+ */
+#define BCMFS_MAX_KEY_SIZE 144
+#define BCMFS_MAX_IV_SIZE 16
+#define BCMFS_MAX_DIGEST_SIZE 64
+
+/** Symmetric Cipher Direction */
+enum bcmfs_crypto_cipher_op {
+ /** Encrypt cipher operation */
+ BCMFS_CRYPTO_CIPHER_OP_ENCRYPT,
+
+ /** Decrypt cipher operation */
+ BCMFS_CRYPTO_CIPHER_OP_DECRYPT,
+};
+
+/** Symmetric Cipher Algorithms */
+enum bcmfs_crypto_cipher_algorithm {
+ /** NULL cipher algorithm. No mode applies to the NULL algorithm. */
+ BCMFS_CRYPTO_CIPHER_NONE = 0,
+
+ /** DES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_DES_CBC,
+
+ /** DES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_DES_ECB,
+
+ /** Triple DES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_3DES_CBC,
+
+ /** Triple DES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_3DES_ECB,
+
+ /** AES algorithm in CBC mode */
+ BCMFS_CRYPTO_CIPHER_AES_CBC,
+
+ /** AES algorithm in CCM mode. */
+ BCMFS_CRYPTO_CIPHER_AES_CCM,
+
+ /** AES algorithm in Counter mode */
+ BCMFS_CRYPTO_CIPHER_AES_CTR,
+
+ /** AES algorithm in ECB mode */
+ BCMFS_CRYPTO_CIPHER_AES_ECB,
+
+ /** AES algorithm in GCM mode. */
+ BCMFS_CRYPTO_CIPHER_AES_GCM,
+
+ /** AES algorithm in XTS mode */
+ BCMFS_CRYPTO_CIPHER_AES_XTS,
+
+ /** AES algorithm in OFB mode */
+ BCMFS_CRYPTO_CIPHER_AES_OFB,
+};
+
+/** Symmetric Authentication Algorithms */
+enum bcmfs_crypto_auth_algorithm {
+ /** NULL hash algorithm. */
+ BCMFS_CRYPTO_AUTH_NONE = 0,
+
+ /** MD5 algorithm */
+ BCMFS_CRYPTO_AUTH_MD5,
+
+ /** MD5-HMAC algorithm */
+ BCMFS_CRYPTO_AUTH_MD5_HMAC,
+
+ /** SHA1 algorithm */
+ BCMFS_CRYPTO_AUTH_SHA1,
+
+ /** SHA1-HMAC algorithm */
+ BCMFS_CRYPTO_AUTH_SHA1_HMAC,
+
+ /** 224 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA224,
+
+ /** 224 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA224_HMAC,
+
+ /** 256 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA256,
+
+ /** 256 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA256_HMAC,
+
+ /** 384 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA384,
+
+ /** 384 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA384_HMAC,
+
+ /** 512 bit SHA algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA512,
+
+ /** 512 bit SHA-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA512_HMAC,
+
+ /** 224 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_224,
+
+ /** 224 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_224_HMAC,
+
+ /** 256 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_256,
+
+ /** 256 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_256_HMAC,
+
+ /** 384 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_384,
+
+ /** 384 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_384_HMAC,
+
+ /** 512 bit SHA3 algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_512,
+
+ /** 512 bit SHA3-HMAC algorithm. */
+ BCMFS_CRYPTO_AUTH_SHA3_512_HMAC,
+
+ /** AES XCBC MAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_XCBC_MAC,
+
+ /** AES CMAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_CMAC,
+
+ /** AES CBC-MAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_CBC_MAC,
+
+ /** AES GMAC algorithm */
+ BCMFS_CRYPTO_AUTH_AES_GMAC,
+
+ /** AES algorithm in GCM mode. */
+ BCMFS_CRYPTO_AUTH_AES_GCM,
+
+ /** AES algorithm in CCM mode. */
+ BCMFS_CRYPTO_AUTH_AES_CCM,
+};
+
+/** Symmetric Authentication Operations */
+enum bcmfs_crypto_auth_op {
+ /** Verify authentication digest */
+ BCMFS_CRYPTO_AUTH_OP_VERIFY,
+
+ /** Generate authentication digest */
+ BCMFS_CRYPTO_AUTH_OP_GENERATE,
+};
+
+enum bcmfs_sym_crypto_class {
+ /** Cipher algorithm */
+ BCMFS_CRYPTO_CIPHER,
+
+ /** Hash algorithm */
+ BCMFS_CRYPTO_HASH,
+
+ /** Authenticated Encryption with Associated Data algorithm */
+ BCMFS_CRYPTO_AEAD,
+};
+
+#endif /* _BCMFS_SYM_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 0f96915f7..381ca8ea4 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -14,6 +14,8 @@
#include "bcmfs_qp.h"
#include "bcmfs_sym_pmd.h"
#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+#include "bcmfs_sym_capabilities.h"
uint8_t cryptodev_bcmfs_driver_id;
@@ -65,6 +67,7 @@ bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
/* No limit of number of sessions */
dev_info->sym.max_nb_sessions = 0;
+ dev_info->capabilities = bcmfs_sym_get_capabilities();
}
}
@@ -228,6 +231,10 @@ static struct rte_cryptodev_ops crypto_bcmfs_ops = {
/* Queue-Pair management */
.queue_pair_setup = bcmfs_sym_qp_setup,
.queue_pair_release = bcmfs_sym_qp_release,
+ /* Crypto session related operations */
+ .sym_session_get_size = bcmfs_sym_session_get_private_size,
+ .sym_session_configure = bcmfs_sym_session_configure,
+ .sym_session_clear = bcmfs_sym_session_clear
};
/** Enqueue burst */
@@ -239,6 +246,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
int i, j;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
+ struct bcmfs_sym_session *sess;
struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
if (nb_ops == 0)
@@ -252,6 +260,10 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
for (i = 0; i < nb_ops; i++) {
+ sess = bcmfs_sym_get_session(ops[i]);
+ if (unlikely(sess == NULL))
+ goto enqueue_err;
+
if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
goto enqueue_err;
@@ -356,6 +368,7 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
fsdev->sym_dev = internals;
internals->sym_dev_id = cryptodev->data->dev_id;
+ internals->fsdev_capabilities = bcmfs_sym_get_capabilities();
BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
cryptodev->data->name, internals->sym_dev_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
new file mode 100644
index 000000000..8853b4d12
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -0,0 +1,424 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_crypto.h>
+#include <rte_crypto_sym.h>
+#include <rte_log.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_session.h"
+
+/** Configure the session from a crypto xform chain */
+static enum bcmfs_sym_chain_order
+crypto_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+ enum bcmfs_sym_chain_order res = BCMFS_SYM_CHAIN_NOT_SUPPORTED;
+
+
+ if (xform != NULL) {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ res = BCMFS_SYM_CHAIN_AEAD;
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_AUTH;
+ else if (xform->next->type ==
+ RTE_CRYPTO_SYM_XFORM_CIPHER)
+ res = BCMFS_SYM_CHAIN_AUTH_CIPHER;
+ }
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_CIPHER;
+ else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ res = BCMFS_SYM_CHAIN_CIPHER_AUTH;
+ }
+ }
+
+ return res;
+}
+
+/* Get session cipher key from input cipher key */
+static void
+get_key(const uint8_t *input_key, int keylen, uint8_t *session_key)
+{
+ memcpy(session_key, input_key, keylen);
+}
+
+/* Set session cipher parameters */
+static int
+crypto_set_session_cipher_parameters
+ (struct bcmfs_sym_session *sess,
+ const struct rte_crypto_cipher_xform *cipher_xform)
+{
+ int rc = 0;
+
+ sess->cipher.key.length = cipher_xform->key.length;
+ sess->cipher.iv.offset = cipher_xform->iv.offset;
+ sess->cipher.iv.length = cipher_xform->iv.length;
+ sess->cipher.direction = (enum bcmfs_crypto_cipher_op)cipher_xform->op;
+
+ /* Select cipher algo */
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_3DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_3DES_ECB;
+ break;
+ case RTE_CRYPTO_CIPHER_DES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_DES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CBC;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_ECB;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CTR;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_XTS:
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_XTS;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "set session failed. unknown algo");
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(cipher_xform->key.data,
+ sess->cipher.key.length,
+ sess->cipher.key.data);
+
+ return rc;
+}
+
+/* Set session auth parameters */
+static int
+crypto_set_session_auth_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_auth_xform
+ *auth_xform)
+{
+ int rc = 0;
+
+ /* Select auth generate/verify */
+ sess->auth.operation = auth_xform->op ?
+ BCMFS_CRYPTO_AUTH_OP_GENERATE :
+ BCMFS_CRYPTO_AUTH_OP_VERIFY;
+ sess->auth.key.length = auth_xform->key.length;
+ sess->auth.digest_length = auth_xform->digest_length;
+ sess->auth.iv.length = auth_xform->iv.length;
+ sess->auth.iv.offset = auth_xform->iv.offset;
+
+ /* Select auth algo */
+ switch (auth_xform->algo) {
+ case RTE_CRYPTO_AUTH_MD5:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_MD5;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_512;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_MD5_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA1_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA224_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_224_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_256_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_384_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512_HMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_512_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_XCBC_MAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_GMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CBC_MAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CMAC;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid auth algorithm");
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(auth_xform->key.data,
+ auth_xform->key.length,
+ sess->auth.key.data);
+
+ return rc;
+}
+
+/* Set session aead parameters */
+static int
+crypto_set_session_aead_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+
+ sess->cipher.iv.offset = xform->aead.iv.offset;
+ sess->cipher.iv.length = xform->aead.iv.length;
+ sess->aead.aad_length = xform->aead.aad_length;
+ sess->cipher.key.length = xform->aead.key.length;
+ sess->auth.digest_length = xform->aead.digest_length;
+ sess->cipher.direction = (enum bcmfs_crypto_cipher_op)xform->aead.op;
+
+ /* Select aead algo */
+ switch (xform->aead.algo) {
+ case RTE_CRYPTO_AEAD_AES_CCM:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CCM;
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CCM;
+ break;
+ case RTE_CRYPTO_AEAD_AES_GCM:
+ sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_GCM;
+ sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_GCM;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid aead algorithm");
+ rc = -EINVAL;
+ break;
+ }
+
+ if (!rc)
+ get_key(xform->aead.key.data,
+ xform->aead.key.length,
+ sess->cipher.key.data);
+
+ return rc;
+}
+
+static struct rte_crypto_auth_xform *
+crypto_get_auth_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ return &xform->auth;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+crypto_get_cipher_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return &xform->cipher;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+
+/** Parse crypto xform chain and set private session parameters */
+static int
+crypto_set_session_parameters(struct bcmfs_sym_session *sess,
+ struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+ struct rte_crypto_cipher_xform *cipher_xform =
+ crypto_get_cipher_xform(xform);
+ struct rte_crypto_auth_xform *auth_xform =
+ crypto_get_auth_xform(xform);
+
+ sess->chain_order = crypto_get_chain_order(xform);
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ sess->cipher_first = false;
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ sess->cipher_first = true;
+ if (crypto_set_session_auth_parameters(sess,
+ auth_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid auth");
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess,
+ cipher_xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid cipher");
+ rc = -EINVAL;
+ }
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ if (crypto_set_session_aead_parameters(sess,
+ xform)) {
+ BCMFS_DP_LOG(ERR, "Invalid aead");
+ rc = -EINVAL;
+ }
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid chain order");
+ rc = -EINVAL;
+ break;
+ }
+
+error:
+ return rc;
+}
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op)
+{
+ struct bcmfs_sym_session *sess = NULL;
+
+ if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
+ BCMFS_DP_LOG(ERR, "op (%p) is sessionless", op);
+ } else if (likely(op->sym->session != NULL)) {
+ /* get existing session */
+ sess = (struct bcmfs_sym_session *)
+ get_sym_session_private_data(op->sym->session,
+ cryptodev_bcmfs_driver_id);
+ }
+
+ if (sess == NULL)
+ op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+
+ return sess;
+}
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ int ret;
+
+ if (unlikely(sess == NULL)) {
+ BCMFS_DP_LOG(ERR, "Invalid session struct");
+ return -EINVAL;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ BCMFS_DP_LOG(ERR,
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ ret = crypto_set_session_parameters(sess_private_data, xform);
+
+ if (ret != 0) {
+ BCMFS_DP_LOG(ERR, "Failed to configure session parameters");
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sym_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
+}
+
+/* Clear the memory of session so it doesn't leave key material behind */
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t index = dev->driver_id;
+ void *sess_priv = get_sym_session_private_data(sess, index);
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp;
+
+ memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
+ sess_mp = rte_mempool_from_obj(sess_priv);
+
+ set_sym_session_private_data(sess, index, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+}
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct bcmfs_sym_session);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
new file mode 100644
index 000000000..43deedcf8
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_SESSION_H_
+#define _BCMFS_SYM_SESSION_H_
+
+#include <stdbool.h>
+#include <rte_crypto.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* BCMFS_SYM operation order mode enumerator */
+enum bcmfs_sym_chain_order {
+ BCMFS_SYM_CHAIN_ONLY_CIPHER,
+ BCMFS_SYM_CHAIN_ONLY_AUTH,
+ BCMFS_SYM_CHAIN_CIPHER_AUTH,
+ BCMFS_SYM_CHAIN_AUTH_CIPHER,
+ BCMFS_SYM_CHAIN_AEAD,
+ BCMFS_SYM_CHAIN_NOT_SUPPORTED
+};
+
+/* BCMFS_SYM crypto private session structure */
+struct bcmfs_sym_session {
+ enum bcmfs_sym_chain_order chain_order;
+
+ /* Cipher Parameters */
+ struct {
+ enum bcmfs_crypto_cipher_op direction;
+ /* cipher operation direction */
+ enum bcmfs_crypto_cipher_algorithm algo;
+ /* cipher algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ /* key data */
+ size_t length;
+ /* key length in bytes */
+ } key;
+
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+ } cipher;
+
+ /* Authentication Parameters */
+ struct {
+ enum bcmfs_crypto_auth_op operation;
+ /* auth operation generate or verify */
+ enum bcmfs_crypto_auth_algorithm algo;
+ /* auth algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ /* key data */
+ size_t length;
+ /* key length in bytes */
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+ } auth;
+
+ /* aead Parameters */
+ struct {
+ uint16_t aad_length;
+ } aead;
+ bool cipher_first;
+} __rte_cache_aligned;
+
+int
+bcmfs_process_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool);
+
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess);
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op);
+
+#endif /* _BCMFS_SYM_SESSION_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index d9a3d73e9..2e86c733e 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -12,5 +12,7 @@ sources = files(
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
'hw/bcmfs_rm_common.c',
- 'bcmfs_sym_pmd.c'
+ 'bcmfs_sym_pmd.c',
+ 'bcmfs_sym_capabilities.c',
+ 'bcmfs_sym_session.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 7/8] crypto/bcmfs: add crypto h/w module
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (5 preceding siblings ...)
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-09-28 20:00 ` Akhil Goyal
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
` (2 subsequent siblings)
9 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add the crypto h/w module to process crypto ops. A crypto op is first
processed by the sym_engine module, which builds the crypto request
before it is submitted to the h/w queues.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++++
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 +++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 26 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 40 +
drivers/crypto/bcmfs/meson.build | 4 +-
7 files changed, 1498 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
diff --git a/drivers/crypto/bcmfs/bcmfs_sym.c b/drivers/crypto/bcmfs/bcmfs_sym.c
new file mode 100644
index 000000000..8f9415b5e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym.c
@@ -0,0 +1,316 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+
+#include <rte_byteorder.h>
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_engine.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+
+/** Process cipher operation */
+static int
+process_crypto_cipher_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv, key;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+ fsattr_pa(&dst) = rte_pktmbuf_iova(mbuf_dst);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&key) = sess->cipher.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->cipher.key.length;
+
+ rc = bcmfs_crypto_build_cipher_req(req, sess->cipher.algo,
+ sess->cipher.direction, &src,
+ &dst, &key, &iv);
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process auth operation */
+static int
+process_crypto_auth_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, mac, key;
+
+ fsattr_sz(&src) = op->sym->auth.data.length;
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ op->sym->auth.data.offset);
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+
+ if (!sess->auth.operation) {
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+ } else {
+ fsattr_va(&dst) = op->sym->auth.digest.data;
+ fsattr_pa(&dst) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&dst) = sess->auth.digest_length;
+ }
+
+ fsattr_va(&key) = sess->auth.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->auth.key.length;
+
+ /* AES-GMAC uses AES-GCM-128 authenticator */
+ if (sess->auth.algo == BCMFS_CRYPTO_AUTH_AES_GMAC) {
+ struct fsattr iv;
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->auth.iv.offset);
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->auth.iv.length;
+
+ rc = bcmfs_crypto_build_aead_request(req,
+ BCMFS_CRYPTO_CIPHER_NONE,
+ 0,
+ BCMFS_CRYPTO_AUTH_AES_GMAC,
+ sess->auth.operation,
+ &src, NULL, NULL, &key,
+ &iv, NULL,
+ sess->auth.operation ?
+ (&dst) : &(mac),
+ 0);
+ } else {
+ rc = bcmfs_crypto_build_auth_req(req, sess->auth.algo,
+ sess->auth.operation,
+ &src,
+ (sess->auth.operation) ? (&dst) : NULL,
+ (sess->auth.operation) ? NULL : (&mac),
+ &key);
+ }
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process combined/chained mode operation */
+static int
+process_crypto_combined_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0, aad_size = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->cipher.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ aad_size = sym_op->auth.data.length - sym_op->cipher.data.length;
+
+ if (aad_size > 0) {
+ fsattr_sz(&aad) = aad_size;
+ fsattr_va(&aad) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->auth.data.offset);
+ fsattr_pa(&aad) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->auth.data.offset);
+ }
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->cipher.algo,
+ sess->cipher.direction,
+ sess->auth.algo,
+ sess->auth.operation,
+ &src, &dst, &cipher_key,
+ &auth_key, &iv,
+ (aad_size > 0) ? (&aad) : NULL,
+ &mac, sess->cipher_first);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process AEAD operation */
+static int
+process_crypto_aead_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+ enum bcmfs_crypto_cipher_op cipher_op;
+ enum bcmfs_crypto_auth_op auth_op;
+
+ if (sess->cipher.direction) {
+ auth_op = BCMFS_CRYPTO_AUTH_OP_VERIFY;
+ cipher_op = BCMFS_CRYPTO_CIPHER_OP_DECRYPT;
+ } else {
+ auth_op = BCMFS_CRYPTO_AUTH_OP_GENERATE;
+ cipher_op = BCMFS_CRYPTO_CIPHER_OP_ENCRYPT;
+ }
+
+ fsattr_sz(&src) = sym_op->aead.data.length;
+ fsattr_sz(&dst) = sym_op->aead.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->aead.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->aead.digest.data;
+ fsattr_pa(&mac) = op->sym->aead.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ fsattr_va(&aad) = op->sym->aead.aad.data;
+ fsattr_pa(&aad) = op->sym->aead.aad.phys_addr;
+ fsattr_sz(&aad) = sess->aead.aad_length;
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->cipher.algo,
+ cipher_op, sess->auth.algo,
+ auth_op, &src, &dst, &cipher_key,
+ &auth_key, &iv, &aad, &mac,
+ sess->cipher.direction ? 0 : 1);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process crypto operation for mbuf */
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ struct rte_mbuf *msrc, *mdst;
+ int rc = 0;
+
+ msrc = op->sym->m_src;
+ mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ rc = process_crypto_cipher_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ rc = process_crypto_auth_op(op, msrc, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ rc = process_crypto_combined_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ rc = process_crypto_aead_op(op, msrc, mdst, sess, req);
+ break;
+ default:
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ break;
+ }
+
+ return rc;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
index d94446d35..90280dba5 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_defs.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -15,6 +15,18 @@
#define BCMFS_MAX_IV_SIZE 16
#define BCMFS_MAX_DIGEST_SIZE 64
+struct bcmfs_sym_session;
+struct bcmfs_sym_request;
+
+/** Crypto Request processing successful. */
+#define BCMFS_SYM_RESPONSE_SUCCESS (0)
+/** Crypto Request processing protocol failure. */
+#define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1)
+/** Crypto Request processing completion failure. */
+#define BCMFS_SYM_RESPONSE_COMPL_ERROR (2)
+/** Crypto Request processing hash tag check error. */
+#define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3)
+
/** Symmetric Cipher Direction */
enum bcmfs_crypto_cipher_op {
/** Encrypt cipher operation */
@@ -167,4 +179,8 @@ enum bcmfs_sym_crypto_class {
BCMFS_CRYPTO_AEAD,
};
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
#endif /* _BCMFS_SYM_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
new file mode 100644
index 000000000..c17174fc0
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
@@ -0,0 +1,994 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_engine.h"
+
+enum spu2_cipher_type {
+ SPU2_CIPHER_TYPE_NONE = 0x0,
+ SPU2_CIPHER_TYPE_AES128 = 0x1,
+ SPU2_CIPHER_TYPE_AES192 = 0x2,
+ SPU2_CIPHER_TYPE_AES256 = 0x3,
+ SPU2_CIPHER_TYPE_DES = 0x4,
+ SPU2_CIPHER_TYPE_3DES = 0x5,
+ SPU2_CIPHER_TYPE_LAST
+};
+
+enum spu2_cipher_mode {
+ SPU2_CIPHER_MODE_ECB = 0x0,
+ SPU2_CIPHER_MODE_CBC = 0x1,
+ SPU2_CIPHER_MODE_CTR = 0x2,
+ SPU2_CIPHER_MODE_CFB = 0x3,
+ SPU2_CIPHER_MODE_OFB = 0x4,
+ SPU2_CIPHER_MODE_XTS = 0x5,
+ SPU2_CIPHER_MODE_CCM = 0x6,
+ SPU2_CIPHER_MODE_GCM = 0x7,
+ SPU2_CIPHER_MODE_LAST
+};
+
+enum spu2_hash_type {
+ SPU2_HASH_TYPE_NONE = 0x0,
+ SPU2_HASH_TYPE_AES128 = 0x1,
+ SPU2_HASH_TYPE_AES192 = 0x2,
+ SPU2_HASH_TYPE_AES256 = 0x3,
+ SPU2_HASH_TYPE_MD5 = 0x6,
+ SPU2_HASH_TYPE_SHA1 = 0x7,
+ SPU2_HASH_TYPE_SHA224 = 0x8,
+ SPU2_HASH_TYPE_SHA256 = 0x9,
+ SPU2_HASH_TYPE_SHA384 = 0xa,
+ SPU2_HASH_TYPE_SHA512 = 0xb,
+ SPU2_HASH_TYPE_SHA512_224 = 0xc,
+ SPU2_HASH_TYPE_SHA512_256 = 0xd,
+ SPU2_HASH_TYPE_SHA3_224 = 0xe,
+ SPU2_HASH_TYPE_SHA3_256 = 0xf,
+ SPU2_HASH_TYPE_SHA3_384 = 0x10,
+ SPU2_HASH_TYPE_SHA3_512 = 0x11,
+ SPU2_HASH_TYPE_LAST
+};
+
+enum spu2_hash_mode {
+ SPU2_HASH_MODE_CMAC = 0x0,
+ SPU2_HASH_MODE_CBC_MAC = 0x1,
+ SPU2_HASH_MODE_XCBC_MAC = 0x2,
+ SPU2_HASH_MODE_HMAC = 0x3,
+ SPU2_HASH_MODE_RABIN = 0x4,
+ SPU2_HASH_MODE_CCM = 0x5,
+ SPU2_HASH_MODE_GCM = 0x6,
+ SPU2_HASH_MODE_RESERVED = 0x7,
+ SPU2_HASH_MODE_LAST
+};
+
+enum spu2_proto_sel {
+ SPU2_PROTO_RESV = 0,
+ SPU2_MACSEC_SECTAG8_ECB = 1,
+ SPU2_MACSEC_SECTAG8_SCB = 2,
+ SPU2_MACSEC_SECTAG16 = 3,
+ SPU2_MACSEC_SECTAG16_8_XPN = 4,
+ SPU2_IPSEC = 5,
+ SPU2_IPSEC_ESN = 6,
+ SPU2_TLS_CIPHER = 7,
+ SPU2_TLS_AEAD = 8,
+ SPU2_DTLS_CIPHER = 9,
+ SPU2_DTLS_AEAD = 10
+};
+
+/* SPU2 response size */
+#define SPU2_STATUS_LEN 2
+
+/* Metadata settings in response */
+enum spu2_ret_md_opts {
+ SPU2_RET_NO_MD = 0, /* return no metadata */
+ SPU2_RET_FMD_OMD = 1, /* return both FMD and OMD */
+ SPU2_RET_FMD_ONLY = 2, /* return only FMD */
+ SPU2_RET_FMD_OMD_IV = 3, /* return FMD and OMD with just IVs */
+};
+
+/* FMD ctrl0 field masks */
+#define SPU2_CIPH_ENCRYPT_EN 0x1 /* 0: decrypt, 1: encrypt */
+#define SPU2_CIPH_TYPE_SHIFT 4
+#define SPU2_CIPH_MODE 0xF00 /* one of spu2_cipher_mode */
+#define SPU2_CIPH_MODE_SHIFT 8
+#define SPU2_CFB_MASK 0x7000 /* cipher feedback mask */
+#define SPU2_CFB_MASK_SHIFT 12
+#define SPU2_PROTO_SEL 0xF00000 /* MACsec, IPsec, TLS... */
+#define SPU2_PROTO_SEL_SHIFT 20
+#define SPU2_HASH_FIRST 0x1000000 /* 1: hash input is input pkt
+ * data
+ */
+#define SPU2_CHK_TAG 0x2000000 /* 1: check digest provided */
+#define SPU2_HASH_TYPE 0x1F0000000 /* one of spu2_hash_type */
+#define SPU2_HASH_TYPE_SHIFT 28
+#define SPU2_HASH_MODE 0xF000000000 /* one of spu2_hash_mode */
+#define SPU2_HASH_MODE_SHIFT 36
+#define SPU2_CIPH_PAD_EN 0x100000000000 /* 1: Add pad to end of payload for
+ * enc
+ */
+#define SPU2_CIPH_PAD 0xFF000000000000 /* cipher pad value */
+#define SPU2_CIPH_PAD_SHIFT 48
+
+/* FMD ctrl1 field masks */
+#define SPU2_TAG_LOC 0x1 /* 1: end of payload, 0: undef */
+#define SPU2_HAS_FR_DATA 0x2 /* 1: msg has frame data */
+#define SPU2_HAS_AAD1 0x4 /* 1: msg has AAD1 field */
+#define SPU2_HAS_NAAD 0x8 /* 1: msg has NAAD field */
+#define SPU2_HAS_AAD2 0x10 /* 1: msg has AAD2 field */
+#define SPU2_HAS_ESN 0x20 /* 1: msg has ESN field */
+#define SPU2_HASH_KEY_LEN 0xFF00 /* len of hash key in bytes.
+ * HMAC only.
+ */
+#define SPU2_HASH_KEY_LEN_SHIFT 8
+#define SPU2_CIPH_KEY_LEN 0xFF00000 /* len of cipher key in bytes */
+#define SPU2_CIPH_KEY_LEN_SHIFT 20
+#define SPU2_GENIV 0x10000000 /* 1: hw generates IV */
+#define SPU2_HASH_IV 0x20000000 /* 1: IV incl in hash */
+#define SPU2_RET_IV 0x40000000 /* 1: return IV in output msg
+ * b4 payload
+ */
+#define SPU2_RET_IV_LEN 0xF00000000 /* length in bytes of IV returned.
+ * 0 = 16 bytes
+ */
+#define SPU2_RET_IV_LEN_SHIFT 32
+#define SPU2_IV_OFFSET 0xF000000000 /* gen IV offset */
+#define SPU2_IV_OFFSET_SHIFT 36
+#define SPU2_IV_LEN 0x1F0000000000 /* length of input IV in bytes */
+#define SPU2_IV_LEN_SHIFT 40
+#define SPU2_HASH_TAG_LEN 0x7F000000000000 /* hash tag length in bytes */
+#define SPU2_HASH_TAG_LEN_SHIFT 48
+#define SPU2_RETURN_MD 0x300000000000000 /* return metadata */
+#define SPU2_RETURN_MD_SHIFT 56
+#define SPU2_RETURN_FD 0x400000000000000
+#define SPU2_RETURN_AAD1 0x800000000000000
+#define SPU2_RETURN_NAAD 0x1000000000000000
+#define SPU2_RETURN_AAD2 0x2000000000000000
+#define SPU2_RETURN_PAY 0x4000000000000000 /* return payload */
+
+/* FMD ctrl2 field masks */
+#define SPU2_AAD1_OFFSET 0xFFF /* byte offset of AAD1 field */
+#define SPU2_AAD1_LEN 0xFF000 /* length of AAD1 in bytes */
+#define SPU2_AAD1_LEN_SHIFT 12
+#define SPU2_AAD2_OFFSET 0xFFF00000 /* byte offset of AAD2 field */
+#define SPU2_AAD2_OFFSET_SHIFT 20
+#define SPU2_PL_OFFSET 0xFFFFFFFF00000000 /* payload offset from AAD2 */
+#define SPU2_PL_OFFSET_SHIFT 32
+
+/* FMD ctrl3 field masks */
+#define SPU2_PL_LEN 0xFFFFFFFF /* payload length in bytes */
+#define SPU2_TLS_LEN 0xFFFF00000000 /* TLS encrypt: cipher len
+ * TLS decrypt: compressed len
+ */
+#define SPU2_TLS_LEN_SHIFT 32
+
+/*
+ * Max value that can be represented in the Payload Length field of the
+ * ctrl3 word of FMD.
+ */
+#define SPU2_MAX_PAYLOAD SPU2_PL_LEN
+
+#define SPU2_VAL_NONE 0
+
+/* CCM B_0 field definitions, common for SPU-M and SPU2 */
+#define CCM_B0_ADATA 0x40
+#define CCM_B0_ADATA_SHIFT 6
+#define CCM_B0_M_PRIME 0x38
+#define CCM_B0_M_PRIME_SHIFT 3
+#define CCM_B0_L_PRIME 0x07
+#define CCM_B0_L_PRIME_SHIFT 0
+#define CCM_ESP_L_VALUE 4
+
+static int
+spu2_cipher_type_xlate(enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ enum spu2_cipher_type *spu2_type,
+ struct fsattr *key)
+{
+ int ret = 0;
+ int key_size = fsattr_sz(key);
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_XTS)
+ key_size = key_size / 2;
+
+ switch (key_size) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static int
+spu2_hash_xlate(enum bcmfs_crypto_auth_algorithm auth_alg,
+ struct fsattr *key,
+ enum spu2_hash_type *spu2_type,
+ enum spu2_hash_mode *spu2_mode)
+{
+ *spu2_mode = 0;
+
+ switch (auth_alg) {
+ case BCMFS_CRYPTO_AUTH_NONE:
+ *spu2_type = SPU2_HASH_TYPE_NONE;
+ break;
+ case BCMFS_CRYPTO_AUTH_MD5:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ break;
+ case BCMFS_CRYPTO_AUTH_MD5_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA1:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA1_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA224:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA256:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA384:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA512:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_224:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_256:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_384:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_512:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ break;
+ case BCMFS_CRYPTO_AUTH_SHA3_512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_XCBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_XCBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CMAC:
+ *spu2_mode = SPU2_HASH_MODE_CMAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_GMAC:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_CBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_GCM:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ break;
+ case BCMFS_CRYPTO_AUTH_AES_CCM:
+ *spu2_mode = SPU2_HASH_MODE_CCM;
+ break;
+ }
+
+ return 0;
+}
+
+static int
+spu2_cipher_xlate(enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ struct fsattr *key,
+ enum spu2_cipher_type *spu2_type,
+ enum spu2_cipher_mode *spu2_mode)
+{
+ int ret = 0;
+
+ switch (cipher_alg) {
+ case BCMFS_CRYPTO_CIPHER_NONE:
+ *spu2_type = SPU2_CIPHER_TYPE_NONE;
+ break;
+ case BCMFS_CRYPTO_CIPHER_DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_3DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_3DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CTR:
+ *spu2_mode = SPU2_CIPHER_MODE_CTR;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_CCM:
+ *spu2_mode = SPU2_CIPHER_MODE_CCM;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_GCM:
+ *spu2_mode = SPU2_CIPHER_MODE_GCM;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_XTS:
+ *spu2_mode = SPU2_CIPHER_MODE_XTS;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case BCMFS_CRYPTO_CIPHER_AES_OFB:
+ *spu2_mode = SPU2_CIPHER_MODE_OFB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ }
+
+ return ret;
+}
+
+static void
+spu2_fmd_ctrl0_write(struct spu2_fmd *fmd,
+ bool is_inbound, bool auth_first,
+ enum spu2_proto_sel protocol,
+ enum spu2_cipher_type cipher_type,
+ enum spu2_cipher_mode cipher_mode,
+ enum spu2_hash_type auth_type,
+ enum spu2_hash_mode auth_mode)
+{
+ uint64_t ctrl0 = 0;
+
+ if (cipher_type != SPU2_CIPHER_TYPE_NONE && !is_inbound)
+ ctrl0 |= SPU2_CIPH_ENCRYPT_EN;
+
+ ctrl0 |= ((uint64_t)cipher_type << SPU2_CIPH_TYPE_SHIFT) |
+ ((uint64_t)cipher_mode << SPU2_CIPH_MODE_SHIFT);
+
+ if (protocol != SPU2_PROTO_RESV)
+ ctrl0 |= (uint64_t)protocol << SPU2_PROTO_SEL_SHIFT;
+
+ if (auth_first)
+ ctrl0 |= SPU2_HASH_FIRST;
+
+ if (is_inbound && auth_type != SPU2_HASH_TYPE_NONE)
+ ctrl0 |= SPU2_CHK_TAG;
+
+ ctrl0 |= (((uint64_t)auth_type << SPU2_HASH_TYPE_SHIFT) |
+ ((uint64_t)auth_mode << SPU2_HASH_MODE_SHIFT));
+
+ fmd->ctrl0 = ctrl0;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl0:", &fmd->ctrl0, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl1_write(struct spu2_fmd *fmd, bool is_inbound,
+ uint64_t assoc_size, uint64_t auth_key_len,
+ uint64_t cipher_key_len, bool gen_iv, bool hash_iv,
+ bool return_iv, uint64_t ret_iv_len,
+ uint64_t ret_iv_offset, uint64_t cipher_iv_len,
+ uint64_t digest_size, bool return_payload, bool return_md)
+{
+ uint64_t ctrl1 = 0;
+
+ if (is_inbound && digest_size != 0)
+ ctrl1 |= SPU2_TAG_LOC;
+
+ if (assoc_size != 0)
+ ctrl1 |= SPU2_HAS_AAD2;
+
+ if (auth_key_len != 0)
+ ctrl1 |= ((auth_key_len << SPU2_HASH_KEY_LEN_SHIFT) &
+ SPU2_HASH_KEY_LEN);
+
+ if (cipher_key_len != 0)
+ ctrl1 |= ((cipher_key_len << SPU2_CIPH_KEY_LEN_SHIFT) &
+ SPU2_CIPH_KEY_LEN);
+
+ if (gen_iv)
+ ctrl1 |= SPU2_GENIV;
+
+ if (hash_iv)
+ ctrl1 |= SPU2_HASH_IV;
+
+ if (return_iv) {
+ ctrl1 |= SPU2_RET_IV;
+ ctrl1 |= ret_iv_len << SPU2_RET_IV_LEN_SHIFT;
+ ctrl1 |= ret_iv_offset << SPU2_IV_OFFSET_SHIFT;
+ }
+
+ ctrl1 |= ((cipher_iv_len << SPU2_IV_LEN_SHIFT) & SPU2_IV_LEN);
+
+ if (digest_size != 0) {
+ ctrl1 |= ((digest_size << SPU2_HASH_TAG_LEN_SHIFT) &
+ SPU2_HASH_TAG_LEN);
+ }
+
+ /*
+ * Ask for the output packet to include FMD, but we do not need the
+ * keys and IVs back in OMD.
+ */
+ if (return_md)
+ ctrl1 |= ((uint64_t)SPU2_RET_FMD_ONLY << SPU2_RETURN_MD_SHIFT);
+ else
+ ctrl1 |= ((uint64_t)SPU2_RET_NO_MD << SPU2_RETURN_MD_SHIFT);
+
+ /* Crypto API does not get assoc data back. So no need for AAD2. */
+
+ if (return_payload)
+ ctrl1 |= SPU2_RETURN_PAY;
+
+ fmd->ctrl1 = ctrl1;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl1:", &fmd->ctrl1, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl2_write(struct spu2_fmd *fmd, uint64_t cipher_offset,
+ uint64_t auth_key_len __rte_unused,
+ uint64_t auth_iv_len __rte_unused,
+ uint64_t cipher_key_len __rte_unused,
+ uint64_t cipher_iv_len __rte_unused)
+{
+ uint64_t aad1_offset;
+ uint64_t aad2_offset;
+ uint16_t aad1_len = 0;
+ uint64_t payload_offset;
+
+ /* AAD1 offset is from the start of FD; FD length is always 0. */
+ aad1_offset = 0;
+
+ aad2_offset = aad1_offset;
+ payload_offset = cipher_offset;
+ fmd->ctrl2 = aad1_offset |
+ (aad1_len << SPU2_AAD1_LEN_SHIFT) |
+ (aad2_offset << SPU2_AAD2_OFFSET_SHIFT) |
+ (payload_offset << SPU2_PL_OFFSET_SHIFT);
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl2:", &fmd->ctrl2, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl3_write(struct spu2_fmd *fmd, uint64_t payload_len)
+{
+ fmd->ctrl3 = payload_len & SPU2_PL_LEN;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl3:", &fmd->ctrl3, sizeof(uint64_t));
+#endif
+}
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_auth_algorithm a_alg,
+ enum bcmfs_crypto_auth_op auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *auth_key)
+{
+ int ret;
+ uint64_t dst_size;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ enum spu2_hash_mode spu2_auth_mode;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ? fsattr_sz(auth_key) : 0;
+ bool is_inbound = (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ /* at least one of dst or mac must be non-NULL */
+ if (dst == NULL && mac == NULL)
+ return -EINVAL;
+
+ dst_size = (auth_op == BCMFS_CRYPTO_AUTH_OP_GENERATE) ?
+ fsattr_sz(dst) : fsattr_sz(mac);
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(a_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, SPU2_VAL_NONE,
+ auth_ksize, SPU2_VAL_NONE, false,
+ false, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE,
+ dst_size, SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key, fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ /*
+ * For an authentication verify operation, feed the input MAC data
+ * to the SPU2 engine.
+ */
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY && mac != NULL) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(mac);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(mac);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains actual output from SPU2 and
+ * the status packet, so the dsts_count is always 2 below.
+ */
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_GENERATE) {
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+ } else {
+ /*
+ * For an authentication verify operation, give the SPU2 engine a
+ * dummy location for the generated hash, since SPU2 computes the
+ * hash even when it is only verifying.
+ */
+ sreq->msgs.dsts_addr[0] = sreq->dptr;
+ sreq->msgs.dsts_len[0] = fsattr_sz(mac);
+ }
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_cipher_algorithm calgo,
+ enum bcmfs_crypto_cipher_op cipher_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *iv)
+{
+ int ret = 0;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ unsigned int xts_keylen;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ bool is_inbound = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_DECRYPT);
+
+ if (src == NULL || dst == NULL || iv == NULL)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(calgo, cipher_key,
+ &spu2_ciph_type, &spu2_ciph_mode);
+ if (ret)
+ return -EINVAL;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, spu2_ciph_type, spu2_ciph_mode,
+ SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl1_write(fmd, SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(cipher_key), false, false,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(iv), SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE);
+
+ /* Nothing for FMD2 */
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ if (calgo == BCMFS_CRYPTO_CIPHER_AES_XTS) {
+ xts_keylen = fsattr_sz(cipher_key) / 2;
+ memcpy(sreq->cipher_key,
+ (uint8_t *)fsattr_va(cipher_key) + xts_keylen,
+ xts_keylen);
+ memcpy(sreq->cipher_key + xts_keylen,
+ fsattr_va(cipher_key), xts_keylen);
+ } else {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+ }
+
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv,
+ fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(iv);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains actual output from SPU2 and
+ * the status packet, so the dsts_count is always 2 below.
+ */
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+static void
+bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf,
+ unsigned int *ivlen, bool is_esp)
+{
+ int L; /* size of length field, in bytes */
+
+ /*
+ * In RFC4309 mode, L is fixed at 4 bytes; otherwise, IV from
+ * testmgr contains (L-1) in bottom 3 bits of first byte,
+ * per RFC 3610.
+ */
+ if (is_esp)
+ L = CCM_ESP_L_VALUE;
+ else
+ L = ((ivbuf[0] & CCM_B0_L_PRIME) >>
+ CCM_B0_L_PRIME_SHIFT) + 1;
+
+ /* SPU2 does not want these length bytes nor the first byte... */
+ *ivlen -= (1 + L);
+ memmove(ivbuf, &ivbuf[1], *ivlen);
+}
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
+ enum bcmfs_crypto_cipher_algorithm cipher_alg,
+ enum bcmfs_crypto_cipher_op cipher_op,
+ enum bcmfs_crypto_auth_algorithm auth_alg,
+ enum bcmfs_crypto_auth_op auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first)
+{
+ int ret = 0;
+ int src_index = 0;
+ int dst_index = 0;
+ bool auth_first = 0;
+ struct spu2_fmd *fmd;
+ unsigned int payload_len;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ?
+ fsattr_sz(auth_key) : 0;
+ uint64_t cipher_ksize = (cipher_key != NULL) ?
+ fsattr_sz(cipher_key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ bool is_inbound = (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(auth_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(cipher_alg, cipher_key, &spu2_ciph_type,
+ &spu2_ciph_mode);
+ if (ret) {
+ BCMFS_DP_LOG(ERR, "cipher xlate error");
+ return -EINVAL;
+ }
+
+ auth_first = cipher_first ? 0 : 1;
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_GCM) {
+ spu2_auth_type = (enum spu2_hash_type)spu2_ciph_type;
+ /*
+ * SPU2 needs 12 bytes of IV in total,
+ * i.e., an 8-byte IV (random number) plus a 4-byte salt.
+ */
+ if (fsattr_sz(iv) > 12)
+ iv_size = 12;
+
+ /*
+ * On SPU2, AES-GCM runs cipher first on encrypt and auth first
+ * on decrypt.
+ */
+
+ auth_first = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ 0 : 1;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_CCM) {
+ spu2_auth_type = (enum spu2_hash_type)spu2_ciph_type;
+ if (iv != NULL) {
+ memcpy(sreq->iv, fsattr_va(iv),
+ fsattr_sz(iv));
+ iv_size = fsattr_sz(iv);
+ bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false);
+ }
+
+ /* CCM is the opposite: auth first on encrypt */
+ auth_first = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_ENCRYPT) ?
+ 1 : 0;
+ }
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, auth_ksize,
+ cipher_ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, auth_ksize, 0,
+ cipher_ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key,
+ fsattr_va(auth_key), fsattr_sz(auth_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key),
+ fsattr_sz(cipher_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY) {
+ /*
+ * On decryption the SPU2 engine still generates
+ * digest data, but the application does not need
+ * the digest as such. So program a dummy location
+ * to capture the digest data.
+ */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ sreq->dptr;
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ fsattr_pa(digest);
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
new file mode 100644
index 000000000..29cfb4dc2
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_ENGINE_H_
+#define _BCMFS_SYM_ENGINE_H_
+
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* structure to hold an element's attributes */
+struct fsattr {
+ void *va;
+ uint64_t pa;
+ uint64_t sz;
+};
+
+#define fsattr_va(__ptr) ((__ptr)->va)
+#define fsattr_pa(__ptr) ((__ptr)->pa)
+#define fsattr_sz(__ptr) ((__ptr)->sz)
+
+/*
+ * Macros for Crypto h/w constraints
+ */
+
+#define BCMFS_CRYPTO_AES_BLOCK_SIZE 16
+#define BCMFS_CRYPTO_AES_MIN_KEY_SIZE 16
+#define BCMFS_CRYPTO_AES_MAX_KEY_SIZE 32
+
+#define BCMFS_CRYPTO_DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_DES_KEY_SIZE 8
+
+#define BCMFS_CRYPTO_3DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_3DES_KEY_SIZE (3 * 8)
+
+#define BCMFS_CRYPTO_MD5_DIGEST_SIZE 16
+#define BCMFS_CRYPTO_MD5_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA1_DIGEST_SIZE 20
+#define BCMFS_CRYPTO_SHA1_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA224_DIGEST_SIZE 28
+#define BCMFS_CRYPTO_SHA224_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA256_DIGEST_SIZE 32
+#define BCMFS_CRYPTO_SHA256_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA384_DIGEST_SIZE 48
+#define BCMFS_CRYPTO_SHA384_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA512_DIGEST_SIZE 64
+#define BCMFS_CRYPTO_SHA512_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE (224 / 8)
+#define BCMFS_CRYPTO_SHA3_224_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE (256 / 8)
+#define BCMFS_CRYPTO_SHA3_256_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE (384 / 8)
+#define BCMFS_CRYPTO_SHA3_384_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE (512 / 8)
+#define BCMFS_CRYPTO_SHA3_512_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE)
+
+enum bcmfs_crypto_aes_cipher_key {
+ BCMFS_CRYPTO_AES128 = 16,
+ BCMFS_CRYPTO_AES192 = 24,
+ BCMFS_CRYPTO_AES256 = 32,
+};
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_cipher_algorithm c_algo,
+ enum bcmfs_crypto_cipher_op cop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv);
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_auth_algorithm a_algo,
+ enum bcmfs_crypto_auth_op aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *key);
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *req,
+ enum bcmfs_crypto_cipher_algorithm c_algo,
+ enum bcmfs_crypto_cipher_op cop,
+ enum bcmfs_crypto_auth_algorithm a_algo,
+ enum bcmfs_crypto_auth_op aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first);
+
+#endif /* _BCMFS_SYM_ENGINE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 381ca8ea4..568797b4f 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -132,6 +132,12 @@ static void
spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
{
memset(sr, 0, sizeof(*sr));
+ sr->fptr = iova;
+ sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key);
+ sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key);
+ sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv);
+ sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest);
+ sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp);
}
static void
@@ -244,6 +250,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
uint16_t nb_ops)
{
int i, j;
+ int retval;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
struct bcmfs_sym_session *sess;
@@ -273,6 +280,11 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
/* save context */
qp->infl_msgs[i] = &sreq->msgs;
qp->infl_msgs[i]->ctx = (void *)sreq;
+
+ /* pre-process the request for crypto h/w acceleration */
+ retval = bcmfs_process_sym_crypto_op(ops[i], sess, sreq);
+ if (unlikely(retval < 0))
+ goto enqueue_err;
}
/* Send burst request to hw QP */
enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
@@ -289,6 +301,17 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
return enq;
}
+static void bcmfs_sym_set_request_status(struct rte_crypto_op *op,
+ struct bcmfs_sym_request *out)
+{
+ if (*out->resp == BCMFS_SYM_RESPONSE_SUCCESS)
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ else if (*out->resp == BCMFS_SYM_RESPONSE_HASH_TAG_ERROR)
+ op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+ else
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static uint16_t
bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
struct rte_crypto_op **ops,
@@ -308,6 +331,9 @@ bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
for (i = 0; i < deq; i++) {
sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+ /* set the status based on the response from the crypto h/w */
+ bcmfs_sym_set_request_status(sreq->op, sreq);
+
ops[pkts++] = sreq->op;
rte_mempool_put(qp->sr_mp, sreq);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
index 0f0b051f1..e53c50adc 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_req.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -6,13 +6,53 @@
#ifndef _BCMFS_SYM_REQ_H_
#define _BCMFS_SYM_REQ_H_
+#include <rte_cryptodev.h>
+
#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+
+/* Fixed SPU2 Metadata */
+struct spu2_fmd {
+ uint64_t ctrl0;
+ uint64_t ctrl1;
+ uint64_t ctrl2;
+ uint64_t ctrl3;
+};
/*
* This structure holds the supporting data required to process a
* rte_crypto_op
*/
struct bcmfs_sym_request {
+ /* spu2 engine related data */
+ struct spu2_fmd fmd;
+ /* cipher key */
+ uint8_t cipher_key[BCMFS_MAX_KEY_SIZE];
+ /* auth key */
+ uint8_t auth_key[BCMFS_MAX_KEY_SIZE];
+ /* iv */
+ uint8_t iv[BCMFS_MAX_IV_SIZE];
+ /* digest data output from crypto h/w */
+ uint8_t digest[BCMFS_MAX_DIGEST_SIZE];
+ /* 2-Bytes response from crypto h/w */
+ uint8_t resp[2];
+ /*
+ * IOVAs for the members above,
+ * in order from the top
+ */
+ /* iova for fmd */
+ rte_iova_t fptr;
+ /* iova for cipher key */
+ rte_iova_t cptr;
+ /* iova for auth key */
+ rte_iova_t aptr;
+ /* iova for iv */
+ rte_iova_t iptr;
+ /* iova for digest */
+ rte_iova_t dptr;
+ /* iova for response */
+ rte_iova_t rptr;
+
/* bcmfs qp message for h/w queues to process */
struct bcmfs_qp_message msgs;
/* crypto op */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 2e86c733e..7aa0f05db 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -14,5 +14,7 @@ sources = files(
'hw/bcmfs_rm_common.c',
'bcmfs_sym_pmd.c',
'bcmfs_sym_capabilities.c',
- 'bcmfs_sym_session.c'
+ 'bcmfs_sym_session.c',
+ 'bcmfs_sym.c',
+ 'bcmfs_sym_engine.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v2 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (6 preceding siblings ...)
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
@ 2020-08-13 17:23 ` Vikas Gupta
2020-09-28 20:01 ` Akhil Goyal
2020-09-28 20:06 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
9 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-08-13 17:23 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add global test suite for bcmfs crypto pmd
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
app/test/test_cryptodev.c | 17 +++++++++++++++++
app/test/test_cryptodev.h | 1 +
2 files changed, 18 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..9157115ab 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -13041,6 +13041,22 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_cryptodev_bcmfs(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_BCMFS_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "BCMFS PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_BCMFS is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
+ return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
@@ -13063,3 +13079,4 @@ REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
+REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..c58126368 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -70,6 +70,7 @@
#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
/**
* Write (spread) data from buffer to mbuf data
--
2.17.1
* Re: [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
@ 2020-09-28 18:49 ` Akhil Goyal
2020-09-29 10:52 ` Vikas Gupta
0 siblings, 1 reply; 75+ messages in thread
From: Akhil Goyal @ 2020-09-28 18:49 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash, Raveendra Padasalagi
Hi Vikas,
> +BCMFS crypto PMD depend upon the devices present in the path
> +/sys/bus/platform/devices/fs<version>/<dev_name> on the platform.
> +Each cryptodev PMD instance can be attached to the nodes present
> +in the mentioned path.
It would be good if you could mention details about the SDKs which need
to be installed, and any kernel dependencies, if any.
Which rootfs is the mentioned device path from? The documentation looks incomplete.
> diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
> index a67ed5a28..5d7e028bd 100644
> --- a/doc/guides/cryptodevs/index.rst
> +++ b/doc/guides/cryptodevs/index.rst
> @@ -29,3 +29,4 @@ Crypto Device Drivers
> qat
> virtio
> zuc
> + bcmfs
It is better to maintain an alphabetical order.
> diff --git a/drivers/crypto/bcmfs/bcmfs_device.c
> b/drivers/crypto/bcmfs/bcmfs_device.c
> new file mode 100644
> index 000000000..47c776de6
> --- /dev/null
> +++ b/drivers/crypto/bcmfs/bcmfs_device.c
> @@ -0,0 +1,256 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2020 Broadcom.
> + * All rights reserved.
> + */
> +
> +#include <dirent.h>
> +#include <stdbool.h>
> +#include <sys/queue.h>
> +
> +#include <rte_string_fns.h>
> +
> +#include "bcmfs_device.h"
> +#include "bcmfs_logs.h"
> +
> +struct bcmfs_device_attr {
> + const char name[BCMFS_MAX_PATH_LEN];
> + const char suffix[BCMFS_DEV_NAME_LEN];
> + const enum bcmfs_device_type type;
> + const uint32_t offset;
> + const uint32_t version;
> +};
> +
> +/* BCMFS supported devices */
> +static struct bcmfs_device_attr dev_table[] = {
> + {
> + .name = "fs4",
> + .suffix = "crypto_mbox",
> + .type = BCMFS_SYM_FS4,
> + .offset = 0,
> + .version = 0x76303031
> + },
> + {
> + .name = "fs5",
> + .suffix = "mbox",
> + .type = BCMFS_SYM_FS5,
> + .offset = 0,
> + .version = 0x76303032
> + },
> + {
> + /* sentinel */
> + }
> +};
> +
> +TAILQ_HEAD(fsdev_list, bcmfs_device);
> +static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
> +
> +static struct bcmfs_device *
> +fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
> + char *dirpath,
> + char *devname,
> + enum bcmfs_device_type dev_type __rte_unused)
> +{
> + struct bcmfs_device *fsdev;
> +
> + fsdev = calloc(1, sizeof(*fsdev));
Can we use rte_calloc
> + if (!fsdev)
> + return NULL;
> +
> + if (strlen(dirpath) > sizeof(fsdev->dirname)) {
> + BCMFS_LOG(ERR, "dir path name is too long");
> + goto cleanup;
> + }
> +
> + if (strlen(devname) > sizeof(fsdev->name)) {
> + BCMFS_LOG(ERR, "devname is too long");
> + goto cleanup;
> + }
> +
> + strcpy(fsdev->dirname, dirpath);
> + strcpy(fsdev->name, devname);
> +
> + fsdev->vdev = vdev;
> +
> + TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
> +
> + return fsdev;
> +
> +cleanup:
> + free(fsdev);
> +
> + return NULL;
> +}
> +
<snip>
> diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
> index a2423507a..8e06d0533 100644
> --- a/drivers/crypto/meson.build
> +++ b/drivers/crypto/meson.build
> @@ -23,7 +23,8 @@ drivers = ['aesni_gcm',
> 'scheduler',
> 'snow3g',
> 'virtio',
> - 'zuc']
> + 'zuc',
> + 'bcmfs']
Please maintain an alphabetical order.
* Re: [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support Vikas Gupta
@ 2020-09-28 19:00 ` Akhil Goyal
2020-09-29 11:01 ` Vikas Gupta
0 siblings, 1 reply; 75+ messages in thread
From: Akhil Goyal @ 2020-09-28 19:00 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash, Raveendra Padasalagi
Hi Vikas,
> Subject: [PATCH v2 2/8] crypto/bcmfs: add vfio support
>
> Add vfio support for device.
>
> Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
> Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
> drivers/crypto/bcmfs/bcmfs_device.c | 5 ++
> drivers/crypto/bcmfs/bcmfs_device.h | 6 ++
> drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++++++++++++++++++++++++++++
> drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +++++
> drivers/crypto/bcmfs/meson.build | 3 +-
> 5 files changed, 137 insertions(+), 1 deletion(-)
> create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
> create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
>
> diff --git a/drivers/crypto/bcmfs/bcmfs_device.c
> b/drivers/crypto/bcmfs/bcmfs_device.c
> index 47c776de6..3b5cc9e98 100644
> --- a/drivers/crypto/bcmfs/bcmfs_device.c
> +++ b/drivers/crypto/bcmfs/bcmfs_device.c
> @@ -11,6 +11,7 @@
>
> #include "bcmfs_device.h"
> #include "bcmfs_logs.h"
> +#include "bcmfs_vfio.h"
>
> struct bcmfs_device_attr {
> const char name[BCMFS_MAX_PATH_LEN];
> @@ -71,6 +72,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
>
> fsdev->vdev = vdev;
>
> + /* attach to VFIO */
> + if (bcmfs_attach_vfio(fsdev))
> + goto cleanup;
> +
> TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
>
> return fsdev;
> diff --git a/drivers/crypto/bcmfs/bcmfs_device.h
> b/drivers/crypto/bcmfs/bcmfs_device.h
> index cc64a8df2..c41cc0031 100644
> --- a/drivers/crypto/bcmfs/bcmfs_device.h
> +++ b/drivers/crypto/bcmfs/bcmfs_device.h
> @@ -35,6 +35,12 @@ struct bcmfs_device {
> char name[BCMFS_DEV_NAME_LEN];
> /* Parent vdev */
> struct rte_vdev_device *vdev;
> + /* vfio handle */
> + int vfio_dev_fd;
> + /* mapped address */
> + uint8_t *mmap_addr;
> + /* mapped size */
> + uint32_t mmap_size;
> };
>
> #endif /* _BCMFS_DEV_H_ */
> diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c
> b/drivers/crypto/bcmfs/bcmfs_vfio.c
> new file mode 100644
> index 000000000..dc2def580
> --- /dev/null
> +++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
> @@ -0,0 +1,107 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2020 Broadcom.
> + * All rights reserved.
> + */
> +
> +#include <errno.h>
> +#include <sys/mman.h>
> +#include <sys/ioctl.h>
> +
> +#include <rte_vfio.h>
> +
> +#include "bcmfs_device.h"
> +#include "bcmfs_logs.h"
> +#include "bcmfs_vfio.h"
> +
> +#ifdef VFIO_PRESENT
I cannot see the VFIO_PRESENT flag defined in this patch.
Hence the code below is dead code, and the patch
title is not justified, as it claims to add support for VFIO.
> +static int
> +vfio_map_dev_obj(const char *path, const char *dev_obj,
> + uint32_t *size, void **addr, int *dev_fd)
Regards,
Akhil
* Re: [dpdk-dev] [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
@ 2020-09-28 19:29 ` Akhil Goyal
2020-09-29 11:04 ` Vikas Gupta
0 siblings, 1 reply; 75+ messages in thread
From: Akhil Goyal @ 2020-09-28 19:29 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash, Raveendra Padasalagi
> Subject: [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management
>
> diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h
> b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
> new file mode 100644
> index 000000000..ecb0c09ba
> --- /dev/null
> +++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
> @@ -0,0 +1,38 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Broadcom
> + * All rights reserved.
> + */
> +
> +#ifndef _BCMFS_RM_DEFS_H_
> +#define _BCMFS_RM_DEFS_H_
The file name is bcmfs_hw_defs.h, but the include guard reads _BCMFS_RM_DEFS_H_.
Check the other headers as well.
* Re: [dpdk-dev] [PATCH v2 6/8] crypto/bcmfs: add session handling and capabilities
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
@ 2020-09-28 19:46 ` Akhil Goyal
2020-09-29 11:12 ` Vikas Gupta
0 siblings, 1 reply; 75+ messages in thread
From: Akhil Goyal @ 2020-09-28 19:46 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash, Raveendra Padasalagi
Hi Vikas,
> diff --git a/doc/guides/cryptodevs/features/bcmfs.ini
> b/doc/guides/cryptodevs/features/bcmfs.ini
> new file mode 100644
> index 000000000..82d2c639d
> --- /dev/null
> +++ b/doc/guides/cryptodevs/features/bcmfs.ini
> @@ -0,0 +1,56 @@
> +;
> +; Supported features of the 'bcmfs' crypto driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Symmetric crypto = Y
> +Sym operation chaining = Y
> +HW Accelerated = Y
> +Protocol offload = Y
> +In Place SGL = Y
> +
> +;
> +; Supported crypto algorithms of the 'bcmfs' crypto driver.
> +;
> +[Cipher]
> +AES CBC (128) = Y
> +AES CBC (192) = Y
> +AES CBC (256) = Y
> +AES CTR (128) = Y
> +AES CTR (192) = Y
> +AES CTR (256) = Y
> +AES XTS (128) = Y
> +AES XTS (256) = Y
> +3DES CBC = Y
> +DES CBC = Y
> +;
> +; Supported authentication algorithms of the 'bcmfs' crypto driver.
> +;
> +[Auth]
> +MD5 HMAC = Y
> +SHA1 = Y
> +SHA1 HMAC = Y
> +SHA224 = Y
> +SHA224 HMAC = Y
> +SHA256 = Y
> +SHA256 HMAC = Y
> +SHA384 = Y
> +SHA384 HMAC = Y
> +SHA512 = Y
> +SHA512 HMAC = Y
> +AES GMAC = Y
> +AES CMAC (128) = Y
> +AES CBC = Y
AES CBC is not an auth algo;
you should use AES CBC MAC.
Please use the same notation as in default.ini.
Check all the names.
> +AES XCBC = Y
> +
> +;
> +; Supported AEAD algorithms of the 'bcmfs' crypto driver.
> +;
> +[AEAD]
> +AES GCM (128) = Y
> +AES GCM (192) = Y
> +AES GCM (256) = Y
> +AES CCM (128) = Y
> +AES CCM (192) = Y
> +AES CCM (256) = Y
// snip//
> + {
> + /* SHA1 HMAC */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
> + .block_size = 64,
> + .key_size = {
> + .min = 1,
> + .max = 64,
> + .increment = 0
Increment should be 1 for all HMAC cases.
> + },
> + .digest_size = {
> + .min = 20,
> + .max = 20,
> + .increment = 0
> + },
> + .aad_size = { 0 }
> + }, }
> + }, }
> + },
//snipp//
> + {
> + /* AES CMAC */
> + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + {.sym = {
> + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> + {.auth = {
> + .algo = RTE_CRYPTO_AUTH_AES_CMAC,
> + .block_size = 16,
> + .key_size = {
> + .min = 1,
> + .max = 16,
> + .increment = 0
Do you only support key sizes of 1 and 16? I see increment = 0 in many cases.
> + },
> + .digest_size = {
> + .min = 16,
> + .max = 16,
> + .increment = 0
> + },
> + .aad_size = { 0 }
> + }, }
> + }, }
> + },
> + {
//snip//
> +
> +const struct rte_cryptodev_capabilities *
> +bcmfs_sym_get_capabilities(void)
> +{
> + return bcmfs_sym_capabilities;
> +}
> diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
> b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
> new file mode 100644
> index 000000000..3ff61b7d2
> --- /dev/null
> +++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Broadcom
> + * All rights reserved.
> + */
> +
> +#ifndef _BCMFS_SYM_CAPABILITIES_H_
> +#define _BCMFS_SYM_CAPABILITIES_H_
> +
> +/*
> + * Get capabilities list for the device
> + *
> + */
> +const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void);
> +
> +#endif /* _BCMFS_SYM_CAPABILITIES_H__ */
> +
> diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h
> b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
> new file mode 100644
> index 000000000..d94446d35
> --- /dev/null
> +++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
> @@ -0,0 +1,170 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Broadcom
> + * All rights reserved.
> + */
> +
> +#ifndef _BCMFS_SYM_DEFS_H_
> +#define _BCMFS_SYM_DEFS_H_
> +
> +/*
> + * Max block size of hash algorithm
> + * currently SHA3 supports max block size
> + * of 144 bytes
> + */
> +#define BCMFS_MAX_KEY_SIZE 144
> +#define BCMFS_MAX_IV_SIZE 16
> +#define BCMFS_MAX_DIGEST_SIZE 64
> +
> +/** Symmetric Cipher Direction */
> +enum bcmfs_crypto_cipher_op {
> + /** Encrypt cipher operation */
> + BCMFS_CRYPTO_CIPHER_OP_ENCRYPT,
> +
> + /** Decrypt cipher operation */
> + BCMFS_CRYPTO_CIPHER_OP_DECRYPT,
> +};
> +
Why are these enums needed? Aren't they a replica of rte_crypto_sym.h?
Are these enum values written into some HW descriptors/registers? If so, then
probably move them to the hw folder.
> +/** Symmetric Cipher Algorithms */
> +enum bcmfs_crypto_cipher_algorithm {
> + /** NULL cipher algorithm. No mode applies to the NULL algorithm. */
> + BCMFS_CRYPTO_CIPHER_NONE = 0,
> +
> + /** Triple DES algorithm in CBC mode */
> + BCMFS_CRYPTO_CIPHER_DES_CBC,
> +
> + /** Triple DES algorithm in ECB mode */
> + BCMFS_CRYPTO_CIPHER_DES_ECB,
> +
> + /** Triple DES algorithm in CBC mode */
> + BCMFS_CRYPTO_CIPHER_3DES_CBC,
> +
> + /** Triple DES algorithm in ECB mode */
> + BCMFS_CRYPTO_CIPHER_3DES_ECB,
> +
> + /** AES algorithm in CBC mode */
> + BCMFS_CRYPTO_CIPHER_AES_CBC,
> +
> + /** AES algorithm in CCM mode. */
> + BCMFS_CRYPTO_CIPHER_AES_CCM,
> +
> + /** AES algorithm in Counter mode */
> + BCMFS_CRYPTO_CIPHER_AES_CTR,
> +
> + /** AES algorithm in ECB mode */
> + BCMFS_CRYPTO_CIPHER_AES_ECB,
> +
> + /** AES algorithm in GCM mode. */
> + BCMFS_CRYPTO_CIPHER_AES_GCM,
> +
> + /** AES algorithm in XTS mode */
> + BCMFS_CRYPTO_CIPHER_AES_XTS,
> +
> + /** AES algorithm in OFB mode */
> + BCMFS_CRYPTO_CIPHER_AES_OFB,
> +};
> +
* Re: [dpdk-dev] [PATCH v2 7/8] crypto/bcmfs: add crypto h/w module
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
@ 2020-09-28 20:00 ` Akhil Goyal
0 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2020-09-28 20:00 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash, Raveendra Padasalagi
Hi Vikas,
> Subject: [PATCH v2 7/8] crypto/bcmfs: add crypto h/w module
>
> Add crypto h/w module to process crypto op. Crypto op is processed via
> sym_engine module before submitting the crypto request to h/w queues.
>
> Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
> Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
> drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++++
> drivers/crypto/bcmfs/bcmfs_sym_defs.h | 16 +
You can probably move the hardware-specific defines and enums into the hw
directory of your driver.
> drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++++++++
> drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 +++
> drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 26 +
> drivers/crypto/bcmfs/bcmfs_sym_req.h | 40 +
> drivers/crypto/bcmfs/meson.build | 4 +-
> 7 files changed, 1498 insertions(+), 1 deletion(-)
> create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
> create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
> create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
>
* Re: [dpdk-dev] [PATCH v2 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-09-28 20:01 ` Akhil Goyal
0 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2020-09-28 20:01 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash, Raveendra Padasalagi
> Add global test suite for bcmfs crypto pmd
>
> Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
> Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
* Re: [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (7 preceding siblings ...)
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-09-28 20:06 ` Akhil Goyal
2020-10-05 15:39 ` Akhil Goyal
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
9 siblings, 1 reply; 75+ messages in thread
From: Akhil Goyal @ 2020-09-28 20:06 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash
>
> Hi,
> This patchset contains support for Crypto offload on Broadcom’s
> Stingray/Stingray2 SoCs having FlexSparc unit.
> BCMFS is an acronym for Broadcom FlexSparc device used in the patchest.
>
> The patchset progressively adds major modules as below.
> a) Detection of platform-device based on the known registered platforms and
> attaching with VFIO.
> b) Creation of Cryptodevice.
> c) Addition of session handling.
> d) Add Cryptodevice into test Cryptodev framework.
>
> The patchset has been tested on the above mentioned SoCs.
>
> Regards,
> Vikas
>
> Changes from v0->v1:
> Updated the ABI version in
> file .../crypto/bcmfs/rte_pmd_bcmfs_version.map
>
> Changes from v1->v2:
> - Fix compilation errors and coding style warnings.
> - Use global test crypto suite suggested by Adam Dybkowski
>
> Vikas Gupta (8):
> crypto/bcmfs: add BCMFS driver
> crypto/bcmfs: add vfio support
> crypto/bcmfs: add apis for queue pair management
> crypto/bcmfs: add hw queue pair operations
> crypto/bcmfs: create a symmetric cryptodev
> crypto/bcmfs: add session handling and capabilities
> crypto/bcmfs: add crypto h/w module
> crypto/bcmfs: add crypto pmd into cryptodev test
>
> MAINTAINERS | 7 +
> app/test/test_cryptodev.c | 17 +
> app/test/test_cryptodev.h | 1 +
> config/common_base | 5 +
> doc/guides/cryptodevs/bcmfs.rst | 72 ++
> doc/guides/cryptodevs/features/bcmfs.ini | 56 +
> doc/guides/cryptodevs/index.rst | 1 +
> drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
> drivers/crypto/bcmfs/bcmfs_device.c | 331 ++++++
> drivers/crypto/bcmfs/bcmfs_device.h | 76 ++
> drivers/crypto/bcmfs/bcmfs_hw_defs.h | 38 +
> drivers/crypto/bcmfs/bcmfs_logs.c | 38 +
> drivers/crypto/bcmfs/bcmfs_logs.h | 34 +
> drivers/crypto/bcmfs/bcmfs_qp.c | 383 +++++++
> drivers/crypto/bcmfs/bcmfs_qp.h | 142 +++
> drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++
> drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++
> drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
> drivers/crypto/bcmfs/bcmfs_sym_defs.h | 186 ++++
> drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++
> drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 ++
> drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 426 ++++++++
> drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +
> drivers/crypto/bcmfs/bcmfs_sym_req.h | 62 ++
> drivers/crypto/bcmfs/bcmfs_sym_session.c | 424 ++++++++
> drivers/crypto/bcmfs/bcmfs_sym_session.h | 99 ++
> drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++
> drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +
> drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 742 +++++++++++++
> drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++
> drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 ++
> drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 46 +
> drivers/crypto/bcmfs/meson.build | 20 +
> .../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
> drivers/crypto/meson.build | 3 +-
> mk/rte.app.mk | 1 +
> 36 files changed, 6355 insertions(+), 1 deletion(-)
Release notes missing.
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver
2020-09-28 18:49 ` Akhil Goyal
@ 2020-09-29 10:52 ` Vikas Gupta
0 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-09-29 10:52 UTC (permalink / raw)
To: Akhil Goyal; +Cc: dev, vikram.prakash, Raveendra Padasalagi
Hi Akhil,
On Tue, Sep 29, 2020 at 12:19 AM Akhil Goyal <akhil.goyal@nxp.com> wrote:
>
> Hi Vikas,
>
> > +BCMFS crypto PMD depend upon the devices present in the path
> > +/sys/bus/platform/devices/fs<version>/<dev_name> on the platform.
> > +Each cryptodev PMD instance can be attached to the nodes present
> > +in the mentioned path.
>
> It would be good, if you can mention the details about the SDKs which need
> To be installed, any kernel dependencies if any.
> The device path mentioned is from which rootfs? This looks incomplete documentation.
OK, sure. I'll add the missing items in the next patch set.
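For reference, the kind of setup step the documentation could capture is the
VFIO binding sequence via sysfs. A sketch follows; the node name
`67000000.crypto_mbox` is only an example and depends on the platform's
device-tree enumeration, so treat it as an assumption:

```shell
# Example only: the node name depends on the SoC's device-tree enumeration.
DEV_NAME=67000000.crypto_mbox
DRV_NAME=vfio-platform

OVERRIDE_PATH=/sys/bus/platform/devices/${DEV_NAME}/driver_override
PROBE_PATH=/sys/bus/platform/drivers_probe

# On a live target these two writes hand the node over to vfio-platform:
#   echo "${DRV_NAME}" > "${OVERRIDE_PATH}"
#   echo "${DEV_NAME}" > "${PROBE_PATH}"
echo "bind ${DEV_NAME} via ${OVERRIDE_PATH}"
```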
>
> > diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
> > index a67ed5a28..5d7e028bd 100644
> > --- a/doc/guides/cryptodevs/index.rst
> > +++ b/doc/guides/cryptodevs/index.rst
> > @@ -29,3 +29,4 @@ Crypto Device Drivers
> > qat
> > virtio
> > zuc
> > + bcmfs
>
> It is better to maintain an alphabetical order.
Sure.
>
> > diff --git a/drivers/crypto/bcmfs/bcmfs_device.c
> > b/drivers/crypto/bcmfs/bcmfs_device.c
> > new file mode 100644
> > index 000000000..47c776de6
> > --- /dev/null
> > +++ b/drivers/crypto/bcmfs/bcmfs_device.c
> > @@ -0,0 +1,256 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2020 Broadcom.
> > + * All rights reserved.
> > + */
> > +
> > +#include <dirent.h>
> > +#include <stdbool.h>
> > +#include <sys/queue.h>
> > +
> > +#include <rte_string_fns.h>
> > +
> > +#include "bcmfs_device.h"
> > +#include "bcmfs_logs.h"
> > +
> > +struct bcmfs_device_attr {
> > + const char name[BCMFS_MAX_PATH_LEN];
> > + const char suffix[BCMFS_DEV_NAME_LEN];
> > + const enum bcmfs_device_type type;
> > + const uint32_t offset;
> > + const uint32_t version;
> > +};
> > +
> > +/* BCMFS supported devices */
> > +static struct bcmfs_device_attr dev_table[] = {
> > + {
> > + .name = "fs4",
> > + .suffix = "crypto_mbox",
> > + .type = BCMFS_SYM_FS4,
> > + .offset = 0,
> > + .version = 0x76303031
> > + },
> > + {
> > + .name = "fs5",
> > + .suffix = "mbox",
> > + .type = BCMFS_SYM_FS5,
> > + .offset = 0,
> > + .version = 0x76303032
> > + },
> > + {
> > + /* sentinel */
> > + }
> > +};
> > +
> > +TAILQ_HEAD(fsdev_list, bcmfs_device);
> > +static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
> > +
> > +static struct bcmfs_device *
> > +fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
> > + char *dirpath,
> > + char *devname,
> > + enum bcmfs_device_type dev_type __rte_unused)
> > +{
> > + struct bcmfs_device *fsdev;
> > +
> > + fsdev = calloc(1, sizeof(*fsdev));
>
> Can we use rte_calloc
Will fix it in the next patch set.
>
> > + if (!fsdev)
> > + return NULL;
> > +
> > + if (strlen(dirpath) > sizeof(fsdev->dirname)) {
> > + BCMFS_LOG(ERR, "dir path name is too long");
> > + goto cleanup;
> > + }
> > +
> > + if (strlen(devname) > sizeof(fsdev->name)) {
> > + BCMFS_LOG(ERR, "devname is too long");
> > + goto cleanup;
> > + }
> > +
> > + strcpy(fsdev->dirname, dirpath);
> > + strcpy(fsdev->name, devname);
> > +
> > + fsdev->vdev = vdev;
> > +
> > + TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
> > +
> > + return fsdev;
> > +
> > +cleanup:
> > + free(fsdev);
> > +
> > + return NULL;
> > +}
> > +
>
> <snip>
>
> > diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
> > index a2423507a..8e06d0533 100644
> > --- a/drivers/crypto/meson.build
> > +++ b/drivers/crypto/meson.build
> > @@ -23,7 +23,8 @@ drivers = ['aesni_gcm',
> > 'scheduler',
> > 'snow3g',
> > 'virtio',
> > - 'zuc']
> > + 'zuc',
> > + 'bcmfs']
>
> Please maintain an alphabetical order.
Sure.
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support
2020-09-28 19:00 ` Akhil Goyal
@ 2020-09-29 11:01 ` Vikas Gupta
2020-09-29 12:39 ` Akhil Goyal
0 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-09-29 11:01 UTC (permalink / raw)
To: Akhil Goyal; +Cc: dev, vikram.prakash, Raveendra Padasalagi
Hi Akhil,
On Tue, Sep 29, 2020 at 12:30 AM Akhil Goyal <akhil.goyal@nxp.com> wrote:
>
> Hi Vikas,
>
> > Subject: [PATCH v2 2/8] crypto/bcmfs: add vfio support
> >
> > Add vfio support for device.
> >
> > Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
> > Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
> > Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> > ---
> > drivers/crypto/bcmfs/bcmfs_device.c | 5 ++
> > drivers/crypto/bcmfs/bcmfs_device.h | 6 ++
> > drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++++++++++++++++++++++++++++
> > drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +++++
> > drivers/crypto/bcmfs/meson.build | 3 +-
> > 5 files changed, 137 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
> > create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
> >
> > diff --git a/drivers/crypto/bcmfs/bcmfs_device.c
> > b/drivers/crypto/bcmfs/bcmfs_device.c
> > index 47c776de6..3b5cc9e98 100644
> > --- a/drivers/crypto/bcmfs/bcmfs_device.c
> > +++ b/drivers/crypto/bcmfs/bcmfs_device.c
> > @@ -11,6 +11,7 @@
> >
> > #include "bcmfs_device.h"
> > #include "bcmfs_logs.h"
> > +#include "bcmfs_vfio.h"
> >
> > struct bcmfs_device_attr {
> > const char name[BCMFS_MAX_PATH_LEN];
> > @@ -71,6 +72,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
> >
> > fsdev->vdev = vdev;
> >
> > + /* attach to VFIO */
> > + if (bcmfs_attach_vfio(fsdev))
> > + goto cleanup;
> > +
> > TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
> >
> > return fsdev;
> > diff --git a/drivers/crypto/bcmfs/bcmfs_device.h
> > b/drivers/crypto/bcmfs/bcmfs_device.h
> > index cc64a8df2..c41cc0031 100644
> > --- a/drivers/crypto/bcmfs/bcmfs_device.h
> > +++ b/drivers/crypto/bcmfs/bcmfs_device.h
> > @@ -35,6 +35,12 @@ struct bcmfs_device {
> > char name[BCMFS_DEV_NAME_LEN];
> > /* Parent vdev */
> > struct rte_vdev_device *vdev;
> > + /* vfio handle */
> > + int vfio_dev_fd;
> > + /* mapped address */
> > + uint8_t *mmap_addr;
> > + /* mapped size */
> > + uint32_t mmap_size;
> > };
> >
> > #endif /* _BCMFS_DEV_H_ */
> > diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c
> > b/drivers/crypto/bcmfs/bcmfs_vfio.c
> > new file mode 100644
> > index 000000000..dc2def580
> > --- /dev/null
> > +++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
> > @@ -0,0 +1,107 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2020 Broadcom.
> > + * All rights reserved.
> > + */
> > +
> > +#include <errno.h>
> > +#include <sys/mman.h>
> > +#include <sys/ioctl.h>
> > +
> > +#include <rte_vfio.h>
> > +
> > +#include "bcmfs_device.h"
> > +#include "bcmfs_logs.h"
> > +#include "bcmfs_vfio.h"
> > +
> > +#ifdef VFIO_PRESENT
>
> I cannot see VFIO_PRESENT flag defined in this patch.
> Hence the below code is a dead code and the patch
> Title is not justified as it says adding support for VFIO.
I believe the VFIO_PRESENT flag depends on whether the platform supports
VFIO; it is determined in rte_vfio.h.
The driver will not work without VFIO support and returns silently
(via the functions in the #else part).
Do you mean I need to change the title?
>
> > +static int
> > +vfio_map_dev_obj(const char *path, const char *dev_obj,
> > + uint32_t *size, void **addr, int *dev_fd)
>
> Regards,
> Akhil
Thanks,
Vikas
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management
2020-09-28 19:29 ` Akhil Goyal
@ 2020-09-29 11:04 ` Vikas Gupta
0 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-09-29 11:04 UTC (permalink / raw)
To: Akhil Goyal; +Cc: dev, vikram.prakash, Raveendra Padasalagi
Hi Akhil,
On Tue, Sep 29, 2020 at 12:59 AM Akhil Goyal <akhil.goyal@nxp.com> wrote:
>
> > Subject: [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management
> >
> > diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h
> > b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
> > new file mode 100644
> > index 000000000..ecb0c09ba
> > --- /dev/null
> > +++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
> > @@ -0,0 +1,38 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2020 Broadcom
> > + * All rights reserved.
> > + */
> > +
> > +#ifndef _BCMFS_RM_DEFS_H_
> > +#define _BCMFS_RM_DEFS_H_
>
> The file name is bcmfs_hw_defs.h
> Check for other headers also.
>
Will fix it in the next patch set. Thank you for catching this.
Thanks,
Vikas
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH v2 6/8] crypto/bcmfs: add session handling and capabilities
2020-09-28 19:46 ` Akhil Goyal
@ 2020-09-29 11:12 ` Vikas Gupta
0 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-09-29 11:12 UTC (permalink / raw)
To: Akhil Goyal; +Cc: dev, vikram.prakash, Raveendra Padasalagi
Hi Akhil,
On Tue, Sep 29, 2020 at 1:16 AM Akhil Goyal <akhil.goyal@nxp.com> wrote:
>
> Hi Vikas,
>
> > diff --git a/doc/guides/cryptodevs/features/bcmfs.ini
> > b/doc/guides/cryptodevs/features/bcmfs.ini
> > new file mode 100644
> > index 000000000..82d2c639d
> > --- /dev/null
> > +++ b/doc/guides/cryptodevs/features/bcmfs.ini
> > @@ -0,0 +1,56 @@
> > +;
> > +; Supported features of the 'bcmfs' crypto driver.
> > +;
> > +; Refer to default.ini for the full list of available PMD features.
> > +;
> > +[Features]
> > +Symmetric crypto = Y
> > +Sym operation chaining = Y
> > +HW Accelerated = Y
> > +Protocol offload = Y
> > +In Place SGL = Y
> > +
> > +;
> > +; Supported crypto algorithms of the 'bcmfs' crypto driver.
> > +;
> > +[Cipher]
> > +AES CBC (128) = Y
> > +AES CBC (192) = Y
> > +AES CBC (256) = Y
> > +AES CTR (128) = Y
> > +AES CTR (192) = Y
> > +AES CTR (256) = Y
> > +AES XTS (128) = Y
> > +AES XTS (256) = Y
> > +3DES CBC = Y
> > +DES CBC = Y
> > +;
> > +; Supported authentication algorithms of the 'bcmfs' crypto driver.
> > +;
> > +[Auth]
> > +MD5 HMAC = Y
> > +SHA1 = Y
> > +SHA1 HMAC = Y
> > +SHA224 = Y
> > +SHA224 HMAC = Y
> > +SHA256 = Y
> > +SHA256 HMAC = Y
> > +SHA384 = Y
> > +SHA384 HMAC = Y
> > +SHA512 = Y
> > +SHA512 HMAC = Y
> > +AES GMAC = Y
> > +AES CMAC (128) = Y
> > +AES CBC = Y
>
> AES CBC is not an auth algo
> You should use AES CBC MAC
> Please use the same notation as there in default.ini
> Check for all the names.
Will fix it.
>
> > +AES XCBC = Y
> > +
> > +;
> > +; Supported AEAD algorithms of the 'bcmfs' crypto driver.
> > +;
> > +[AEAD]
> > +AES GCM (128) = Y
> > +AES GCM (192) = Y
> > +AES GCM (256) = Y
> > +AES CCM (128) = Y
> > +AES CCM (192) = Y
> > +AES CCM (256) = Y
>
> // snip//
>
> > + {
> > + /* SHA1 HMAC */
> > + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> > + {.sym = {
> > + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> > + {.auth = {
> > + .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
> > + .block_size = 64,
> > + .key_size = {
> > + .min = 1,
> > + .max = 64,
> > + .increment = 0
>
> Increment should be 1 for all HMAC cases.
I'll go through the whole list again. Thanks for catching this.
>
> > + },
> > + .digest_size = {
> > + .min = 20,
> > + .max = 20,
> > + .increment = 0
> > + },
> > + .aad_size = { 0 }
> > + }, }
> > + }, }
> > + },
>
> //snipp//
>
> > + {
> > + /* AES CMAC */
> > + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> > + {.sym = {
> > + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> > + {.auth = {
> > + .algo = RTE_CRYPTO_AUTH_AES_CMAC,
> > + .block_size = 16,
> > + .key_size = {
> > + .min = 1,
> > + .max = 16,
> > + .increment = 0
>
> Do you only support key sizes of 1 and 16? I see increment =0 in many cases.
Will review the list and fix it accordingly.
>
> > + },
> > + .digest_size = {
> > + .min = 16,
> > + .max = 16,
> > + .increment = 0
> > + },
> > + .aad_size = { 0 }
> > + }, }
> > + }, }
> > + },
> > + {
>
> //snip//
>
>
> > +
> > +const struct rte_cryptodev_capabilities *
> > +bcmfs_sym_get_capabilities(void)
> > +{
> > + return bcmfs_sym_capabilities;
> > +}
> > diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
> > b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
> > new file mode 100644
> > index 000000000..3ff61b7d2
> > --- /dev/null
> > +++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
> > @@ -0,0 +1,16 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2020 Broadcom
> > + * All rights reserved.
> > + */
> > +
> > +#ifndef _BCMFS_SYM_CAPABILITIES_H_
> > +#define _BCMFS_SYM_CAPABILITIES_H_
> > +
> > +/*
> > + * Get capabilities list for the device
> > + *
> > + */
> > +const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void);
> > +
> > +#endif /* _BCMFS_SYM_CAPABILITIES_H__ */
> > +
> > diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h
> > b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
> > new file mode 100644
> > index 000000000..d94446d35
> > --- /dev/null
> > +++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
> > @@ -0,0 +1,170 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2020 Broadcom
> > + * All rights reserved.
> > + */
> > +
> > +#ifndef _BCMFS_SYM_DEFS_H_
> > +#define _BCMFS_SYM_DEFS_H_
> > +
> > +/*
> > + * Max block size of hash algorithm
> > + * currently SHA3 supports max block size
> > + * of 144 bytes
> > + */
> > +#define BCMFS_MAX_KEY_SIZE 144
> > +#define BCMFS_MAX_IV_SIZE 16
> > +#define BCMFS_MAX_DIGEST_SIZE 64
> > +
> > +/** Symmetric Cipher Direction */
> > +enum bcmfs_crypto_cipher_op {
> > + /** Encrypt cipher operation */
> > + BCMFS_CRYPTO_CIPHER_OP_ENCRYPT,
> > +
> > + /** Decrypt cipher operation */
> > + BCMFS_CRYPTO_CIPHER_OP_DECRYPT,
> > +};
> > +
>
> Why are these enums needed, Aren't these replica of rte_sym_crypto.h
>
> Are these enum values getting filled in some HW desc/registers. If so, then
> Probably move it to the hw folder.
We`ll review this and place/modify macros accordingly.
>
> > +/** Symmetric Cipher Algorithms */
> > +enum bcmfs_crypto_cipher_algorithm {
> > + /** NULL cipher algorithm. No mode applies to the NULL algorithm. */
> > + BCMFS_CRYPTO_CIPHER_NONE = 0,
> > +
> > + /** Triple DES algorithm in CBC mode */
> > + BCMFS_CRYPTO_CIPHER_DES_CBC,
> > +
> > + /** Triple DES algorithm in ECB mode */
> > + BCMFS_CRYPTO_CIPHER_DES_ECB,
> > +
> > + /** Triple DES algorithm in CBC mode */
> > + BCMFS_CRYPTO_CIPHER_3DES_CBC,
> > +
> > + /** Triple DES algorithm in ECB mode */
> > + BCMFS_CRYPTO_CIPHER_3DES_ECB,
> > +
> > + /** AES algorithm in CBC mode */
> > + BCMFS_CRYPTO_CIPHER_AES_CBC,
> > +
> > + /** AES algorithm in CCM mode. */
> > + BCMFS_CRYPTO_CIPHER_AES_CCM,
> > +
> > + /** AES algorithm in Counter mode */
> > + BCMFS_CRYPTO_CIPHER_AES_CTR,
> > +
> > + /** AES algorithm in ECB mode */
> > + BCMFS_CRYPTO_CIPHER_AES_ECB,
> > +
> > + /** AES algorithm in GCM mode. */
> > + BCMFS_CRYPTO_CIPHER_AES_GCM,
> > +
> > + /** AES algorithm in XTS mode */
> > + BCMFS_CRYPTO_CIPHER_AES_XTS,
> > +
> > + /** AES algorithm in OFB mode */
> > + BCMFS_CRYPTO_CIPHER_AES_OFB,
> > +};
> > +
Thanks,
Vikas
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support
2020-09-29 11:01 ` Vikas Gupta
@ 2020-09-29 12:39 ` Akhil Goyal
0 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2020-09-29 12:39 UTC (permalink / raw)
To: Vikas Gupta; +Cc: dev, vikram.prakash, Raveendra Padasalagi
Hi Vikas,
>
> Hi Akhil,
>
> > > +
> > > +#ifdef VFIO_PRESENT
> >
> > I cannot see VFIO_PRESENT flag defined in this patch.
> > Hence the below code is a dead code and the patch
> > Title is not justified as it says adding support for VFIO.
> I believe VFIO_PRESENT flag is dependent on the platform who supports
> VFIO and determined in rte_vfio.h.
> The driver will not work without VFIO support and returns silently
> (functions in #else part).
> Do you mean I need to change the title?
Title is OK. You can explain in the patch description and in bcmfs.rst how
it gets enabled and which config flags need to be enabled.
Regards,
Akhil
^ permalink raw reply [flat|nested] 75+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-09-28 20:06 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
@ 2020-10-05 15:39 ` Akhil Goyal
2020-10-05 16:46 ` Ajit Khaparde
0 siblings, 1 reply; 75+ messages in thread
From: Akhil Goyal @ 2020-10-05 15:39 UTC (permalink / raw)
To: Akhil Goyal, Vikas Gupta, dev; +Cc: vikram.prakash
Hi Vikas
>
> >
> > Hi,
> > This patchset contains support for Crypto offload on Broadcom’s
> > Stingray/Stingray2 SoCs having FlexSparc unit.
> > BCMFS is an acronym for the Broadcom FlexSparc device used in the patchset.
> >
> > The patchset progressively adds major modules as below.
> > a) Detection of platform-device based on the known registered platforms and
> > attaching with VFIO.
> > b) Creation of Cryptodevice.
> > c) Addition of session handling.
> > d) Add Cryptodevice into test Cryptodev framework.
> >
> > The patchset has been tested on the above mentioned SoCs.
> >
> Release notes missing.
When do you plan to submit the next version? I plan to merge it in the RC1 timeline.
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (8 preceding siblings ...)
2020-09-28 20:06 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
` (8 more replies)
9 siblings, 9 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta
Hi,
This patchset contains support for Crypto offload on Broadcom’s
Stingray/Stingray2 SoCs having FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used in the patchset.
The patchset progressively adds major modules as below.
a) Detection of platform-device based on the known registered platforms and attaching with VFIO.
b) Creation of Cryptodevice.
c) Addition of session handling.
d) Add Cryptodevice into test Cryptodev framework.
The patchset has been tested on the above-mentioned SoCs.
Regards,
Vikas
Changes from v0->v1:
Updated the ABI version in file .../crypto/bcmfs/rte_pmd_bcmfs_version.map
Changes from v1->v2:
- Fix compilation errors and coding style warnings.
- Use global test crypto suite suggested by Adam Dybkowski
Changes from v2->v3:
- Release notes updated.
- bcmfs.rst updated with missing information about installation.
- Review comments from patch1 from v2 addressed.
- Updated description about dependency of PMD driver on VFIO_PRESENT.
- Fixed typo in bcmfs_hw_defs.h (comments on patch3 from v2 addressed)
- Comments on patch6 from v2 addressed and capability list is fixed.
Removed redundant enums and macros from the file
bcmfs_sym_defs.h and updated other impacted APIs accordingly.
patch7 too is updated due to removal of redundancy.
Thanks! to Akhil for pointing out the redundancy.
- Fix minor code style issues in few files as part of review.
Vikas Gupta (8):
crypto/bcmfs: add BCMFS driver
crypto/bcmfs: add vfio support
crypto/bcmfs: add apis for queue pair management
crypto/bcmfs: add hw queue pair operations
crypto/bcmfs: create a symmetric cryptodev
crypto/bcmfs: add session handling and capabilities
crypto/bcmfs: add crypto h/w module
crypto/bcmfs: add crypto pmd into cryptodev test
MAINTAINERS | 7 +
app/test/test_cryptodev.c | 17 +
app/test/test_cryptodev.h | 1 +
doc/guides/cryptodevs/bcmfs.rst | 109 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 +
doc/guides/cryptodevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 332 +++++
drivers/crypto/bcmfs/bcmfs_device.h | 76 ++
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 32 +
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +
drivers/crypto/bcmfs/bcmfs_qp.c | 383 ++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 142 ++
drivers/crypto/bcmfs/bcmfs_sym.c | 289 +++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 +++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 34 +
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 1155 +++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 115 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 426 ++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 62 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 282 ++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 109 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 743 +++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 ++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 51 +
drivers/crypto/bcmfs/meson.build | 20 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 1 +
35 files changed, 6253 insertions(+)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 1/8] crypto/bcmfs: add BCMFS driver
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 2/8] crypto/bcmfs: add vfio support Vikas Gupta
` (7 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add the Broadcom FlexSparc (FS) device creation driver, which registers as a
vdev and creates a device. Add APIs for logs, supporting documentation and a
maintainers file.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
MAINTAINERS | 7 +
doc/guides/cryptodevs/bcmfs.rst | 51 ++++
doc/guides/cryptodevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/bcmfs/bcmfs_device.c | 257 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_device.h | 40 +++
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +++
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +++
drivers/crypto/bcmfs/meson.build | 10 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 1 +
11 files changed, 447 insertions(+)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index c0abbe0fc8..ab849ac1d1 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1081,6 +1081,13 @@ F: drivers/crypto/zuc/
F: doc/guides/cryptodevs/zuc.rst
F: doc/guides/cryptodevs/features/zuc.ini
+Broadcom FlexSparc
+M: Vikas Gupta <vikas.gupta@broadcom.com>
+M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
+M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+F: drivers/crypto/bcmfs/
+F: doc/guides/cryptodevs/bcmfs.rst
+F: doc/guides/cryptodevs/features/bcmfs.ini
Compression Drivers
-------------------
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
new file mode 100644
index 0000000000..dc21bf60cc
--- /dev/null
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -0,0 +1,51 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2020 Broadcom
+
+Broadcom FlexSparc Crypto Poll Mode Driver
+==========================================
+
+The FlexSparc crypto poll mode driver (BCMFS PMD) provides support for offloading
+cryptographic operations to Broadcom SoCs that have a FlexSparc4/FlexSparc5 unit.
+Detailed information about SoCs can be found at `Broadcom Official Website
+<https://www.broadcom.com/products/ethernet-connectivity/network-adapters/smartnic>`__.
+
+Supported Broadcom SoCs
+-----------------------
+
+* Stingray
+* Stingray2
+
+Installation
+------------
+Information about kernel, rootfs and toolchain can be found at
+`Broadcom Official Website <https://www.broadcom.com/products/ethernet-connectivity
+/network-adapters/smartnic/stingray-software>`__.
+
+ .. Note::
+ To execute the BCMFS PMD, it must be compiled with the VFIO_PRESENT flag,
+ which gets enabled in rte_vfio.h on platforms that support VFIO.
+
+The BCMFS crypto PMD may be compiled natively on a Stingray/Stingray2 platform
+or cross-compiled on an x86 platform. For example, the below commands can be
+executed for cross-compiling on an x86 platform.
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ meson <dest-dir> --cross-file config/arm/arm64_stingray_linux_gcc
+ cd <dest-dir>
+ ninja
+
+Initialization
+--------------
+The supported platform devices should be present in the
+*/sys/bus/platform/devices/fs<version>/<dev_name>* path on the booted kernel.
+For the BCMFS PMD to execute, the device node must be owned by the VFIO
+platform module only. For example, the below commands can be run to hand a
+device node over to VFIO.
+
+.. code-block:: console
+
+ SETUP_SYSFS_DEV_NAME=67000000.crypto_mbox
+ io_device_name="vfio-platform"
+ echo $io_device_name > /sys/bus/platform/devices/${SETUP_SYSFS_DEV_NAME}/driver_override
+ echo ${SETUP_SYSFS_DEV_NAME} > /sys/bus/platform/drivers_probe
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a67ed5a282..279f56a002 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -13,6 +13,7 @@ Crypto Device Drivers
aesni_mb
aesni_gcm
armv8
+ bcmfs
caam_jr
ccp
dpaa2_sec
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 73ac08fb0e..8643330321 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -185,3 +185,8 @@ Tested Platforms
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=======================================================
+
+* **Added Broadcom BCMFS symmetric crypto PMD.**
+
+ Added a symmetric crypto PMD for Broadcom FlexSparc crypto units.
+ See :doc:`../cryptodevs/bcmfs` guide for more details on this new PMD.
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
new file mode 100644
index 0000000000..c9865ee6a5
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <dirent.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+
+struct bcmfs_device_attr {
+ const char name[BCMFS_MAX_PATH_LEN];
+ const char suffix[BCMFS_DEV_NAME_LEN];
+ const enum bcmfs_device_type type;
+ const uint32_t offset;
+ const uint32_t version;
+};
+
+/* BCMFS supported devices */
+static struct bcmfs_device_attr dev_table[] = {
+ {
+ .name = "fs4",
+ .suffix = "crypto_mbox",
+ .type = BCMFS_SYM_FS4,
+ .offset = 0,
+ .version = 0x76303031
+ },
+ {
+ .name = "fs5",
+ .suffix = "mbox",
+ .type = BCMFS_SYM_FS5,
+ .offset = 0,
+ .version = 0x76303032
+ },
+ {
+ /* sentinel */
+ }
+};
+
+TAILQ_HEAD(fsdev_list, bcmfs_device);
+static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
+
+static struct bcmfs_device *
+fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
+ char *dirpath,
+ char *devname,
+ enum bcmfs_device_type dev_type __rte_unused)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = rte_calloc(__func__, 1, sizeof(*fsdev), 0);
+ if (!fsdev)
+ return NULL;
+
+ if (strlen(dirpath) > sizeof(fsdev->dirname)) {
+ BCMFS_LOG(ERR, "dir path name is too long");
+ goto cleanup;
+ }
+
+ if (strlen(devname) > sizeof(fsdev->name)) {
+ BCMFS_LOG(ERR, "devname is too long");
+ goto cleanup;
+ }
+
+ strcpy(fsdev->dirname, dirpath);
+ strcpy(fsdev->name, devname);
+
+ fsdev->vdev = vdev;
+
+ TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
+
+ return fsdev;
+
+cleanup:
+ free(fsdev);
+
+ return NULL;
+}
+
+static struct bcmfs_device *
+find_fsdev(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ TAILQ_FOREACH(fsdev, &fsdev_list, next)
+ if (fsdev->vdev == vdev)
+ return fsdev;
+
+ return NULL;
+}
+
+static void
+fsdev_release(struct bcmfs_device *fsdev)
+{
+ if (fsdev == NULL)
+ return;
+
+ TAILQ_REMOVE(&fsdev_list, fsdev, next);
+ free(fsdev);
+}
+
+static int
+cmprator(const void *a, const void *b)
+{
+ return (*(const unsigned int *)a - *(const unsigned int *)b);
+}
+
+static int
+fsdev_find_all_devs(const char *path, const char *search,
+ uint32_t *devs)
+{
+ DIR *dir;
+ struct dirent *entry;
+ int count = 0;
+ char addr[BCMFS_MAX_NODES][BCMFS_MAX_PATH_LEN];
+ int i;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return 0;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (strstr(entry->d_name, search)) {
+ strlcpy(addr[count], entry->d_name,
+ BCMFS_MAX_PATH_LEN);
+ count++;
+ }
+ }
+
+ closedir(dir);
+
+ for (i = 0 ; i < count; i++)
+ devs[i] = (uint32_t)strtoul(addr[i], NULL, 16);
+ /* sort the devices based on IO addresses */
+ qsort(devs, count, sizeof(uint32_t), cmprator);
+
+ return count;
+}
+
+static bool
+fsdev_find_sub_dir(char *path, const char *search, char *output)
+{
+ DIR *dir;
+ struct dirent *entry;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return -ENODEV;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (!strcmp(entry->d_name, search)) {
+ strlcpy(output, entry->d_name, BCMFS_MAX_PATH_LEN);
+ closedir(dir);
+ return true;
+ }
+ }
+
+ closedir(dir);
+
+ return false;
+}
+
+static int
+bcmfs_vdev_probe(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev = NULL;
+ char top_dirpath[BCMFS_MAX_PATH_LEN];
+ char sub_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirname[BCMFS_MAX_PATH_LEN];
+ uint32_t fsdev_dev[BCMFS_MAX_NODES];
+ enum bcmfs_device_type dtype;
+ int i = 0;
+ int dev_idx;
+ int count = 0;
+ bool found = false;
+
+ snprintf(top_dirpath, sizeof(top_dirpath), "%s",
+ SYSFS_BCM_PLTFORM_DEVICES);
+ while (strlen(dev_table[i].name)) {
+ found = fsdev_find_sub_dir(top_dirpath,
+ dev_table[i].name,
+ sub_dirpath);
+ if (found)
+ break;
+ i++;
+ }
+ if (!found) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ dev_idx = i;
+ dtype = dev_table[i].type;
+
+ snprintf(out_dirpath, sizeof(out_dirpath), "%s/%s",
+ top_dirpath, sub_dirpath);
+ count = fsdev_find_all_devs(out_dirpath,
+ dev_table[dev_idx].suffix,
+ fsdev_dev);
+ if (!count) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ i = 0;
+ while (count) {
+ /* format the device name present in the path */
+ snprintf(out_dirname, sizeof(out_dirname), "%x.%s",
+ fsdev_dev[i], dev_table[dev_idx].suffix);
+ fsdev = fsdev_allocate_one_dev(vdev, out_dirpath,
+ out_dirname, dtype);
+ if (!fsdev) {
+ count--;
+ i++;
+ continue;
+ }
+ break;
+ }
+ if (fsdev == NULL) {
+ BCMFS_LOG(ERR, "All supported devs busy");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int
+bcmfs_vdev_remove(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = find_fsdev(vdev);
+ if (fsdev == NULL)
+ return -ENODEV;
+
+ fsdev_release(fsdev);
+ return 0;
+}
+
+/* Register with vdev */
+static struct rte_vdev_driver rte_bcmfs_pmd = {
+ .probe = bcmfs_vdev_probe,
+ .remove = bcmfs_vdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(bcmfs_pmd,
+ rte_bcmfs_pmd);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
new file mode 100644
index 0000000000..cc64a8df2c
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_H_
+#define _BCMFS_DEV_H_
+
+#include <sys/queue.h>
+
+#include <rte_bus_vdev.h>
+
+#include "bcmfs_logs.h"
+
+/* max number of dev nodes */
+#define BCMFS_MAX_NODES 4
+#define BCMFS_MAX_PATH_LEN 512
+#define BCMFS_DEV_NAME_LEN 64
+
+/* Path for BCM-Platform device directory */
+#define SYSFS_BCM_PLTFORM_DEVICES "/sys/bus/platform/devices"
+
+/* Supported devices */
+enum bcmfs_device_type {
+ BCMFS_SYM_FS4,
+ BCMFS_SYM_FS5,
+ BCMFS_UNKNOWN
+};
+
+struct bcmfs_device {
+ TAILQ_ENTRY(bcmfs_device) next;
+ /* Directory path for vfio */
+ char dirname[BCMFS_MAX_PATH_LEN];
+ /* BCMFS device name */
+ char name[BCMFS_DEV_NAME_LEN];
+ /* Parent vdev */
+ struct rte_vdev_device *vdev;
+};
+
+#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.c b/drivers/crypto/bcmfs/bcmfs_logs.c
new file mode 100644
index 0000000000..86f4ff3b53
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_hexdump.h>
+
+#include "bcmfs_logs.h"
+
+int bcmfs_conf_logtype;
+int bcmfs_dp_logtype;
+
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *title,
+ const void *buf, unsigned int len)
+{
+ if (level > rte_log_get_global_level())
+ return 0;
+ if (level > (uint32_t)(rte_log_get_level(logtype)))
+ return 0;
+
+ rte_hexdump(rte_log_get_stream(), title, buf, len);
+ return 0;
+}
+
+RTE_INIT(bcmfs_device_init_log)
+{
+ /* Configuration and general logs */
+ bcmfs_conf_logtype = rte_log_register("pmd.bcmfs_config");
+ if (bcmfs_conf_logtype >= 0)
+ rte_log_set_level(bcmfs_conf_logtype, RTE_LOG_NOTICE);
+
+ /* data-path logs */
+ bcmfs_dp_logtype = rte_log_register("pmd.bcmfs_fp");
+ if (bcmfs_dp_logtype >= 0)
+ rte_log_set_level(bcmfs_dp_logtype, RTE_LOG_NOTICE);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.h b/drivers/crypto/bcmfs/bcmfs_logs.h
new file mode 100644
index 0000000000..c03a49b75c
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_LOGS_H_
+#define _BCMFS_LOGS_H_
+
+#include <rte_log.h>
+
+extern int bcmfs_conf_logtype;
+extern int bcmfs_dp_logtype;
+
+#define BCMFS_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_conf_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_dp_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_HEXDUMP_LOG(level, title, buf, len) \
+ bcmfs_hexdump_log(RTE_LOG_ ## level, bcmfs_dp_logtype, title, buf, len)
+
+/**
+ * bcmfs_hexdump_log() - Dump out memory in a special hex dump format.
+ *
+ * The message will be sent to the stream used by the rte_log infrastructure.
+ */
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *heading,
+ const void *buf, unsigned int len);
+
+#endif /* _BCMFS_LOGS_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
new file mode 100644
index 0000000000..a4bdd8ee5d
--- /dev/null
+++ b/drivers/crypto/bcmfs/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Broadcom
+# All rights reserved.
+#
+
+deps += ['eal', 'bus_vdev']
+sources = files(
+ 'bcmfs_logs.c',
+ 'bcmfs_device.c'
+ )
diff --git a/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
new file mode 100644
index 0000000000..299ae632da
--- /dev/null
+++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
@@ -0,0 +1,3 @@
+DPDK_21.0 {
+ local: *;
+};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index a2423507ad..93c2968acb 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -8,6 +8,7 @@ endif
drivers = ['aesni_gcm',
'aesni_mb',
'armv8',
+ 'bcmfs',
'caam_jr',
'ccp',
'dpaa_sec',
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 2/8] crypto/bcmfs: add vfio support
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
` (6 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add VFIO support for the BCMFS PMD.
The BCMFS PMD functionality depends on the VFIO_PRESENT flag,
which is enabled in rte_vfio.h. If this flag is not enabled on
the build platform, the driver will silently return an error
when executed.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 5 ++
drivers/crypto/bcmfs/bcmfs_device.h | 6 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +++++
drivers/crypto/bcmfs/meson.build | 3 +-
5 files changed, 137 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index c9865ee6a5..1ea6c3b13e 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -12,6 +12,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -72,6 +73,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
fsdev->vdev = vdev;
+ /* attach to VFIO */
+ if (bcmfs_attach_vfio(fsdev))
+ goto cleanup;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index cc64a8df2c..c41cc00318 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -35,6 +35,12 @@ struct bcmfs_device {
char name[BCMFS_DEV_NAME_LEN];
/* Parent vdev */
struct rte_vdev_device *vdev;
+ /* vfio handle */
+ int vfio_dev_fd;
+ /* mapped address */
+ uint8_t *mmap_addr;
+ /* mapped size */
+ uint32_t mmap_size;
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
new file mode 100644
index 0000000000..dc2def580f
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <errno.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+
+#include <rte_vfio.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
+
+#ifdef VFIO_PRESENT
+static int
+vfio_map_dev_obj(const char *path, const char *dev_obj,
+ uint32_t *size, void **addr, int *dev_fd)
+{
+ int32_t ret;
+ struct vfio_group_status status = { .argsz = sizeof(status) };
+
+ struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
+ struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
+
+ ret = rte_vfio_setup_device(path, dev_obj, dev_fd, &d_info);
+ if (ret) {
+ BCMFS_LOG(ERR, "VFIO Setting for device failed");
+ return ret;
+ }
+
+ /* get the device region info */
+ ret = ioctl(*dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "Error in VFIO getting REGION_INFO");
+ goto map_failed;
+ }
+
+ *addr = mmap(NULL, reg_info.size,
+ PROT_WRITE | PROT_READ, MAP_SHARED,
+ *dev_fd, reg_info.offset);
+ if (*addr == MAP_FAILED) {
+ BCMFS_LOG(ERR, "Error mapping region (errno = %d)", errno);
+ ret = -errno;
+ goto map_failed;
+ }
+ *size = reg_info.size;
+
+ return 0;
+
+map_failed:
+ rte_vfio_release_device(path, dev_obj, *dev_fd);
+
+ return ret;
+}
+
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+ int vfio_dev_fd;
+ void *v_addr = NULL;
+ uint32_t size = 0;
+
+ ret = vfio_map_dev_obj(dev->dirname, dev->name,
+ &size, &v_addr, &vfio_dev_fd);
+ if (ret)
+ return -1;
+
+ dev->mmap_size = size;
+ dev->mmap_addr = v_addr;
+ dev->vfio_dev_fd = vfio_dev_fd;
+
+ return 0;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+
+ if (dev == NULL)
+ return;
+
+ /* unmap the addr */
+ munmap(dev->mmap_addr, dev->mmap_size);
+ /* release the device */
+ ret = rte_vfio_release_device(dev->dirname, dev->name,
+ dev->vfio_dev_fd);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "cannot release device");
+ return;
+ }
+}
+#else
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev __rte_unused)
+{
+ return -1;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev __rte_unused)
+{
+}
+#endif
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h
new file mode 100644
index 0000000000..d0fdf6483f
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_VFIO_H_
+#define _BCMFS_VFIO_H_
+
+/* Attach the bcmfs device to vfio */
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev);
+
+/* Release the bcmfs device from vfio */
+void
+bcmfs_release_vfio(struct bcmfs_device *dev);
+
+#endif /* _BCMFS_VFIO_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index a4bdd8ee5d..fd39eba20e 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -6,5 +6,6 @@
deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
- 'bcmfs_device.c'
+ 'bcmfs_device.c',
+ 'bcmfs_vfio.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v3 3/8] crypto/bcmfs: add apis for queue pair management
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 2/8] crypto/bcmfs: add vfio support Vikas Gupta
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
` (5 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair management APIs which will be used by the crypto
device to manage h/w queues. A bcmfs device structure owns multiple
queue pairs, the number being derived from the size of the mapped
address range allocated to it.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 4 +
drivers/crypto/bcmfs/bcmfs_device.h | 5 +
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 32 +++
drivers/crypto/bcmfs/bcmfs_qp.c | 345 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 122 ++++++++++
drivers/crypto/bcmfs/meson.build | 3 +-
6 files changed, 510 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 1ea6c3b13e..7622720a40 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -12,6 +12,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
@@ -77,6 +78,9 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
if (bcmfs_attach_vfio(fsdev))
goto cleanup;
+ /* Maximum number of QPs supported */
+ fsdev->max_hw_qps = fsdev->mmap_size / BCMFS_HW_QUEUE_IO_ADDR_LEN;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index c41cc00318..a475373324 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -11,6 +11,7 @@
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
/* max number of dev nodes */
#define BCMFS_MAX_NODES 4
@@ -41,6 +42,10 @@ struct bcmfs_device {
uint8_t *mmap_addr;
/* mapped size */
uint32_t mmap_size;
+ /* max number of h/w queue pairs detected */
+ uint16_t max_hw_qps;
+ /* current qpairs in use */
+ struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
new file mode 100644
index 0000000000..7d5bb5d8fe
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_HW_DEFS_H_
+#define _BCMFS_HW_DEFS_H_
+
+#include <rte_atomic.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#define FS_RING_REGS_SIZE 0x10000
+#define FS_RING_DESC_SIZE 8
+#define FS_RING_BD_ALIGN_ORDER 12
+#define FS_RING_BD_DESC_PER_REQ 32
+#define FS_RING_CMPL_ALIGN_ORDER 13
+#define FS_RING_CMPL_SIZE (1024 * FS_RING_DESC_SIZE)
+#define FS_RING_MAX_REQ_COUNT 1024
+#define FS_RING_PAGE_SHFT 12
+#define FS_RING_PAGE_SIZE BIT(FS_RING_PAGE_SHFT)
+
+/* Minimum and maximum number of requests supported */
+#define FS_RM_MAX_REQS 4096
+#define FS_RM_MIN_REQS 32
+
+#endif /* _BCMFS_HW_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
new file mode 100644
index 0000000000..864e7bb746
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include <rte_atomic.h>
+#include <rte_bitmap.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_hw_defs.h"
+
+/* TX or submission queue name */
+static const char *txq_name = "tx";
+/* Completion or receive queue name */
+static const char *cmplq_name = "cmpl";
+
+/* Helper function */
+static int
+bcmfs_qp_check_queue_alignment(uint64_t phys_addr,
+ uint32_t align)
+{
+ if (((align - 1) & phys_addr) != 0)
+ return -EINVAL;
+ return 0;
+}
+
+static void
+bcmfs_queue_delete(struct bcmfs_queue *queue,
+ uint16_t queue_pair_id)
+{
+ const struct rte_memzone *mz;
+ int status = 0;
+
+ if (queue == NULL) {
+ BCMFS_LOG(DEBUG, "Invalid queue");
+ return;
+ }
+ BCMFS_LOG(DEBUG, "Free ring %d type %d, memzone: %s",
+ queue_pair_id, queue->q_type, queue->memz_name);
+
+ mz = rte_memzone_lookup(queue->memz_name);
+ if (mz != NULL) {
+ /* Write an unused pattern to the queue memory. */
+ memset(queue->base_addr, 0x9B, queue->queue_size);
+ status = rte_memzone_free(mz);
+ if (status != 0)
+ BCMFS_LOG(ERR, "Error %d on freeing queue %s",
+ status, queue->memz_name);
+ } else {
+ BCMFS_LOG(DEBUG, "queue %s doesn't exist",
+ queue->memz_name);
+ }
+}
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+ int socket_id, unsigned int align)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(queue_name);
+ if (mz != NULL) {
+ if (((size_t)queue_size <= mz->len) &&
+ (socket_id == SOCKET_ID_ANY ||
+ socket_id == mz->socket_id)) {
+ BCMFS_LOG(DEBUG, "re-use memzone already "
+ "allocated for %s", queue_name);
+ return mz;
+ }
+
+ BCMFS_LOG(ERR, "Incompatible memzone already "
+ "allocated %s, size %u, socket %d. "
+ "Requested size %u, socket %u",
+ queue_name, (uint32_t)mz->len,
+ mz->socket_id, queue_size, socket_id);
+ return NULL;
+ }
+
+ BCMFS_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+ queue_name, queue_size, socket_id);
+ return rte_memzone_reserve_aligned(queue_name, queue_size,
+ socket_id, RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+static int
+bcmfs_queue_create(struct bcmfs_queue *queue,
+ struct bcmfs_qp_config *qp_conf,
+ uint16_t queue_pair_id,
+ enum bcmfs_queue_type qtype)
+{
+ const struct rte_memzone *qp_mz;
+ char q_name[16];
+ unsigned int align;
+ uint32_t queue_size_bytes;
+ int ret;
+
+ if (qtype == BCMFS_RM_TXQ) {
+ strlcpy(q_name, txq_name, sizeof(q_name));
+ align = 1U << FS_RING_BD_ALIGN_ORDER;
+ queue_size_bytes = qp_conf->nb_descriptors *
+ qp_conf->max_descs_req * FS_RING_DESC_SIZE;
+ /* round the queue size up to a multiple of 4K pages */
+ queue_size_bytes = RTE_ALIGN_MUL_CEIL(queue_size_bytes,
+ FS_RING_PAGE_SIZE);
+ } else if (qtype == BCMFS_RM_CPLQ) {
+ strlcpy(q_name, cmplq_name, sizeof(q_name));
+ align = 1U << FS_RING_CMPL_ALIGN_ORDER;
+
+ /*
+ * Memory size for cmpl + MSI
+ * For MSI allocate here itself and so we allocate twice
+ */
+ queue_size_bytes = 2 * FS_RING_CMPL_SIZE;
+ } else {
+ BCMFS_LOG(ERR, "Invalid queue selection");
+ return -EINVAL;
+ }
+
+ queue->q_type = qtype;
+
+ /*
+ * Allocate a memzone for the queue - create a unique name.
+ */
+ snprintf(queue->memz_name, sizeof(queue->memz_name),
+ "%s_%d_%s_%d_%s", "bcmfs", qtype, "qp_mem",
+ queue_pair_id, q_name);
+ qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+ 0, align);
+ if (qp_mz == NULL) {
+ BCMFS_LOG(ERR, "Failed to allocate ring memzone");
+ return -ENOMEM;
+ }
+
+ if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
+ BCMFS_LOG(ERR, "Invalid alignment on queue create "
+ "0x%" PRIx64,
+ qp_mz->iova);
+ ret = -EFAULT;
+ goto queue_create_err;
+ }
+
+ queue->base_addr = (char *)qp_mz->addr;
+ queue->base_phys_addr = qp_mz->iova;
+ queue->queue_size = queue_size_bytes;
+
+ return 0;
+
+queue_create_err:
+ rte_memzone_free(qp_mz);
+
+ return ret;
+}
+
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr)
+{
+ struct bcmfs_qp *qp = *qp_addr;
+
+ if (qp == NULL) {
+ BCMFS_LOG(DEBUG, "qp already freed");
+ return 0;
+ }
+
+ /* Don't free memory if there are still responses to be processed */
+ if ((qp->stats.enqueued_count - qp->stats.dequeued_count) == 0) {
+ /* Stop the h/w ring */
+ qp->ops->stopq(qp);
+ /* Delete the queue pairs */
+ bcmfs_queue_delete(&qp->tx_q, qp->qpair_id);
+ bcmfs_queue_delete(&qp->cmpl_q, qp->qpair_id);
+ } else {
+ return -EAGAIN;
+ }
+
+ rte_bitmap_reset(qp->ctx_bmp);
+ rte_free(qp->ctx_bmp_mem);
+ rte_free(qp->ctx_pool);
+
+ rte_free(qp);
+ *qp_addr = NULL;
+
+ return 0;
+}
+
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *qp_conf)
+{
+ struct bcmfs_qp *qp;
+ uint32_t bmp_size;
+ uint32_t nb_descriptors = qp_conf->nb_descriptors;
+ uint16_t i;
+ int rc;
+
+ if (nb_descriptors < FS_RM_MIN_REQS) {
+ BCMFS_LOG(ERR, "Can't create qp for %u descriptors",
+ nb_descriptors);
+ return -EINVAL;
+ }
+
+ if (nb_descriptors > FS_RM_MAX_REQS)
+ nb_descriptors = FS_RM_MAX_REQS;
+
+ if (qp_conf->iobase == NULL) {
+ BCMFS_LOG(ERR, "IO config space is null");
+ return -EINVAL;
+ }
+
+ qp = rte_zmalloc_socket("BCM FS PMD qp metadata",
+ sizeof(*qp), RTE_CACHE_LINE_SIZE,
+ qp_conf->socket_id);
+ if (qp == NULL) {
+ BCMFS_LOG(ERR, "Failed to alloc mem for qp struct");
+ return -ENOMEM;
+ }
+
+ qp->qpair_id = queue_pair_id;
+ qp->ioreg = qp_conf->iobase;
+ qp->nb_descriptors = nb_descriptors;
+
+ qp->stats.enqueued_count = 0;
+ qp->stats.dequeued_count = 0;
+
+ rc = bcmfs_queue_create(&qp->tx_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_TXQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Tx queue create failed queue_pair_id %u",
+ queue_pair_id);
+ goto create_err;
+ }
+
+ rc = bcmfs_queue_create(&qp->cmpl_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_CPLQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Cmpl queue create failed queue_pair_id= %u",
+ queue_pair_id);
+ goto q_create_err;
+ }
+
+ /* ctx saving bitmap */
+ bmp_size = rte_bitmap_get_memory_footprint(nb_descriptors);
+
+ /* Allocate memory for bitmap */
+ qp->ctx_bmp_mem = rte_zmalloc("ctx_bmp_mem", bmp_size,
+ RTE_CACHE_LINE_SIZE);
+ if (qp->ctx_bmp_mem == NULL) {
+ rc = -ENOMEM;
+ goto qp_create_err;
+ }
+
+ /* Initialize pool resource bitmap array */
+ qp->ctx_bmp = rte_bitmap_init(nb_descriptors, qp->ctx_bmp_mem,
+ bmp_size);
+ if (qp->ctx_bmp == NULL) {
+ rc = -EINVAL;
+ goto bmap_mem_free;
+ }
+
+ /* Mark all pools available */
+ for (i = 0; i < nb_descriptors; i++)
+ rte_bitmap_set(qp->ctx_bmp, i);
+
+ /* Allocate memory for context */
+ qp->ctx_pool = rte_zmalloc("qp_ctx_pool",
+ sizeof(unsigned long) *
+ nb_descriptors, 0);
+ if (qp->ctx_pool == NULL) {
+ BCMFS_LOG(ERR, "ctx pool allocation failed");
+ rc = -ENOMEM;
+ goto bmap_free;
+ }
+
+ /* Start h/w ring */
+ qp->ops->startq(qp);
+
+ *qp_addr = qp;
+
+ return 0;
+
+bmap_free:
+ rte_bitmap_reset(qp->ctx_bmp);
+bmap_mem_free:
+ rte_free(qp->ctx_bmp_mem);
+qp_create_err:
+ bcmfs_queue_delete(&qp->cmpl_q, queue_pair_id);
+q_create_err:
+ bcmfs_queue_delete(&qp->tx_q, queue_pair_id);
+create_err:
+ rte_free(qp);
+
+ return rc;
+}
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ register uint32_t nb_ops_sent = 0;
+ uint16_t nb_ops_possible = nb_ops;
+ int ret;
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ while (nb_ops_sent != nb_ops_possible) {
+ ret = tmp_qp->ops->enq_one_req(qp, *ops);
+ if (ret != 0) {
+ tmp_qp->stats.enqueue_err_count++;
+ /* This message cannot be enqueued */
+ if (nb_ops_sent == 0)
+ return 0;
+ goto ring_db;
+ }
+
+ ops++;
+ nb_ops_sent++;
+ }
+
+ring_db:
+ tmp_qp->stats.enqueued_count += nb_ops_sent;
+ tmp_qp->ops->ring_db(tmp_qp);
+
+ return nb_ops_sent;
+}
+
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ uint32_t deq = tmp_qp->ops->dequeue(tmp_qp, ops, nb_ops);
+
+ tmp_qp->stats.dequeued_count += deq;
+
+ return deq;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
new file mode 100644
index 0000000000..52c487956e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_QP_H_
+#define _BCMFS_QP_H_
+
+#include <rte_memzone.h>
+
+/* Maximum number of h/w queues supported by device */
+#define BCMFS_MAX_HW_QUEUES 32
+
+/* H/W queue IO address space len */
+#define BCMFS_HW_QUEUE_IO_ADDR_LEN (64 * 1024)
+
+/* Maximum size of device ops name */
+#define BCMFS_HW_OPS_NAMESIZE 32
+
+enum bcmfs_queue_type {
+ /* TX or submission queue */
+ BCMFS_RM_TXQ,
+ /* Completion or receive queue */
+ BCMFS_RM_CPLQ
+};
+
+struct bcmfs_qp_stats {
+ /* Count of all operations enqueued */
+ uint64_t enqueued_count;
+ /* Count of all operations dequeued */
+ uint64_t dequeued_count;
+ /* Total error count on operations enqueued */
+ uint64_t enqueue_err_count;
+ /* Total error count on operations dequeued */
+ uint64_t dequeue_err_count;
+};
+
+struct bcmfs_qp_config {
+ /* Socket to allocate memory on */
+ int socket_id;
+ /* Mapped iobase for qp */
+ void *iobase;
+ /* nb_descriptors or requests a h/w queue can accommodate */
+ uint16_t nb_descriptors;
+ /* Maximum number of h/w descriptors needed by a request */
+ uint16_t max_descs_req;
+};
+
+struct bcmfs_queue {
+ /* Base virt address */
+ void *base_addr;
+ /* Base iova */
+ rte_iova_t base_phys_addr;
+ /* Queue type */
+ enum bcmfs_queue_type q_type;
+ /* Queue size based on nb_descriptors and max_descs_reqs */
+ uint32_t queue_size;
+ union {
+ /* s/w pointer for tx h/w queue */
+ uint32_t tx_write_ptr;
+ /* s/w pointer for completion h/w queue */
+ uint32_t cmpl_read_ptr;
+ };
+ /* Memzone name */
+ char memz_name[RTE_MEMZONE_NAMESIZE];
+};
+
+struct bcmfs_qp {
+ /* Queue-pair ID */
+ uint16_t qpair_id;
+ /* Mapped IO address */
+ void *ioreg;
+ /* A TX queue */
+ struct bcmfs_queue tx_q;
+ /* A Completion queue */
+ struct bcmfs_queue cmpl_q;
+ /* Number of requests queue can accommodate */
+ uint32_t nb_descriptors;
+ /* Number of pending requests enqueued to the h/w queue */
+ uint16_t nb_pending_requests;
+ /* A pool which acts as a hash for <request-ID, virt address> pairs */
+ unsigned long *ctx_pool;
+ /* virt address for mem allocated for bitmap */
+ void *ctx_bmp_mem;
+ /* Bitmap */
+ struct rte_bitmap *ctx_bmp;
+ /* Associated stats */
+ struct bcmfs_qp_stats stats;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
+
+} __rte_cache_aligned;
+
+/* Structure defining h/w queue pair operations */
+struct bcmfs_hw_queue_pair_ops {
+ /* ops name */
+ char name[BCMFS_HW_OPS_NAMESIZE];
+ /* Enqueue an object */
+ int (*enq_one_req)(struct bcmfs_qp *qp, void *obj);
+ /* Ring doorbell */
+ void (*ring_db)(struct bcmfs_qp *qp);
+ /* Dequeue objects */
+ uint16_t (*dequeue)(struct bcmfs_qp *qp, void **obj,
+ uint16_t nb_ops);
+ /* Start the h/w queue */
+ int (*startq)(struct bcmfs_qp *qp);
+ /* Stop the h/w queue */
+ void (*stopq)(struct bcmfs_qp *qp);
+};
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr);
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *bcmfs_conf);
+
+#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index fd39eba20e..7e2bcbf14b 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -7,5 +7,6 @@ deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
- 'bcmfs_vfio.c'
+ 'bcmfs_vfio.c',
+ 'bcmfs_qp.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v3 4/8] crypto/bcmfs: add hw queue pair operations
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
` (2 preceding siblings ...)
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
` (4 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair operations exported by supported devices.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 51 ++
drivers/crypto/bcmfs/bcmfs_device.h | 16 +
drivers/crypto/bcmfs/bcmfs_qp.c | 1 +
drivers/crypto/bcmfs/bcmfs_qp.h | 4 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 743 ++++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 +++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 51 ++
drivers/crypto/bcmfs/meson.build | 5 +-
10 files changed, 1658 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
diff --git a/drivers/crypto/bcmfs/bcmfs_dev_msg.h b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
new file mode 100644
index 0000000000..5b50bde35a
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_MSG_H_
+#define _BCMFS_DEV_MSG_H_
+
+#define MAX_SRC_ADDR_BUFFERS 8
+#define MAX_DST_ADDR_BUFFERS 3
+
+struct bcmfs_qp_message {
+ /** Physical address of each source */
+ uint64_t srcs_addr[MAX_SRC_ADDR_BUFFERS];
+ /** Length of each sources */
+ uint32_t srcs_len[MAX_SRC_ADDR_BUFFERS];
+ /** Total number of sources */
+ unsigned int srcs_count;
+ /** Physical address of each destination */
+ uint64_t dsts_addr[MAX_DST_ADDR_BUFFERS];
+ /** Length of each destination */
+ uint32_t dsts_len[MAX_DST_ADDR_BUFFERS];
+ /** Total number of destinations */
+ unsigned int dsts_count;
+
+ void *ctx;
+};
+
+#endif /* _BCMFS_DEV_MSG_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 7622720a40..6ff65adfc7 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -44,6 +44,47 @@ static struct bcmfs_device_attr dev_table[] = {
}
};
+struct bcmfs_hw_queue_pair_ops_table bcmfs_hw_queue_pair_ops_table = {
+ .tl = RTE_SPINLOCK_INITIALIZER,
+ .num_ops = 0
+};
+
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *h)
+{
+ struct bcmfs_hw_queue_pair_ops *ops;
+ int16_t ops_index;
+
+ rte_spinlock_lock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ if (h->enq_one_req == NULL || h->dequeue == NULL ||
+ h->ring_db == NULL || h->startq == NULL || h->stopq == NULL) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR,
+ "Missing callback while registering device ops");
+ return -EINVAL;
+ }
+
+ if (strlen(h->name) >= sizeof(ops->name) - 1) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR, "%s(): fs device_ops <%s>: name too long",
+ __func__, h->name);
+ return -EINVAL;
+ }
+
+ ops_index = bcmfs_hw_queue_pair_ops_table.num_ops++;
+ ops = &bcmfs_hw_queue_pair_ops_table.qp_ops[ops_index];
+ strlcpy(ops->name, h->name, sizeof(ops->name));
+ ops->enq_one_req = h->enq_one_req;
+ ops->dequeue = h->dequeue;
+ ops->ring_db = h->ring_db;
+ ops->startq = h->startq;
+ ops->stopq = h->stopq;
+
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ return ops_index;
+}
+
TAILQ_HEAD(fsdev_list, bcmfs_device);
static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
@@ -54,6 +95,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
enum bcmfs_device_type dev_type __rte_unused)
{
struct bcmfs_device *fsdev;
+ uint32_t i;
fsdev = rte_calloc(__func__, 1, sizeof(*fsdev), 0);
if (!fsdev)
@@ -69,6 +111,15 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
goto cleanup;
}
+ /* check if a registered ops name is present in the directory path */
+ for (i = 0; i < bcmfs_hw_queue_pair_ops_table.num_ops; i++)
+ if (strstr(dirpath,
+ bcmfs_hw_queue_pair_ops_table.qp_ops[i].name))
+ fsdev->sym_hw_qp_ops =
+ &bcmfs_hw_queue_pair_ops_table.qp_ops[i];
+ if (!fsdev->sym_hw_qp_ops)
+ goto cleanup;
+
strcpy(fsdev->dirname, dirpath);
strcpy(fsdev->name, devname);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index a475373324..9e40c5d747 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -8,6 +8,7 @@
#include <sys/queue.h>
+#include <rte_spinlock.h>
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
@@ -28,6 +29,19 @@ enum bcmfs_device_type {
BCMFS_UNKNOWN
};
+/* A table to store registered queue pair operations */
+struct bcmfs_hw_queue_pair_ops_table {
+ rte_spinlock_t tl;
+ /* Number of used ops structs in the table. */
+ uint32_t num_ops;
+ /* Storage for all possible ops structs. */
+ struct bcmfs_hw_queue_pair_ops qp_ops[BCMFS_MAX_NODES];
+};
+
+/* HW queue pair ops register function */
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops
+ *qp_ops);
+
struct bcmfs_device {
TAILQ_ENTRY(bcmfs_device) next;
/* Directory path for vfio */
@@ -46,6 +60,8 @@ struct bcmfs_device {
uint16_t max_hw_qps;
/* current qpairs in use */
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
+ /* queue pair ops exported by symmetric crypto hw */
+ struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
};
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index 864e7bb746..ec1327b780 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -227,6 +227,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
qp->qpair_id = queue_pair_id;
qp->ioreg = qp_conf->iobase;
qp->nb_descriptors = nb_descriptors;
+ qp->ops = qp_conf->ops;
qp->stats.enqueued_count = 0;
qp->stats.dequeued_count = 0;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 52c487956e..59785865b0 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -44,6 +44,8 @@ struct bcmfs_qp_config {
uint16_t nb_descriptors;
/* Maximum number of h/w descriptors needed by a request */
uint16_t max_descs_req;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
};
struct bcmfs_queue {
@@ -61,6 +63,8 @@ struct bcmfs_queue {
/* s/w pointer for completion h/w queue*/
uint32_t cmpl_read_ptr;
};
+ /* number of in-flight descriptors accumulated before the next doorbell ring */
+ uint16_t descs_inflight;
/* Memzone name */
char memz_name[RTE_MEMZONE_NAMESIZE];
};
diff --git a/drivers/crypto/bcmfs/hw/bcmfs4_rm.c b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
new file mode 100644
index 0000000000..a6206ab023
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
@@ -0,0 +1,743 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_rm_common.h"
+
+/* FS4 configuration */
+#define RING_BD_TOGGLE_INVALID(offset) \
+ (((offset) >> FS_RING_BD_ALIGN_ORDER) & 0x1)
+#define RING_BD_TOGGLE_VALID(offset) \
+ (!RING_BD_TOGGLE_INVALID(offset))
+
+#define RING_VER_MAGIC 0x76303031
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDR 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << FS_RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_CMPL_ALIGN_ORDER) & 0x7ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* ====== Broadcom FS4-RM ring descriptor defines ===== */
+
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 58
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 58
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 58
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs4_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return type == NPTR_TYPE;
+}
+
+static uint64_t
+bcmfs4_next_table_desc(uint32_t toggle, uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NPTR_TOGGLE_SHIFT, NPTR_TOGGLE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_null_desc(uint32_t toggle)
+{
+ return (rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NULL_TOGGLE_SHIFT, NULL_TOGGLE_MASK));
+}
+
+static void
+bcmfs4_flip_header_toggle(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+
+ if (desc & ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT))
+ desc &= ~((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+ else
+ desc |= ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+
+ rm_write_desc(desc_ptr, desc);
+}
+
+static uint64_t
+bcmfs4_header_desc(uint32_t toggle, uint32_t startpkt,
+ uint32_t endpkt, uint32_t bdcount,
+ uint32_t flags, uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, HEADER_TOGGLE_SHIFT, HEADER_TOGGLE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT,
+ HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static void
+bcmfs4_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, uint32_t *toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _toggle, _startpkt, _endpkt, _bdcount;
+
+ /*
+ * Each request or packet starts with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is given by the BDCOUNT field of
+ * the HEADER descriptor. The maximum value of the BDCOUNT field is
+ * 31, which means at most 31 non-HEADER descriptors can follow one
+ * HEADER descriptor.
+ *
+ * In general use, the number of non-HEADER descriptors can easily
+ * go beyond 31. To handle this situation, we have packet (or
+ * request) extension bits (STARTPKT and ENDPKT) in the HEADER
+ * descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of a request
+ * (or packet) has STARTPKT=1 and ENDPKT=0, intermediate HEADER
+ * descriptors have STARTPKT=0 and ENDPKT=0, and the last HEADER
+ * descriptor has STARTPKT=0 and ENDPKT=1. Also, the TOGGLE bit of
+ * the first HEADER is set to the invalid state to ensure that the
+ * FlexDMA engine does not start fetching descriptors until all
+ * descriptors are enqueued. The caller of this function flips the
+ * TOGGLE bit of the first HEADER after all descriptors are
+ * enqueued.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _toggle = (nhpos == 0) ? !(*toggle) : (*toggle);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs4_header_desc(_toggle, _startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+}
+
+static uint64_t
+bcmfs4_src_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_msrc_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_dst_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_mdst_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs4_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static uint32_t
+estimate_nonheader_desc_count(struct bcmfs_qp_message *msg)
+{
+ uint32_t cnt = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+
+ while (src < msg->srcs_count ||
+ dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ cnt++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+ while (dst_target && dst < msg->dsts_count) {
+ cnt++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++;
+ }
+ }
+
+ return cnt;
+}
+
+static void *
+bcmfs4_enqueue_msg(struct bcmfs_qp_message *msg,
+ uint32_t nhcnt, uint32_t reqid,
+ void *desc_ptr, uint32_t toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhpos = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+ void *orig_desc_ptr = desc_ptr;
+
+ if (!desc_ptr || !start_desc || !end_desc)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ while (src < msg->srcs_count || dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ if (msg->srcs_len[src] & 0xf) {
+ d = bcmfs4_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ } else {
+ d = bcmfs4_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+
+ while (dst_target && (dst < msg->dsts_count)) {
+ if (msg->dsts_len[dst] & 0xf) {
+ d = bcmfs4_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ } else {
+ d = bcmfs4_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++; /* for next buffer */
+ }
+ }
+
+ /* Null descriptor with invalid toggle bit */
+ rm_write_desc(desc_ptr, bcmfs4_null_desc(!toggle));
+
+ /* Ensure that descriptors have been written to memory */
+ rte_smp_wmb();
+
+ bcmfs4_flip_header_toggle(orig_desc_ptr);
+
+ return desc_ptr;
+}
+
+static int
+bcmfs4_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ int reqid;
+ void *next;
+ uint32_t nhcnt;
+ int ret = 0;
+ uint32_t pos = 0;
+ uint64_t slab = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs4_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /*
+ * Number of required descriptors = number of non-header descriptors +
+ * number of header descriptors +
+ * 1x null descriptor
+ */
+ nhcnt = estimate_nonheader_desc_count(msg);
+
+ /* Write descriptors to ring */
+ next = bcmfs4_enqueue_msg(msg, nhcnt, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ RING_BD_TOGGLE_VALID(txq->tx_write_ptr),
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void
+bcmfs4_ring_doorbell_qp(struct bcmfs_qp *qp __rte_unused)
+{
+ /* no doorbell method supported */
+}
+
+static uint16_t
+bcmfs4_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that we
+ * never process more completions than are outstanding.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get the current completion read and write offsets.
+ * Note: the completion write pointer must be read at least once
+ * after an MSI interrupt because the HW maintains internal MSI
+ * status that allows the next MSI interrupt only after the
+ * completion write pointer has been read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* Ensure completion pointer is read before proceeding */
+ rte_smp_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Determine message pointer based on reqid */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs4_start_qp(struct bcmfs_qp *qp)
+{
+ int timeout;
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs4_next_table_desc(RING_BD_TOGGLE_VALID(off),
+ next_addr);
+ else
+ d = bcmfs4_null_desc(RING_BD_TOGGLE_INVALID(off));
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test mid-run (Ctrl+C), all subsequent
+ * test runs will fail because the s/w cmpl_read_offset and the h/w
+ * cmpl_write_offset will point at different completion BDs. To
+ * handle this we flush all the rings at startup instead of in the
+ * shutdown function.
+ * A ring flush resets the h/w cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, s/w must read the
+ * cmpl_write_offset, otherwise the DME/AE will not come out
+ * of the flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ val = BD_START_ADDR_VALUE(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_BD_START_ADDR);
+
+ /* BD write pointer will be same as HW write pointer */
+ tx_queue->tx_write_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_BD_WRITE_PTR);
+ tx_queue->tx_write_ptr *= FS_RING_DESC_SIZE;
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Program completion start address */
+ val = CMPL_START_ADDR_VALUE(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CMPL_START_ADDR);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ /* We leave 1k * FS_RING_DESC_SIZE size from base phys for MSI */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id,
+ (uint8_t *)qp->ioreg + RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs4_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs4_qp_ops = {
+ .name = "fs4",
+ .enq_one_req = bcmfs4_enqueue_single_request_qp,
+ .ring_db = bcmfs4_ring_doorbell_qp,
+ .dequeue = bcmfs4_dequeue_qp,
+ .startq = bcmfs4_start_qp,
+ .stopq = bcmfs4_shutdown_qp,
+};
+
+RTE_INIT(bcmfs4_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs4_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
new file mode 100644
index 0000000000..00ea7a1b37
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
@@ -0,0 +1,677 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_qp.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_device.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Ring version */
+#define RING_VER_MAGIC 0x76303032
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDRESS_LSB 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR_LSB 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+#define RING_BD_START_ADDRESS_MSB 0x078
+#define RING_CMPL_START_ADDR_MSB 0x07c
+#define RING_DOORBELL_BD_WRITE_COUNT 0x074
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_CMPL_ALIGN_ORDER) & 0x07ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 59
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 59
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 59
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs5_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return type == NPTR_TYPE;
+}
+
+static uint64_t
+bcmfs5_next_table_desc(uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_null_desc(void)
+{
+ return rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+}
+
+static uint64_t
+bcmfs5_header_desc(uint32_t startpkt, uint32_t endpkt,
+ uint32_t bdcount, uint32_t flags,
+ uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT, HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static int
+bcmfs5_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, void *start_desc,
+ void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _startpkt, _endpkt, _bdcount;
+ int is_nxt_page = 0;
+
+ /*
+ * Each request or packet starts with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is given by the BDCOUNT field of
+ * the HEADER descriptor. The maximum value of the BDCOUNT field is
+ * 31, which means at most 31 non-HEADER descriptors can follow one
+ * HEADER descriptor.
+ *
+ * In general use, the number of non-HEADER descriptors can easily
+ * go beyond 31. To handle this situation, we have packet (or
+ * request) extension bits (STARTPKT and ENDPKT) in the HEADER
+ * descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of a request
+ * (or packet) has STARTPKT=1 and ENDPKT=0, intermediate HEADER
+ * descriptors have STARTPKT=0 and ENDPKT=0, and the last HEADER
+ * descriptor has STARTPKT=0 and ENDPKT=1.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs5_header_desc(_startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+
+ return is_nxt_page;
+}
+
+static uint64_t
+bcmfs5_src_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_msrc_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_dst_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_mdst_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs5_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static void *
+bcmfs5_enqueue_msg(struct bcmfs_queue *txq,
+ struct bcmfs_qp_message *msg,
+ uint32_t reqid, void *desc_ptr,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ unsigned int src, dst;
+ uint32_t nhpos = 0;
+ int nxt_page = 0;
+ uint32_t nhcnt = msg->srcs_count + msg->dsts_count;
+
+ if (desc_ptr == NULL || start_desc == NULL || end_desc == NULL)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ for (src = 0; src < msg->srcs_count; src++) {
+ if (msg->srcs_len[src] & 0xf)
+ d = bcmfs5_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ else
+ d = bcmfs5_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ for (dst = 0; dst < msg->dsts_count; dst++) {
+ if (msg->dsts_len[dst] & 0xf)
+ d = bcmfs5_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ else
+ d = bcmfs5_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ txq->descs_inflight += nhcnt + 1;
+
+ return desc_ptr;
+}
+
+static int
+bcmfs5_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ void *next;
+ int reqid;
+ int ret = 0;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs5_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /* Write descriptors to ring */
+ next = bcmfs5_enqueue_msg(txq, msg, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void bcmfs5_write_doorbell(struct bcmfs_qp *qp)
+{
+ struct bcmfs_queue *txq = &qp->tx_q;
+
+ /* sync before ringing the doorbell */
+ rte_wmb();
+
+ FS_MMIO_WRITE32(txq->descs_inflight,
+ (uint8_t *)qp->ioreg + RING_DOORBELL_BD_WRITE_COUNT);
+
+ /* reset the count */
+ txq->descs_inflight = 0;
+}
+
+static uint16_t
+bcmfs5_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that we
+ * never process more completions than there are outstanding ops.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get current completion read and write offset
+ *
+ * Note: We should read completion write pointer at least once
+ * after we get a MSI interrupt because HW maintains internal
+ * MSI status which will allow next MSI interrupt only after
+ * completion write pointer is read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* read the ring cmpl write ptr before cmpl read offset */
+ rte_smp_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Retrieve context */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs5_start_qp(struct bcmfs_qp *qp)
+{
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ int timeout;
+ uint32_t bd_high, bd_low, cmpl_high, cmpl_low;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs5_next_table_desc(next_addr);
+ else
+ d = bcmfs5_null_desc();
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a run in between (Ctrl+C), all subsequent
+ * runs will fail because the SW cmpl_read_offset and the HW
+ * cmpl_write_offset will point at different completion BDs. To
+ * handle this, flush all rings at startup instead of in the
+ * shutdown function. A ring flush resets the HW cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous run was stopped in between, SW has to read
+ * cmpl_write_offset here, otherwise the DME/AE will not come
+ * out of the flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ bd_low = lower_32_bits(tx_queue->base_phys_addr);
+ bd_high = upper_32_bits(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(bd_low, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_LSB);
+ FS_MMIO_WRITE32(bd_high, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_MSB);
+
+ tx_queue->tx_write_ptr = 0;
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ /* Program completion start address */
+ cmpl_low = lower_32_bits(cmpl_queue->base_phys_addr);
+ cmpl_high = upper_32_bits(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(cmpl_low, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_LSB);
+ FS_MMIO_WRITE32(cmpl_high, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_MSB);
+
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id, (uint8_t *)qp->ioreg +
+ RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs5_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs5_qp_ops = {
+ .name = "fs5",
+ .enq_one_req = bcmfs5_enqueue_single_request_qp,
+ .ring_db = bcmfs5_write_doorbell,
+ .dequeue = bcmfs5_dequeue_qp,
+ .startq = bcmfs5_start_qp,
+ .stopq = bcmfs5_shutdown_qp,
+};
+
+RTE_INIT(bcmfs5_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs5_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
new file mode 100644
index 0000000000..9445d28f92
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Completion descriptor format */
+#define FS_CMPL_OPAQUE_SHIFT 0
+#define FS_CMPL_OPAQUE_MASK 0xffff
+#define FS_CMPL_ENGINE_STATUS_SHIFT 16
+#define FS_CMPL_ENGINE_STATUS_MASK 0xffff
+#define FS_CMPL_DME_STATUS_SHIFT 32
+#define FS_CMPL_DME_STATUS_MASK 0xffff
+#define FS_CMPL_RM_STATUS_SHIFT 48
+#define FS_CMPL_RM_STATUS_MASK 0xffff
+/* Completion RM status code */
+#define FS_RM_STATUS_CODE_SHIFT 0
+#define FS_RM_STATUS_CODE_MASK 0x3ff
+#define FS_RM_STATUS_CODE_GOOD 0x0
+#define FS_RM_STATUS_CODE_AE_TIMEOUT 0x3ff
+
+/* Completion DME status code */
+#define FS_DME_STATUS_MEM_COR_ERR BIT(0)
+#define FS_DME_STATUS_MEM_UCOR_ERR BIT(1)
+#define FS_DME_STATUS_FIFO_UNDRFLOW BIT(2)
+#define FS_DME_STATUS_FIFO_OVERFLOW BIT(3)
+#define FS_DME_STATUS_RRESP_ERR BIT(4)
+#define FS_DME_STATUS_BRESP_ERR BIT(5)
+#define FS_DME_STATUS_ERROR_MASK (FS_DME_STATUS_MEM_COR_ERR | \
+ FS_DME_STATUS_MEM_UCOR_ERR | \
+ FS_DME_STATUS_FIFO_UNDRFLOW | \
+ FS_DME_STATUS_FIFO_OVERFLOW | \
+ FS_DME_STATUS_RRESP_ERR | \
+ FS_DME_STATUS_BRESP_ERR)
+
+/* APIs related to ring manager descriptors */
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask)
+{
+ return (val & mask) << shift;
+}
+
+uint64_t
+rm_read_desc(void *desc_ptr)
+{
+ return le64_to_cpu(*((uint64_t *)desc_ptr));
+}
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc)
+{
+ *((uint64_t *)desc_ptr) = cpu_to_le64(desc);
+}
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc)
+{
+ return (uint32_t)(cmpl_desc & FS_CMPL_OPAQUE_MASK);
+}
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc)
+{
+ uint32_t status;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_DME_STATUS_SHIFT,
+ FS_CMPL_DME_STATUS_MASK);
+ if (status & FS_DME_STATUS_ERROR_MASK)
+ return -EIO;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_RM_STATUS_SHIFT,
+ FS_CMPL_RM_STATUS_MASK);
+ status &= FS_RM_STATUS_CODE_MASK;
+ if (status == FS_RM_STATUS_CODE_AE_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
new file mode 100644
index 0000000000..e5d30d75c0
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_COMMON_H_
+#define _BCMFS_RM_COMMON_H_
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* 32-bit MMIO register write */
+#define FS_MMIO_WRITE32(value, addr) rte_write32_relaxed((value), (addr))
+/* 32-bit MMIO register read */
+#define FS_MMIO_READ32(addr) rte_read32_relaxed((addr))
+
+/* Descriptor helper macros */
+#define FS_DESC_DEC(d, s, m) (((d) >> (s)) & (m))
+
+#define FS_RING_BD_ALIGN_CHECK(addr) \
+ (!((addr) & ((0x1 << FS_RING_BD_ALIGN_ORDER) - 1)))
+
+#define cpu_to_le64 rte_cpu_to_le_64
+#define cpu_to_le32 rte_cpu_to_le_32
+#define cpu_to_le16 rte_cpu_to_le_16
+
+#define le64_to_cpu rte_le_to_cpu_64
+#define le32_to_cpu rte_le_to_cpu_32
+#define le16_to_cpu rte_le_to_cpu_16
+
+#define lower_32_bits(x) ((uint32_t)(x))
+#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16))
+
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask);
+uint64_t
+rm_read_desc(void *desc_ptr);
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc);
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc);
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc);
+
+#endif /* _BCMFS_RM_COMMON_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 7e2bcbf14b..cd58bd5e25 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -8,5 +8,8 @@ sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
'bcmfs_vfio.c',
- 'bcmfs_qp.c'
+ 'bcmfs_qp.c',
+ 'hw/bcmfs4_rm.c',
+ 'hw/bcmfs5_rm.c',
+ 'hw/bcmfs_rm_common.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 5/8] crypto/bcmfs: create a symmetric cryptodev
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
` (3 preceding siblings ...)
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
` (3 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Create a symmetric crypto device and add supported cryptodev ops.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 15 ++
drivers/crypto/bcmfs/bcmfs_device.h | 9 +
drivers/crypto/bcmfs/bcmfs_qp.c | 37 +++
drivers/crypto/bcmfs/bcmfs_qp.h | 16 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 387 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +++
drivers/crypto/bcmfs/bcmfs_sym_req.h | 22 ++
drivers/crypto/bcmfs/meson.build | 3 +-
8 files changed, 526 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 6ff65adfc7..0c99cc7adf 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -14,6 +14,7 @@
#include "bcmfs_logs.h"
#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
+#include "bcmfs_sym_pmd.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -240,6 +241,7 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
char out_dirname[BCMFS_MAX_PATH_LEN];
uint32_t fsdev_dev[BCMFS_MAX_NODES];
enum bcmfs_device_type dtype;
+ int err;
int i = 0;
int dev_idx;
int count = 0;
@@ -291,7 +293,20 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
return -ENODEV;
}
+ err = bcmfs_sym_dev_create(fsdev);
+ if (err) {
+ BCMFS_LOG(WARNING,
+ "Failed to create BCMFS SYM PMD for device %s",
+ fsdev->name);
+ goto pmd_create_fail;
+ }
+
return 0;
+
+pmd_create_fail:
+ fsdev_release(fsdev);
+
+ return err;
}
static int
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 9e40c5d747..e8a9c40910 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -62,6 +62,15 @@ struct bcmfs_device {
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
/* queue pair ops exported by symmetric crypto hw */
struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
+ /* a cryptodevice attached to bcmfs device */
+ struct rte_cryptodev *cdev;
+ /* a rte_device to register with cryptodev */
+ struct rte_device sym_rte_dev;
+ /* private info to keep with cryptodev */
+ struct bcmfs_sym_dev_private *sym_dev;
};
+/* stats exported by device */
+
#endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index ec1327b780..cb5ff6c61b 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -344,3 +344,40 @@ bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
return deq;
}
+
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats)
+{
+ int i;
+
+ if (stats == NULL) {
+ BCMFS_LOG(ERR, "invalid param: stats %p",
+ stats);
+ return;
+ }
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+
+ stats->enqueued_count += qp[i]->stats.enqueued_count;
+ stats->dequeued_count += qp[i]->stats.dequeued_count;
+ stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+ stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+ }
+}
+
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp)
+{
+ int i;
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+ memset(&qp[i]->stats, 0, sizeof(qp[i]->stats));
+ }
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 59785865b0..57fe0a93a3 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -24,6 +24,13 @@ enum bcmfs_queue_type {
BCMFS_RM_CPLQ
};
+#define BCMFS_QP_IOBASE_XLATE(base, idx) \
+ ((base) + ((idx) * BCMFS_HW_QUEUE_IO_ADDR_LEN))
+
+/* Max pkts for preprocessing before submitting to h/w qp */
+#define BCMFS_MAX_REQS_BUFF 64
+
+/* qp stats */
struct bcmfs_qp_stats {
/* Count of all operations enqueued */
uint64_t enqueued_count;
@@ -92,6 +99,10 @@ struct bcmfs_qp {
struct bcmfs_qp_stats stats;
/* h/w ops associated with qp */
struct bcmfs_hw_queue_pair_ops *ops;
+ /* bcmfs requests pool */
+ struct rte_mempool *sr_mp;
+ /* a temporary buffer to keep message pointers */
+ struct bcmfs_qp_message *infl_msgs[BCMFS_MAX_REQS_BUFF];
} __rte_cache_aligned;
@@ -123,4 +134,9 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
uint16_t queue_pair_id,
struct bcmfs_qp_config *bcmfs_conf);
+/* stats functions */
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats);
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp);
+
#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
new file mode 100644
index 0000000000..0f96915f70
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_req.h"
+
+uint8_t cryptodev_bcmfs_driver_id;
+
+static int bcmfs_sym_qp_release(struct rte_cryptodev *dev,
+ uint16_t queue_pair_id);
+
+static int
+bcmfs_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused struct rte_cryptodev_config *config)
+{
+ return 0;
+}
+
+static int
+bcmfs_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+static int
+bcmfs_sym_dev_close(struct rte_cryptodev *dev)
+{
+ int i, ret;
+
+ for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+ ret = bcmfs_sym_qp_release(dev, i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_info *dev_info)
+{
+ struct bcmfs_sym_dev_private *internals = dev->data->dev_private;
+ struct bcmfs_device *fsdev = internals->fsdev;
+
+ if (dev_info != NULL) {
+ dev_info->driver_id = cryptodev_bcmfs_driver_id;
+ dev_info->feature_flags = dev->feature_flags;
+ dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
+ /* No limit of number of sessions */
+ dev_info->sym.max_nb_sessions = 0;
+ }
+}
+
+static void
+bcmfs_sym_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+ struct bcmfs_qp_stats bcmfs_stats = {0};
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (stats == NULL || dev == NULL) {
+ BCMFS_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_get(fsdev->qps_in_use, fsdev->max_hw_qps, &bcmfs_stats);
+
+ stats->enqueued_count = bcmfs_stats.enqueued_count;
+ stats->dequeued_count = bcmfs_stats.dequeued_count;
+ stats->enqueue_err_count = bcmfs_stats.enqueue_err_count;
+ stats->dequeue_err_count = bcmfs_stats.dequeue_err_count;
+}
+
+static void
+bcmfs_sym_stats_reset(struct rte_cryptodev *dev)
+{
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (dev == NULL) {
+ BCMFS_LOG(ERR, "invalid cryptodev ptr %p", dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_reset(fsdev->qps_in_use, fsdev->max_hw_qps);
+}
+
+static int
+bcmfs_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+ struct bcmfs_sym_dev_private *bcmfs_private = dev->data->dev_private;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)
+ (dev->data->queue_pairs[queue_pair_id]);
+
+ BCMFS_LOG(DEBUG, "Release sym qp %u on device %d",
+ queue_pair_id, dev->data->dev_id);
+
+ rte_mempool_free(qp->sr_mp);
+
+ bcmfs_private->fsdev->qps_in_use[queue_pair_id] = NULL;
+
+ return bcmfs_qp_release((struct bcmfs_qp **)
+ &dev->data->queue_pairs[queue_pair_id]);
+}
+
+static void
+spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
+{
+ memset(sr, 0, sizeof(*sr));
+}
+
+static void
+req_pool_obj_init(__rte_unused struct rte_mempool *mp,
+ __rte_unused void *opaque, void *obj,
+ __rte_unused unsigned int obj_idx)
+{
+ spu_req_init(obj, rte_mempool_virt2iova(obj));
+}
+
+static struct rte_mempool *
+bcmfs_sym_req_pool_create(struct rte_cryptodev *cdev __rte_unused,
+ uint32_t nobjs, uint16_t qp_id,
+ int socket_id)
+{
+ char softreq_pool_name[RTE_RING_NAMESIZE];
+ struct rte_mempool *mp;
+
+ snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_%d",
+ "bcm_sym", qp_id);
+
+ mp = rte_mempool_create(softreq_pool_name,
+ RTE_ALIGN_MUL_CEIL(nobjs, 64),
+ sizeof(struct bcmfs_sym_request),
+ 64, 0, NULL, NULL, req_pool_obj_init, NULL,
+ socket_id, 0);
+ if (mp == NULL)
+ BCMFS_LOG(ERR, "Failed to create req pool, qid %d, err %d",
+ qp_id, rte_errno);
+
+ return mp;
+}
+
+static int
+bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
+ const struct rte_cryptodev_qp_conf *qp_conf,
+ int socket_id)
+{
+ int ret = 0;
+ struct bcmfs_qp *qp = NULL;
+ struct bcmfs_qp_config bcmfs_qp_conf;
+
+ struct bcmfs_qp **qp_addr =
+ (struct bcmfs_qp **)&cdev->data->queue_pairs[qp_id];
+ struct bcmfs_sym_dev_private *bcmfs_private = cdev->data->dev_private;
+ struct bcmfs_device *fsdev = bcmfs_private->fsdev;
+
+
+ /* If qp is already in use free ring memory and qp metadata. */
+ if (*qp_addr != NULL) {
+ ret = bcmfs_sym_qp_release(cdev, qp_id);
+ if (ret < 0)
+ return ret;
+ }
+
+ if (qp_id >= fsdev->max_hw_qps) {
+ BCMFS_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+ return -EINVAL;
+ }
+
+ bcmfs_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+ bcmfs_qp_conf.socket_id = socket_id;
+ bcmfs_qp_conf.max_descs_req = BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ;
+ bcmfs_qp_conf.iobase = BCMFS_QP_IOBASE_XLATE(fsdev->mmap_addr, qp_id);
+ bcmfs_qp_conf.ops = fsdev->sym_hw_qp_ops;
+
+ ret = bcmfs_qp_setup(qp_addr, qp_id, &bcmfs_qp_conf);
+ if (ret != 0)
+ return ret;
+
+ qp = (struct bcmfs_qp *)*qp_addr;
+
+ qp->sr_mp = bcmfs_sym_req_pool_create(cdev, qp_conf->nb_descriptors,
+ qp_id, socket_id);
+ if (qp->sr_mp == NULL)
+ return -ENOMEM;
+
+ /* store a link to the qp in the bcmfs_device */
+ bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;
+
+ cdev->data->queue_pairs[qp_id] = qp;
+ BCMFS_LOG(NOTICE, "queue %d setup done", qp_id);
+
+ return 0;
+}
+
+static struct rte_cryptodev_ops crypto_bcmfs_ops = {
+ /* Device related operations */
+ .dev_configure = bcmfs_sym_dev_config,
+ .dev_start = bcmfs_sym_dev_start,
+ .dev_stop = bcmfs_sym_dev_stop,
+ .dev_close = bcmfs_sym_dev_close,
+ .dev_infos_get = bcmfs_sym_dev_info_get,
+ /* Stats Collection */
+ .stats_get = bcmfs_sym_stats_get,
+ .stats_reset = bcmfs_sym_stats_reset,
+ /* Queue-Pair management */
+ .queue_pair_setup = bcmfs_sym_qp_setup,
+ .queue_pair_release = bcmfs_sym_qp_release,
+};
+
+/** Enqueue burst */
+static uint16_t
+bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i, j;
+ uint16_t enq = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
+
+ if (nb_ops == 0)
+ return 0;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ /* We do not process more than available space */
+ if (nb_ops > (qp->nb_descriptors - qp->nb_pending_requests))
+ nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
+
+ for (i = 0; i < nb_ops; i++) {
+ if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
+ goto enqueue_err;
+
+ /* save rte_crypto_op */
+ sreq->op = ops[i];
+
+ /* save context */
+ qp->infl_msgs[i] = &sreq->msgs;
+ qp->infl_msgs[i]->ctx = (void *)sreq;
+ }
+ /* Send burst request to hw QP */
+ enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
+
+ for (j = enq; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+
+enqueue_err:
+ for (j = 0; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+}
+
+static uint16_t
+bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i;
+ uint16_t deq = 0;
+ unsigned int pkts = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = queue_pair;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ deq = bcmfs_dequeue_op_burst(qp, (void **)qp->infl_msgs, nb_ops);
+ /* get rte_crypto_ops */
+ for (i = 0; i < deq; i++) {
+ sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+
+ ops[pkts++] = sreq->op;
+
+ rte_mempool_put(qp->sr_mp, sreq);
+ }
+
+ return pkts;
+}
+
+/*
+ * An rte_driver is needed in the registration of both the
+ * device and the driver with cryptodev.
+ */
+static const char bcmfs_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_BCMFS_SYM_PMD);
+static const struct rte_driver cryptodev_bcmfs_sym_driver = {
+ .name = bcmfs_sym_drv_name,
+ .alias = bcmfs_sym_drv_name
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .socket_id = rte_socket_id(),
+ .private_data_size = sizeof(struct bcmfs_sym_dev_private)
+ };
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct rte_cryptodev *cryptodev;
+ struct bcmfs_sym_dev_private *internals;
+
+ snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
+ fsdev->name, "sym");
+
+ /* Populate subset device to use in cryptodev device creation */
+ fsdev->sym_rte_dev.driver = &cryptodev_bcmfs_sym_driver;
+ fsdev->sym_rte_dev.numa_node = 0;
+ fsdev->sym_rte_dev.devargs = NULL;
+
+ cryptodev = rte_cryptodev_pmd_create(name,
+ &fsdev->sym_rte_dev,
+ &init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ fsdev->sym_rte_dev.name = cryptodev->data->name;
+ cryptodev->driver_id = cryptodev_bcmfs_driver_id;
+ cryptodev->dev_ops = &crypto_bcmfs_ops;
+
+ cryptodev->enqueue_burst = bcmfs_sym_pmd_enqueue_op_burst;
+ cryptodev->dequeue_burst = bcmfs_sym_pmd_dequeue_op_burst;
+
+ cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+
+ internals = cryptodev->data->dev_private;
+ internals->fsdev = fsdev;
+ fsdev->sym_dev = internals;
+
+ internals->sym_dev_id = cryptodev->data->dev_id;
+
+ BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
+ cryptodev->data->name, internals->sym_dev_id);
+ return 0;
+}
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev *cryptodev;
+
+ if (fsdev == NULL)
+ return -ENODEV;
+ if (fsdev->sym_dev == NULL)
+ return 0;
+
+ /* free crypto device */
+ cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id);
+ rte_cryptodev_pmd_destroy(cryptodev);
+ fsdev->sym_rte_dev.name = NULL;
+ fsdev->sym_dev = NULL;
+
+ return 0;
+}
+
+static struct cryptodev_driver bcmfs_crypto_drv;
+RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv,
+ cryptodev_bcmfs_sym_driver,
+ cryptodev_bcmfs_driver_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
new file mode 100644
index 0000000000..65d7046090
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_PMD_H_
+#define _BCMFS_SYM_PMD_H_
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_device.h"
+
+#define CRYPTODEV_NAME_BCMFS_SYM_PMD crypto_bcmfs
+
+#define BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ 16
+
+extern uint8_t cryptodev_bcmfs_driver_id;
+
+/** Private data structure for a BCMFS device.
+ * This BCMFS device offers only a symmetric crypto service;
+ * there can be one of these on each bcmfs_pci_device (VF).
+ */
+struct bcmfs_sym_dev_private {
+ /* The bcmfs device hosting the service */
+ struct bcmfs_device *fsdev;
+ /* Device instance for this rte_cryptodev */
+ uint8_t sym_dev_id;
+ /* BCMFS device symmetric crypto capabilities */
+ const struct rte_cryptodev_capabilities *fsdev_capabilities;
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fdev);
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fdev);
+
+#endif /* _BCMFS_SYM_PMD_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
new file mode 100644
index 0000000000..0f0b051f1e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_REQ_H_
+#define _BCMFS_SYM_REQ_H_
+
+#include "bcmfs_dev_msg.h"
+
+/*
+ * This structure holds the supporting data required to process an
+ * rte_crypto_op
+ */
+struct bcmfs_sym_request {
+ /* bcmfs qp message for h/w queues to process */
+ struct bcmfs_qp_message msgs;
+ /* crypto op */
+ struct rte_crypto_op *op;
+};
+
+#endif /* _BCMFS_SYM_REQ_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index cd58bd5e25..d9a3d73e99 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -11,5 +11,6 @@ sources = files(
'bcmfs_qp.c',
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
- 'hw/bcmfs_rm_common.c'
+ 'hw/bcmfs_rm_common.c',
+ 'bcmfs_sym_pmd.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v3 6/8] crypto/bcmfs: add session handling and capabilities
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
` (4 preceding siblings ...)
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
` (2 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add session handling and capabilities supported by crypto h/w
accelerator
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
doc/guides/cryptodevs/bcmfs.rst | 47 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 ++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 34 +
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 13 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 282 +++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 109 +++
drivers/crypto/bcmfs/meson.build | 4 +-
9 files changed, 1324 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index dc21bf60cc..aaa6e1af70 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -15,6 +15,47 @@ Supported Broadcom SoCs
* Stingray
* Stingray2
+Features
+--------
+
+The BCMFS SYM PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_3DES_CTR``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CTR``
+* ``RTE_CRYPTO_CIPHER_AES192_CTR``
+* ``RTE_CRYPTO_CIPHER_AES256_CTR``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_XCBC_MAC``
+* ``RTE_CRYPTO_AUTH_AES_CBC_MAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_GMAC``
+* ``RTE_CRYPTO_AUTH_AES_CMAC``
+
+Supported AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+* ``RTE_CRYPTO_AEAD_AES_CCM``
+
Installation
------------
Information about kernel, rootfs and toolchain can be found at
@@ -49,3 +90,9 @@ For example, below commands can be run to get hold of a device node by VFIO.
io_device_name="vfio-platform"
echo $io_device_name > /sys/bus/platform/devices/${SETUP_SYSFS_DEV_NAME}/driver_override
echo ${SETUP_SYSFS_DEV_NAME} > /sys/bus/platform/drivers_probe
+
+Limitations
+-----------
+
+* Only the session-oriented API is supported (session-less APIs are not supported).
+* CCM is not supported on Broadcom's SoCs having the FlexSparc4 unit.
diff --git a/doc/guides/cryptodevs/features/bcmfs.ini b/doc/guides/cryptodevs/features/bcmfs.ini
new file mode 100644
index 0000000000..6a718856b9
--- /dev/null
+++ b/doc/guides/cryptodevs/features/bcmfs.ini
@@ -0,0 +1,56 @@
+;
+; Supported features of the 'bcmfs' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto = Y
+Sym operation chaining = Y
+HW Accelerated = Y
+Protocol offload = Y
+OOP LB In LB Out = Y
+
+;
+; Supported crypto algorithms of the 'bcmfs' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+AES CTR (128) = Y
+AES CTR (192) = Y
+AES CTR (256) = Y
+AES XTS (128) = Y
+AES XTS (256) = Y
+3DES CBC = Y
+DES CBC = Y
+;
+; Supported authentication algorithms of the 'bcmfs' crypto driver.
+;
+[Auth]
+MD5 HMAC = Y
+SHA1 = Y
+SHA1 HMAC = Y
+SHA224 = Y
+SHA224 HMAC = Y
+SHA256 = Y
+SHA256 HMAC = Y
+SHA384 = Y
+SHA384 HMAC = Y
+SHA512 = Y
+SHA512 HMAC = Y
+AES GMAC = Y
+AES CMAC (128) = Y
+AES CBC MAC = Y
+AES XCBC MAC = Y
+
+;
+; Supported AEAD algorithms of the 'bcmfs' crypto driver.
+;
+[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
+AES CCM (128) = Y
+AES CCM (192) = Y
+AES CCM (256) = Y
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
new file mode 100644
index 0000000000..afed7696a6
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
@@ -0,0 +1,764 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_sym_capabilities.h"
+
+static const struct rte_cryptodev_capabilities bcmfs_sym_capabilities[] = {
+ {
+ /* SHA1 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224,
+ .block_size = 144,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256,
+ .block_size = 136,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384,
+ .block_size = 104,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512,
+ .block_size = 72,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224_HMAC,
+ .block_size = 144,
+ .key_size = {
+ .min = 1,
+ .max = 144,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256_HMAC,
+ .block_size = 136,
+ .key_size = {
+ .min = 1,
+ .max = 136,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384_HMAC,
+ .block_size = 104,
+ .key_size = {
+ .min = 1,
+ .max = 104,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512_HMAC,
+ .block_size = 72,
+ .key_size = {
+ .min = 1,
+ .max = 72,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES XCBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES GMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_ECB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CTR */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CTR,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES XTS */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_XTS,
+ .block_size = 16,
+ .key_size = {
+ .min = 32,
+ .max = 64,
+ .increment = 32
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_CCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 16,
+ .increment = 2
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 7,
+ .max = 13,
+ .increment = 1
+ },
+ }, }
+ }, }
+ },
+
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+const struct rte_cryptodev_capabilities *
+bcmfs_sym_get_capabilities(void)
+{
+ return bcmfs_sym_capabilities;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
new file mode 100644
index 0000000000..3ff61b7d29
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_CAPABILITIES_H_
+#define _BCMFS_SYM_CAPABILITIES_H_
+
+/*
+ * Get the capabilities list for the device
+ */
+const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void);
+
+#endif /* _BCMFS_SYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
new file mode 100644
index 0000000000..aea1f281e4
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_DEFS_H_
+#define _BCMFS_SYM_DEFS_H_
+
+/*
+ * Maximum block size across the supported hash algorithms;
+ * currently SHA3 has the largest block size, 144 bytes.
+ */
+#define BCMFS_MAX_KEY_SIZE 144
+#define BCMFS_MAX_IV_SIZE 16
+#define BCMFS_MAX_DIGEST_SIZE 64
+
+struct bcmfs_sym_session;
+struct bcmfs_sym_request;
+
+/** Crypto request processing successful. */
+#define BCMFS_SYM_RESPONSE_SUCCESS (0)
+/** Crypto request processing protocol failure. */
+#define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1)
+/** Crypto request processing completion failure. */
+#define BCMFS_SYM_RESPONSE_COMPL_ERROR (2)
+/** Crypto request processing hash tag check error. */
+#define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3)
+
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+#endif /* _BCMFS_SYM_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 0f96915f70..381ca8ea48 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -14,6 +14,8 @@
#include "bcmfs_qp.h"
#include "bcmfs_sym_pmd.h"
#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+#include "bcmfs_sym_capabilities.h"
uint8_t cryptodev_bcmfs_driver_id;
@@ -65,6 +67,7 @@ bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
/* No limit of number of sessions */
dev_info->sym.max_nb_sessions = 0;
+ dev_info->capabilities = bcmfs_sym_get_capabilities();
}
}
@@ -228,6 +231,10 @@ static struct rte_cryptodev_ops crypto_bcmfs_ops = {
/* Queue-Pair management */
.queue_pair_setup = bcmfs_sym_qp_setup,
.queue_pair_release = bcmfs_sym_qp_release,
+ /* Crypto session related operations */
+ .sym_session_get_size = bcmfs_sym_session_get_private_size,
+ .sym_session_configure = bcmfs_sym_session_configure,
+ .sym_session_clear = bcmfs_sym_session_clear
};
/** Enqueue burst */
@@ -239,6 +246,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
int i, j;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
+ struct bcmfs_sym_session *sess;
struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
if (nb_ops == 0)
@@ -252,6 +260,10 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
for (i = 0; i < nb_ops; i++) {
+ sess = bcmfs_sym_get_session(ops[i]);
+ if (unlikely(sess == NULL))
+ goto enqueue_err;
+
if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
goto enqueue_err;
@@ -356,6 +368,7 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
fsdev->sym_dev = internals;
internals->sym_dev_id = cryptodev->data->dev_id;
+ internals->fsdev_capabilities = bcmfs_sym_get_capabilities();
BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
cryptodev->data->name, internals->sym_dev_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
new file mode 100644
index 0000000000..675ed0ad55
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_crypto.h>
+#include <rte_crypto_sym.h>
+#include <rte_log.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_session.h"
+
+/** Determine the chain order from a crypto xform chain */
+static enum bcmfs_sym_chain_order
+crypto_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+ enum bcmfs_sym_chain_order res = BCMFS_SYM_CHAIN_NOT_SUPPORTED;
+
+ if (xform != NULL) {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ res = BCMFS_SYM_CHAIN_AEAD;
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_AUTH;
+ else if (xform->next->type ==
+ RTE_CRYPTO_SYM_XFORM_CIPHER)
+ res = BCMFS_SYM_CHAIN_AUTH_CIPHER;
+ }
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_CIPHER;
+ else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ res = BCMFS_SYM_CHAIN_CIPHER_AUTH;
+ }
+ }
+
+ return res;
+}
+
+/* Copy the input key into the session key buffer */
+static void
+get_key(const uint8_t *input_key, int keylen, uint8_t *session_key)
+{
+ memcpy(session_key, input_key, keylen);
+}
+
+/* Set session cipher parameters */
+static int
+crypto_set_session_cipher_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_cipher_xform *cipher_xform)
+{
+ if (cipher_xform->key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->cipher.key.length = cipher_xform->key.length;
+ sess->cipher.iv.offset = cipher_xform->iv.offset;
+ sess->cipher.iv.length = cipher_xform->iv.length;
+ sess->cipher.op = cipher_xform->op;
+ sess->cipher.algo = cipher_xform->algo;
+
+ get_key(cipher_xform->key.data,
+ sess->cipher.key.length,
+ sess->cipher.key.data);
+
+ return 0;
+}
+
+/* Set session auth parameters */
+static int
+crypto_set_session_auth_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_auth_xform *auth_xform)
+{
+ if (auth_xform->key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->auth.op = auth_xform->op;
+ sess->auth.key.length = auth_xform->key.length;
+ sess->auth.digest_length = auth_xform->digest_length;
+ sess->auth.iv.length = auth_xform->iv.length;
+ sess->auth.iv.offset = auth_xform->iv.offset;
+ sess->auth.algo = auth_xform->algo;
+
+ get_key(auth_xform->key.data,
+ auth_xform->key.length,
+ sess->auth.key.data);
+
+ return 0;
+}
+
+/* Set session aead parameters */
+static int
+crypto_set_session_aead_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_sym_xform *aead_xform)
+{
+ if (aead_xform->aead.key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->aead.iv.offset = aead_xform->aead.iv.offset;
+ sess->aead.iv.length = aead_xform->aead.iv.length;
+ sess->aead.aad_length = aead_xform->aead.aad_length;
+ sess->aead.key.length = aead_xform->aead.key.length;
+ sess->aead.digest_length = aead_xform->aead.digest_length;
+ sess->aead.op = aead_xform->aead.op;
+ sess->aead.algo = aead_xform->aead.algo;
+
+ get_key(aead_xform->aead.key.data,
+ aead_xform->aead.key.length,
+ sess->aead.key.data);
+
+ return 0;
+}
+
+static struct rte_crypto_auth_xform *
+crypto_get_auth_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ return &xform->auth;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+crypto_get_cipher_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return &xform->cipher;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+static int
+crypto_set_session_parameters(struct bcmfs_sym_session *sess,
+ struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+ struct rte_crypto_cipher_xform *cipher_xform =
+ crypto_get_cipher_xform(xform);
+ struct rte_crypto_auth_xform *auth_xform =
+ crypto_get_auth_xform(xform);
+
+ sess->chain_order = crypto_get_chain_order(xform);
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ if (crypto_set_session_auth_parameters(sess, auth_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ sess->cipher_first = false;
+ if (crypto_set_session_auth_parameters(sess, auth_xform)) {
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ sess->cipher_first = true;
+ if (crypto_set_session_auth_parameters(sess, auth_xform)) {
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ if (crypto_set_session_aead_parameters(sess, xform))
+ rc = -EINVAL;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid chain order");
+ rc = -EINVAL;
+ break;
+ }
+
+error:
+ return rc;
+}
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op)
+{
+ struct bcmfs_sym_session *sess = NULL;
+
+ if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
+ BCMFS_DP_LOG(ERR, "operation op(%p) is sessionless", op);
+ } else if (likely(op->sym->session != NULL)) {
+ /* get existing session */
+ sess = (struct bcmfs_sym_session *)
+ get_sym_session_private_data(op->sym->session,
+ cryptodev_bcmfs_driver_id);
+ }
+
+ if (sess == NULL)
+ op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+
+ return sess;
+}
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ int ret;
+
+ if (unlikely(sess == NULL)) {
+ BCMFS_DP_LOG(ERR, "Invalid session struct");
+ return -EINVAL;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ BCMFS_DP_LOG(ERR,
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ ret = crypto_set_session_parameters(sess_private_data, xform);
+
+ if (ret != 0) {
+ BCMFS_DP_LOG(ERR, "Failed to configure session parameters");
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sym_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
+}
+
+/* Clear the memory of session so it doesn't leave key material behind */
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t index = dev->driver_id;
+ void *sess_priv = get_sym_session_private_data(sess, index);
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp;
+
+ memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
+ sess_mp = rte_mempool_from_obj(sess_priv);
+
+ set_sym_session_private_data(sess, index, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+}
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct bcmfs_sym_session);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
new file mode 100644
index 0000000000..8240c6fc25
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_SESSION_H_
+#define _BCMFS_SYM_SESSION_H_
+
+#include <stdbool.h>
+#include <rte_crypto.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* BCMFS_SYM operation order mode enumerator */
+enum bcmfs_sym_chain_order {
+ BCMFS_SYM_CHAIN_ONLY_CIPHER,
+ BCMFS_SYM_CHAIN_ONLY_AUTH,
+ BCMFS_SYM_CHAIN_CIPHER_AUTH,
+ BCMFS_SYM_CHAIN_AUTH_CIPHER,
+ BCMFS_SYM_CHAIN_AEAD,
+ BCMFS_SYM_CHAIN_NOT_SUPPORTED
+};
+
+/* BCMFS_SYM crypto private session structure */
+struct bcmfs_sym_session {
+ enum bcmfs_sym_chain_order chain_order;
+
+ /* Cipher Parameters */
+ struct {
+ enum rte_crypto_cipher_operation op;
+ /* Cipher operation */
+ enum rte_crypto_cipher_algorithm algo;
+ /* Cipher algorithm */
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+ } cipher;
+
+ /* Authentication Parameters */
+ struct {
+ enum rte_crypto_auth_operation op;
+ /* Auth operation */
+ enum rte_crypto_auth_algorithm algo;
+ /* Auth algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+ } auth;
+
+ /* Aead Parameters */
+ struct {
+ enum rte_crypto_aead_operation op;
+ /* AEAD operation */
+ enum rte_crypto_aead_algorithm algo;
+ /* AEAD algorithm */
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+
+ uint16_t aad_length;
+ } aead;
+
+ bool cipher_first;
+} __rte_cache_aligned;
+
+int
+bcmfs_process_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool);
+
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess);
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op);
+
+#endif /* _BCMFS_SYM_SESSION_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index d9a3d73e99..2e86c733e1 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -12,5 +12,7 @@ sources = files(
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
'hw/bcmfs_rm_common.c',
- 'bcmfs_sym_pmd.c'
+ 'bcmfs_sym_pmd.c',
+ 'bcmfs_sym_capabilities.c',
+ 'bcmfs_sym_session.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 7/8] crypto/bcmfs: add crypto h/w module
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
` (5 preceding siblings ...)
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add a crypto h/w module to process crypto ops. Each crypto op is processed by the
sym_engine module before the resulting request is submitted to the h/w queues.
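The request-building code below fills `struct fsattr` descriptors through the `fsattr_va`/`fsattr_pa`/`fsattr_sz` accessors defined in bcmfs_sym_defs.h, which is not part of this patch. As a rough, self-contained sketch of that pattern, assuming a hypothetical minimal descriptor layout (the real struct may differ):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical minimal layout of the buffer descriptor; the real
 * struct fsattr lives in bcmfs_sym_defs.h and may differ.
 */
struct fsattr {
	void *va;      /* virtual address of the buffer */
	uint64_t pa;   /* IOVA/physical address for the DMA engine */
	size_t sz;     /* length in bytes */
};

/* Accessors used as lvalues throughout the driver code */
#define fsattr_va(__ptr) ((__ptr)->va)
#define fsattr_pa(__ptr) ((__ptr)->pa)
#define fsattr_sz(__ptr) ((__ptr)->sz)

/* Example: describe an IV sitting in some buffer */
static size_t describe_iv(uint8_t *buf, uint64_t iova, size_t len)
{
	struct fsattr iv;

	fsattr_va(&iv) = buf;
	fsattr_pa(&iv) = iova;
	fsattr_sz(&iv) = len;

	return fsattr_sz(&iv);
}
```

Each (va, pa, sz) triple describes one buffer (key, IV, payload, digest) that is later chained into the source/destination lists of a hardware request.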
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_sym.c | 289 ++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 1155 +++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 115 +++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 26 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 40 +
drivers/crypto/bcmfs/meson.build | 4 +-
6 files changed, 1628 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
diff --git a/drivers/crypto/bcmfs/bcmfs_sym.c b/drivers/crypto/bcmfs/bcmfs_sym.c
new file mode 100644
index 0000000000..2d164a1ec8
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym.c
@@ -0,0 +1,289 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+
+#include <rte_byteorder.h>
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_engine.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+
+/** Process cipher operation */
+static int
+process_crypto_cipher_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv, key;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+ fsattr_pa(&dst) = rte_pktmbuf_iova(mbuf_dst);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&key) = sess->cipher.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->cipher.key.length;
+
+ rc = bcmfs_crypto_build_cipher_req(req, sess->cipher.algo,
+ sess->cipher.op, &src,
+ &dst, &key, &iv);
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process auth operation */
+static int
+process_crypto_auth_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, mac, key, iv;
+
+ fsattr_sz(&src) = op->sym->auth.data.length;
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ op->sym->auth.data.offset);
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+
+ if (!sess->auth.op) {
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+ } else {
+ fsattr_va(&dst) = op->sym->auth.digest.data;
+ fsattr_pa(&dst) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&dst) = sess->auth.digest_length;
+ }
+
+ fsattr_va(&key) = sess->auth.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->auth.key.length;
+
+ /* AES-GMAC uses the AES-GCM authenticator, which requires an IV */
+ if (sess->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->auth.iv.offset);
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->auth.iv.length;
+ } else {
+ fsattr_va(&iv) = NULL;
+ fsattr_sz(&iv) = 0;
+ }
+
+ rc = bcmfs_crypto_build_auth_req(req, sess->auth.algo,
+ sess->auth.op,
+ &src,
+ (sess->auth.op) ? (&dst) : NULL,
+ (sess->auth.op) ? NULL : (&mac),
+ &key, &iv);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process combined/chained mode operation */
+static int
+process_crypto_combined_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0, aad_size = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->cipher.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ aad_size = sym_op->auth.data.length - sym_op->cipher.data.length;
+
+ if (aad_size > 0) {
+ fsattr_sz(&aad) = aad_size;
+ fsattr_va(&aad) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->auth.data.offset);
+ fsattr_pa(&aad) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->auth.data.offset);
+ }
+
+ rc = bcmfs_crypto_build_chain_request(req, sess->cipher.algo,
+ sess->cipher.op,
+ sess->auth.algo,
+ sess->auth.op,
+ &src, &dst, &cipher_key,
+ &auth_key, &iv,
+ (aad_size > 0) ? (&aad) : NULL,
+ &mac, sess->cipher_first);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process AEAD operation */
+static int
+process_crypto_aead_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr key, aad, mac;
+
+ fsattr_sz(&src) = sym_op->aead.data.length;
+ fsattr_sz(&dst) = sym_op->aead.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset(mbuf_dst,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->aead.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->aead.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->aead.iv.length;
+
+ fsattr_va(&key) = sess->aead.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->aead.key.length;
+
+ fsattr_va(&mac) = op->sym->aead.digest.data;
+ fsattr_pa(&mac) = op->sym->aead.digest.phys_addr;
+ fsattr_sz(&mac) = sess->aead.digest_length;
+
+ fsattr_va(&aad) = op->sym->aead.aad.data;
+ fsattr_pa(&aad) = op->sym->aead.aad.phys_addr;
+ fsattr_sz(&aad) = sess->aead.aad_length;
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->aead.algo,
+ sess->aead.op, &src, &dst,
+ &key, &iv, &aad, &mac);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process crypto operation for mbuf */
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ struct rte_mbuf *msrc, *mdst;
+ int rc = 0;
+
+ msrc = op->sym->m_src;
+ mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ rc = process_crypto_cipher_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ rc = process_crypto_auth_op(op, msrc, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ rc = process_crypto_combined_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ rc = process_crypto_aead_op(op, msrc, mdst, sess, req);
+ break;
+ default:
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ break;
+ }
+
+ return rc;
+}
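In process_crypto_combined_op() above, the AAD length is derived as the part of the auth region not covered by the cipher region. A standalone sketch of that computation (illustrative names only; a result of zero or less means no separate AAD buffer is attached to the request):

```c
#include <stdint.h>

/* For chained cipher+auth ops, the driver treats the leading bytes of
 * the auth region that the cipher region does not cover as AAD:
 * aad_size = auth.data.length - cipher.data.length.
 */
static int chain_aad_size(uint32_t auth_len, uint32_t cipher_len)
{
	return (int)auth_len - (int)cipher_len;
}
```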
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
new file mode 100644
index 0000000000..537bfbec8b
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
@@ -0,0 +1,1155 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_crypto_sym.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_engine.h"
+
+enum spu2_cipher_type {
+ SPU2_CIPHER_TYPE_NONE = 0x0,
+ SPU2_CIPHER_TYPE_AES128 = 0x1,
+ SPU2_CIPHER_TYPE_AES192 = 0x2,
+ SPU2_CIPHER_TYPE_AES256 = 0x3,
+ SPU2_CIPHER_TYPE_DES = 0x4,
+ SPU2_CIPHER_TYPE_3DES = 0x5,
+ SPU2_CIPHER_TYPE_LAST
+};
+
+enum spu2_cipher_mode {
+ SPU2_CIPHER_MODE_ECB = 0x0,
+ SPU2_CIPHER_MODE_CBC = 0x1,
+ SPU2_CIPHER_MODE_CTR = 0x2,
+ SPU2_CIPHER_MODE_CFB = 0x3,
+ SPU2_CIPHER_MODE_OFB = 0x4,
+ SPU2_CIPHER_MODE_XTS = 0x5,
+ SPU2_CIPHER_MODE_CCM = 0x6,
+ SPU2_CIPHER_MODE_GCM = 0x7,
+ SPU2_CIPHER_MODE_LAST
+};
+
+enum spu2_hash_type {
+ SPU2_HASH_TYPE_NONE = 0x0,
+ SPU2_HASH_TYPE_AES128 = 0x1,
+ SPU2_HASH_TYPE_AES192 = 0x2,
+ SPU2_HASH_TYPE_AES256 = 0x3,
+ SPU2_HASH_TYPE_MD5 = 0x6,
+ SPU2_HASH_TYPE_SHA1 = 0x7,
+ SPU2_HASH_TYPE_SHA224 = 0x8,
+ SPU2_HASH_TYPE_SHA256 = 0x9,
+ SPU2_HASH_TYPE_SHA384 = 0xa,
+ SPU2_HASH_TYPE_SHA512 = 0xb,
+ SPU2_HASH_TYPE_SHA512_224 = 0xc,
+ SPU2_HASH_TYPE_SHA512_256 = 0xd,
+ SPU2_HASH_TYPE_SHA3_224 = 0xe,
+ SPU2_HASH_TYPE_SHA3_256 = 0xf,
+ SPU2_HASH_TYPE_SHA3_384 = 0x10,
+ SPU2_HASH_TYPE_SHA3_512 = 0x11,
+ SPU2_HASH_TYPE_LAST
+};
+
+enum spu2_hash_mode {
+ SPU2_HASH_MODE_CMAC = 0x0,
+ SPU2_HASH_MODE_CBC_MAC = 0x1,
+ SPU2_HASH_MODE_XCBC_MAC = 0x2,
+ SPU2_HASH_MODE_HMAC = 0x3,
+ SPU2_HASH_MODE_RABIN = 0x4,
+ SPU2_HASH_MODE_CCM = 0x5,
+ SPU2_HASH_MODE_GCM = 0x6,
+ SPU2_HASH_MODE_RESERVED = 0x7,
+ SPU2_HASH_MODE_LAST
+};
+
+enum spu2_proto_sel {
+ SPU2_PROTO_RESV = 0,
+ SPU2_MACSEC_SECTAG8_ECB = 1,
+ SPU2_MACSEC_SECTAG8_SCB = 2,
+ SPU2_MACSEC_SECTAG16 = 3,
+ SPU2_MACSEC_SECTAG16_8_XPN = 4,
+ SPU2_IPSEC = 5,
+ SPU2_IPSEC_ESN = 6,
+ SPU2_TLS_CIPHER = 7,
+ SPU2_TLS_AEAD = 8,
+ SPU2_DTLS_CIPHER = 9,
+ SPU2_DTLS_AEAD = 10
+};
+
+/* SPU2 response size */
+#define SPU2_STATUS_LEN 2
+
+/* Metadata settings in response */
+enum spu2_ret_md_opts {
+ SPU2_RET_NO_MD = 0, /* return no metadata */
+ SPU2_RET_FMD_OMD = 1, /* return both FMD and OMD */
+ SPU2_RET_FMD_ONLY = 2, /* return only FMD */
+ SPU2_RET_FMD_OMD_IV = 3, /* return FMD and OMD with just IVs */
+};
+
+/* FMD ctrl0 field masks */
+#define SPU2_CIPH_ENCRYPT_EN 0x1 /* 0: decrypt, 1: encrypt */
+#define SPU2_CIPH_TYPE_SHIFT 4
+#define SPU2_CIPH_MODE 0xF00 /* one of spu2_cipher_mode */
+#define SPU2_CIPH_MODE_SHIFT 8
+#define SPU2_CFB_MASK 0x7000 /* cipher feedback mask */
+#define SPU2_CFB_MASK_SHIFT 12
+#define SPU2_PROTO_SEL 0xF00000 /* MACsec, IPsec, TLS... */
+#define SPU2_PROTO_SEL_SHIFT 20
+#define SPU2_HASH_FIRST 0x1000000 /* 1: hash input is input pkt
+ * data
+ */
+#define SPU2_CHK_TAG 0x2000000 /* 1: check digest provided */
+#define SPU2_HASH_TYPE 0x1F0000000 /* one of spu2_hash_type */
+#define SPU2_HASH_TYPE_SHIFT 28
+#define SPU2_HASH_MODE 0xF000000000 /* one of spu2_hash_mode */
+#define SPU2_HASH_MODE_SHIFT 36
+#define SPU2_CIPH_PAD_EN 0x100000000000 /* 1: Add pad to end of payload for
+ * enc
+ */
+#define SPU2_CIPH_PAD 0xFF000000000000 /* cipher pad value */
+#define SPU2_CIPH_PAD_SHIFT 48
+
+/* FMD ctrl1 field masks */
+#define SPU2_TAG_LOC 0x1 /* 1: end of payload, 0: undef */
+#define SPU2_HAS_FR_DATA 0x2 /* 1: msg has frame data */
+#define SPU2_HAS_AAD1 0x4 /* 1: msg has AAD1 field */
+#define SPU2_HAS_NAAD 0x8 /* 1: msg has NAAD field */
+#define SPU2_HAS_AAD2 0x10 /* 1: msg has AAD2 field */
+#define SPU2_HAS_ESN 0x20 /* 1: msg has ESN field */
+#define SPU2_HASH_KEY_LEN 0xFF00 /* len of hash key in bytes.
+ * HMAC only.
+ */
+#define SPU2_HASH_KEY_LEN_SHIFT 8
+#define SPU2_CIPH_KEY_LEN 0xFF00000 /* len of cipher key in bytes */
+#define SPU2_CIPH_KEY_LEN_SHIFT 20
+#define SPU2_GENIV 0x10000000 /* 1: hw generates IV */
+#define SPU2_HASH_IV 0x20000000 /* 1: IV incl in hash */
+#define SPU2_RET_IV 0x40000000 /* 1: return IV in output msg
+ * b4 payload
+ */
+#define SPU2_RET_IV_LEN 0xF00000000 /* length in bytes of IV returned.
+ * 0 = 16 bytes
+ */
+#define SPU2_RET_IV_LEN_SHIFT 32
+#define SPU2_IV_OFFSET 0xF000000000 /* gen IV offset */
+#define SPU2_IV_OFFSET_SHIFT 36
+#define SPU2_IV_LEN 0x1F0000000000 /* length of input IV in bytes */
+#define SPU2_IV_LEN_SHIFT 40
+#define SPU2_HASH_TAG_LEN 0x7F000000000000 /* hash tag length in bytes */
+#define SPU2_HASH_TAG_LEN_SHIFT 48
+#define SPU2_RETURN_MD 0x300000000000000 /* return metadata */
+#define SPU2_RETURN_MD_SHIFT 56
+#define SPU2_RETURN_FD 0x400000000000000
+#define SPU2_RETURN_AAD1 0x800000000000000
+#define SPU2_RETURN_NAAD 0x1000000000000000
+#define SPU2_RETURN_AAD2 0x2000000000000000
+#define SPU2_RETURN_PAY 0x4000000000000000 /* return payload */
+
+/* FMD ctrl2 field masks */
+#define SPU2_AAD1_OFFSET 0xFFF /* byte offset of AAD1 field */
+#define SPU2_AAD1_LEN 0xFF000 /* length of AAD1 in bytes */
+#define SPU2_AAD1_LEN_SHIFT 12
+#define SPU2_AAD2_OFFSET 0xFFF00000 /* byte offset of AAD2 field */
+#define SPU2_AAD2_OFFSET_SHIFT 20
+#define SPU2_PL_OFFSET 0xFFFFFFFF00000000 /* payload offset from AAD2 */
+#define SPU2_PL_OFFSET_SHIFT 32
+
+/* FMD ctrl3 field masks */
+#define SPU2_PL_LEN 0xFFFFFFFF /* payload length in bytes */
+#define SPU2_TLS_LEN 0xFFFF00000000 /* TLS encrypt: cipher len
+ * TLS decrypt: compressed len
+ */
+#define SPU2_TLS_LEN_SHIFT 32
+
+/*
+ * Max value that can be represented in the Payload Length field of the
+ * ctrl3 word of FMD.
+ */
+#define SPU2_MAX_PAYLOAD SPU2_PL_LEN
+
+#define SPU2_VAL_NONE 0
+
+/* CCM B_0 field definitions, common for SPU-M and SPU2 */
+#define CCM_B0_ADATA 0x40
+#define CCM_B0_ADATA_SHIFT 6
+#define CCM_B0_M_PRIME 0x38
+#define CCM_B0_M_PRIME_SHIFT 3
+#define CCM_B0_L_PRIME 0x07
+#define CCM_B0_L_PRIME_SHIFT 0
+#define CCM_ESP_L_VALUE 4
+
+static int
+spu2_cipher_type_xlate(enum rte_crypto_cipher_algorithm cipher_alg,
+ enum spu2_cipher_type *spu2_type,
+ struct fsattr *key)
+{
+ int ret = 0;
+ int key_size = fsattr_sz(key);
+
+ if (cipher_alg == RTE_CRYPTO_CIPHER_AES_XTS)
+ key_size = key_size / 2;
+
+ switch (key_size) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static int
+spu2_hash_xlate(enum rte_crypto_auth_algorithm auth_alg,
+ struct fsattr *key,
+ enum spu2_hash_type *spu2_type,
+ enum spu2_hash_mode *spu2_mode)
+{
+ *spu2_mode = 0;
+
+ switch (auth_alg) {
+ case RTE_CRYPTO_AUTH_NULL:
+ *spu2_type = SPU2_HASH_TYPE_NONE;
+ break;
+ case RTE_CRYPTO_AUTH_MD5:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_XCBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ *spu2_mode = SPU2_HASH_MODE_CMAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_CBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+spu2_cipher_xlate(enum rte_crypto_cipher_algorithm cipher_alg,
+ struct fsattr *key,
+ enum spu2_cipher_type *spu2_type,
+ enum spu2_cipher_mode *spu2_mode)
+{
+ int ret = 0;
+
+ switch (cipher_alg) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ *spu2_type = SPU2_CIPHER_TYPE_NONE;
+ break;
+ case RTE_CRYPTO_CIPHER_DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ *spu2_mode = SPU2_CIPHER_MODE_CTR;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_XTS:
+ *spu2_mode = SPU2_CIPHER_MODE_XTS;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static void
+spu2_fmd_ctrl0_write(struct spu2_fmd *fmd,
+ bool is_inbound, bool auth_first,
+ enum spu2_proto_sel protocol,
+ enum spu2_cipher_type cipher_type,
+ enum spu2_cipher_mode cipher_mode,
+ enum spu2_hash_type auth_type,
+ enum spu2_hash_mode auth_mode)
+{
+ uint64_t ctrl0 = 0;
+
+ if (cipher_type != SPU2_CIPHER_TYPE_NONE && !is_inbound)
+ ctrl0 |= SPU2_CIPH_ENCRYPT_EN;
+
+ ctrl0 |= ((uint64_t)cipher_type << SPU2_CIPH_TYPE_SHIFT) |
+ ((uint64_t)cipher_mode << SPU2_CIPH_MODE_SHIFT);
+
+ if (protocol != SPU2_PROTO_RESV)
+ ctrl0 |= (uint64_t)protocol << SPU2_PROTO_SEL_SHIFT;
+
+ if (auth_first)
+ ctrl0 |= SPU2_HASH_FIRST;
+
+ if (is_inbound && auth_type != SPU2_HASH_TYPE_NONE)
+ ctrl0 |= SPU2_CHK_TAG;
+
+ ctrl0 |= (((uint64_t)auth_type << SPU2_HASH_TYPE_SHIFT) |
+ ((uint64_t)auth_mode << SPU2_HASH_MODE_SHIFT));
+
+ fmd->ctrl0 = ctrl0;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl0:", &fmd->ctrl0, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl1_write(struct spu2_fmd *fmd, bool is_inbound,
+ uint64_t assoc_size, uint64_t auth_key_len,
+ uint64_t cipher_key_len, bool gen_iv, bool hash_iv,
+ bool return_iv, uint64_t ret_iv_len,
+ uint64_t ret_iv_offset, uint64_t cipher_iv_len,
+ uint64_t digest_size, bool return_payload, bool return_md)
+{
+ uint64_t ctrl1 = 0;
+
+ if (is_inbound && digest_size != 0)
+ ctrl1 |= SPU2_TAG_LOC;
+
+ if (assoc_size != 0)
+ ctrl1 |= SPU2_HAS_AAD2;
+
+ if (auth_key_len != 0)
+ ctrl1 |= ((auth_key_len << SPU2_HASH_KEY_LEN_SHIFT) &
+ SPU2_HASH_KEY_LEN);
+
+ if (cipher_key_len != 0)
+ ctrl1 |= ((cipher_key_len << SPU2_CIPH_KEY_LEN_SHIFT) &
+ SPU2_CIPH_KEY_LEN);
+
+ if (gen_iv)
+ ctrl1 |= SPU2_GENIV;
+
+ if (hash_iv)
+ ctrl1 |= SPU2_HASH_IV;
+
+ if (return_iv) {
+ ctrl1 |= SPU2_RET_IV;
+ ctrl1 |= ret_iv_len << SPU2_RET_IV_LEN_SHIFT;
+ ctrl1 |= ret_iv_offset << SPU2_IV_OFFSET_SHIFT;
+ }
+
+ ctrl1 |= ((cipher_iv_len << SPU2_IV_LEN_SHIFT) & SPU2_IV_LEN);
+
+ if (digest_size != 0) {
+ ctrl1 |= ((digest_size << SPU2_HASH_TAG_LEN_SHIFT) &
+ SPU2_HASH_TAG_LEN);
+ }
+
+ /*
+ * Let's ask for the output pkt to include FMD, but don't need to
+ * get keys and IVs back in OMD.
+ */
+ if (return_md)
+ ctrl1 |= ((uint64_t)SPU2_RET_FMD_ONLY << SPU2_RETURN_MD_SHIFT);
+ else
+ ctrl1 |= ((uint64_t)SPU2_RET_NO_MD << SPU2_RETURN_MD_SHIFT);
+
+ /* Crypto API does not get assoc data back. So no need for AAD2. */
+
+ if (return_payload)
+ ctrl1 |= SPU2_RETURN_PAY;
+
+ fmd->ctrl1 = ctrl1;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl1:", &fmd->ctrl1, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl2_write(struct spu2_fmd *fmd, uint64_t cipher_offset,
+ uint64_t auth_key_len __rte_unused,
+ uint64_t auth_iv_len __rte_unused,
+ uint64_t cipher_key_len __rte_unused,
+ uint64_t cipher_iv_len __rte_unused)
+{
+ uint64_t aad1_offset;
+ uint64_t aad2_offset;
+ uint16_t aad1_len = 0;
+ uint64_t payload_offset;
+
+ /* AAD1 offset is from start of FD. FD length always 0. */
+ aad1_offset = 0;
+
+ aad2_offset = aad1_offset;
+ payload_offset = cipher_offset;
+ fmd->ctrl2 = aad1_offset |
+ (aad1_len << SPU2_AAD1_LEN_SHIFT) |
+ (aad2_offset << SPU2_AAD2_OFFSET_SHIFT) |
+ (payload_offset << SPU2_PL_OFFSET_SHIFT);
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl2:", &fmd->ctrl2, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl3_write(struct spu2_fmd *fmd, uint64_t payload_len)
+{
+ fmd->ctrl3 = payload_len & SPU2_PL_LEN;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl3:", &fmd->ctrl3, sizeof(uint64_t));
+#endif
+}
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_auth_algorithm a_alg,
+ enum rte_crypto_auth_operation auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *auth_key,
+ struct fsattr *iv)
+{
+ int ret;
+ uint64_t dst_size;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ enum spu2_hash_mode spu2_auth_mode;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ uint64_t auth_ksize = (auth_key != NULL) ? fsattr_sz(auth_key) : 0;
+ bool is_inbound = (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* either dst (generate) or mac (verify) must be provided */
+ if (dst == NULL && mac == NULL)
+ return -EINVAL;
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_GENERATE && dst != NULL)
+ dst_size = fsattr_sz(dst);
+ else if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && mac != NULL)
+ dst_size = fsattr_sz(mac);
+ else
+ return -EINVAL;
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(a_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, SPU2_VAL_NONE,
+ auth_ksize, SPU2_VAL_NONE, false,
+ false, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, iv_size,
+ dst_size, SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key, fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ /*
+ * In case of an authentication verify operation, pass the input mac
+ * data to the SPU2 engine.
+ */
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && mac != NULL) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(mac);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(mac);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains actual output from SPU2 and
+ * the status packet, so the dsts_count is always 2 below.
+ */
+ if (auth_op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+ } else {
+ /*
+ * In case of authentication verify operation, provide dummy
+ * location to SPU2 engine to generate hash. This is needed
+ * because SPU2 generates hash even in case of verify operation.
+ */
+ sreq->msgs.dsts_addr[0] = sreq->dptr;
+ sreq->msgs.dsts_len[0] = fsattr_sz(mac);
+ }
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_cipher_algorithm calgo,
+ enum rte_crypto_cipher_operation cipher_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *iv)
+{
+ int ret = 0;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ unsigned int xts_keylen;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ bool is_inbound = (cipher_op == RTE_CRYPTO_CIPHER_OP_DECRYPT);
+
+ if (src == NULL || dst == NULL || iv == NULL)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(calgo, cipher_key,
+ &spu2_ciph_type, &spu2_ciph_mode);
+ if (ret)
+ return -EINVAL;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, spu2_ciph_type, spu2_ciph_mode,
+ SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl1_write(fmd, SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(cipher_key), false, false,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(iv), SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE);
+
+ /* Nothing for FMD2 */
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ if (calgo == RTE_CRYPTO_CIPHER_AES_XTS) {
+ xts_keylen = fsattr_sz(cipher_key) / 2;
+ memcpy(sreq->cipher_key,
+ (uint8_t *)fsattr_va(cipher_key) + xts_keylen,
+ xts_keylen);
+ memcpy(sreq->cipher_key + xts_keylen,
+ fsattr_va(cipher_key), xts_keylen);
+ } else {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+ }
+
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv,
+ fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(iv);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains actual output from SPU2 and
+ * the status packet, so the dsts_count is always 2 below.
+ */
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+int
+bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_cipher_algorithm cipher_alg,
+ enum rte_crypto_cipher_operation cipher_op __rte_unused,
+ enum rte_crypto_auth_algorithm auth_alg,
+ enum rte_crypto_auth_operation auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first)
+{
+ int ret = 0;
+ int src_index = 0;
+ int dst_index = 0;
+ bool auth_first = 0;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ?
+ fsattr_sz(auth_key) : 0;
+ uint64_t cipher_ksize = (cipher_key != NULL) ?
+ fsattr_sz(cipher_key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ bool is_inbound = (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(auth_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(cipher_alg, cipher_key, &spu2_ciph_type,
+ &spu2_ciph_mode);
+ if (ret) {
+ BCMFS_DP_LOG(ERR, "cipher xlate error");
+ return -EINVAL;
+ }
+
+ auth_first = cipher_first ? 0 : 1;
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, auth_ksize,
+ cipher_ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, auth_ksize, 0,
+ cipher_ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key,
+ fsattr_va(auth_key), fsattr_sz(auth_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key),
+ fsattr_sz(cipher_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+ /*
+ * For an authentication verify operation the SPU2
+ * engine still generates a digest, but the
+ * application does not consume it. Program a dummy
+ * location to capture the digest data.
+ */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ sreq->dptr;
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ fsattr_pa(digest);
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
+
+static void
+bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf,
+ unsigned int *ivlen, bool is_esp)
+{
+ int L; /* size of length field, in bytes */
+
+ /*
+ * In RFC4309 mode, L is fixed at 4 bytes; otherwise, the
+ * caller-supplied IV carries (L-1) in the bottom 3 bits of
+ * its first byte, per RFC 3610.
+ */
+ if (is_esp)
+ L = CCM_ESP_L_VALUE;
+ else
+ L = ((ivbuf[0] & CCM_B0_L_PRIME) >>
+ CCM_B0_L_PRIME_SHIFT) + 1;
+
+ /* SPU2 doesn't want these length bytes nor the first byte... */
+ *ivlen -= (1 + L);
+ memmove(ivbuf, &ivbuf[1], *ivlen);
+}
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_aead_algorithm ae_algo,
+ enum rte_crypto_aead_operation aeop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv,
+ struct fsattr *aad, struct fsattr *digest)
+{
+ int src_index = 0;
+ int dst_index = 0;
+ bool auth_first = 0;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t ksize = (key != NULL) ? fsattr_sz(key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ bool is_inbound = (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ switch (ksize) {
+ case BCMFS_CRYPTO_AES128:
+ spu2_auth_type = SPU2_HASH_TYPE_AES128;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ spu2_auth_type = SPU2_HASH_TYPE_AES192;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ spu2_auth_type = SPU2_HASH_TYPE_AES256;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (ae_algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ spu2_auth_mode = SPU2_HASH_MODE_GCM;
+ spu2_ciph_mode = SPU2_CIPHER_MODE_GCM;
+ /*
+ * SPU2 needs 12 bytes of IV in total:
+ * an 8-byte random IV plus a 4-byte salt.
+ */
+ if (fsattr_sz(iv) > 12)
+ iv_size = 12;
+
+ /*
+ * On SPU2, AES-GCM runs cipher first on encrypt
+ * and auth first on decrypt.
+ */
+
+ auth_first = (aeop == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+ 0 : 1;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ if (ae_algo == RTE_CRYPTO_AEAD_AES_CCM) {
+ spu2_auth_mode = SPU2_HASH_MODE_CCM;
+ spu2_ciph_mode = SPU2_CIPHER_MODE_CCM;
+ if (iv != NULL) {
+ memcpy(sreq->iv, fsattr_va(iv),
+ fsattr_sz(iv));
+ iv_size = fsattr_sz(iv);
+ bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false);
+ }
+
+ /* opposite for ccm (auth 1st on encrypt) */
+ auth_first = (aeop == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+ 0 : 1;
+ }
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, 0,
+ ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, 0, 0,
+ ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (key != NULL && fsattr_sz(key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(key), fsattr_sz(key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(key),
+ fsattr_sz(key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ if (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT) {
+ /*
+ * For a decrypt operation the SPU2 engine still
+ * generates a digest, but the application does not
+ * consume it. Program a dummy location to capture
+ * the digest data.
+ */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ sreq->dptr;
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ fsattr_pa(digest);
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
new file mode 100644
index 0000000000..d9594246b5
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_ENGINE_H_
+#define _BCMFS_SYM_ENGINE_H_
+
+#include <rte_crypto_sym.h>
+
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* structure to hold an element's attributes */
+struct fsattr {
+ void *va;
+ uint64_t pa;
+ uint64_t sz;
+};
+
+#define fsattr_va(__ptr) ((__ptr)->va)
+#define fsattr_pa(__ptr) ((__ptr)->pa)
+#define fsattr_sz(__ptr) ((__ptr)->sz)
+
+/*
+ * Macros for Crypto h/w constraints
+ */
+
+#define BCMFS_CRYPTO_AES_BLOCK_SIZE 16
+#define BCMFS_CRYPTO_AES_MIN_KEY_SIZE 16
+#define BCMFS_CRYPTO_AES_MAX_KEY_SIZE 32
+
+#define BCMFS_CRYPTO_DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_DES_KEY_SIZE 8
+
+#define BCMFS_CRYPTO_3DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_3DES_KEY_SIZE (3 * 8)
+
+#define BCMFS_CRYPTO_MD5_DIGEST_SIZE 16
+#define BCMFS_CRYPTO_MD5_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA1_DIGEST_SIZE 20
+#define BCMFS_CRYPTO_SHA1_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA224_DIGEST_SIZE 28
+#define BCMFS_CRYPTO_SHA224_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA256_DIGEST_SIZE 32
+#define BCMFS_CRYPTO_SHA256_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA384_DIGEST_SIZE 48
+#define BCMFS_CRYPTO_SHA384_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA512_DIGEST_SIZE 64
+#define BCMFS_CRYPTO_SHA512_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE (224 / 8)
+#define BCMFS_CRYPTO_SHA3_224_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE (256 / 8)
+#define BCMFS_CRYPTO_SHA3_256_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE (384 / 8)
+#define BCMFS_CRYPTO_SHA3_384_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE (512 / 8)
+#define BCMFS_CRYPTO_SHA3_512_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE)
+
+enum bcmfs_crypto_aes_cipher_key {
+ BCMFS_CRYPTO_AES128 = 16,
+ BCMFS_CRYPTO_AES192 = 24,
+ BCMFS_CRYPTO_AES256 = 32,
+};
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *req,
+ enum rte_crypto_cipher_algorithm c_algo,
+ enum rte_crypto_cipher_operation cop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv);
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *req,
+ enum rte_crypto_auth_algorithm a_algo,
+ enum rte_crypto_auth_operation aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *key,
+ struct fsattr *iv);
+
+int
+bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *req,
+ enum rte_crypto_cipher_algorithm c_algo,
+ enum rte_crypto_cipher_operation cop,
+ enum rte_crypto_auth_algorithm a_algo,
+ enum rte_crypto_auth_operation aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first);
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *req,
+ enum rte_crypto_aead_algorithm ae_algo,
+ enum rte_crypto_aead_operation aeop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv,
+ struct fsattr *aad, struct fsattr *digest);
+
+#endif /* _BCMFS_SYM_ENGINE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 381ca8ea48..568797b4fd 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -132,6 +132,12 @@ static void
spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
{
memset(sr, 0, sizeof(*sr));
+ sr->fptr = iova;
+ sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key);
+ sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key);
+ sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv);
+ sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest);
+ sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp);
}
static void
@@ -244,6 +250,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
uint16_t nb_ops)
{
int i, j;
+ int retval;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
struct bcmfs_sym_session *sess;
@@ -273,6 +280,11 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
/* save context */
qp->infl_msgs[i] = &sreq->msgs;
qp->infl_msgs[i]->ctx = (void *)sreq;
+
+ /* pre-process the request for crypto h/w acceleration */
+ retval = bcmfs_process_sym_crypto_op(ops[i], sess, sreq);
+ if (unlikely(retval < 0))
+ goto enqueue_err;
}
/* Send burst request to hw QP */
enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
@@ -289,6 +301,17 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
return enq;
}
+static void bcmfs_sym_set_request_status(struct rte_crypto_op *op,
+ struct bcmfs_sym_request *out)
+{
+ if (*out->resp == BCMFS_SYM_RESPONSE_SUCCESS)
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ else if (*out->resp == BCMFS_SYM_RESPONSE_HASH_TAG_ERROR)
+ op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+ else
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static uint16_t
bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
struct rte_crypto_op **ops,
@@ -308,6 +331,9 @@ bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
for (i = 0; i < deq; i++) {
sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+ /* set the status based on the response from the crypto h/w */
+ bcmfs_sym_set_request_status(sreq->op, sreq);
+
ops[pkts++] = sreq->op;
rte_mempool_put(qp->sr_mp, sreq);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
index 0f0b051f1e..e53c50adc1 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_req.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -6,13 +6,53 @@
#ifndef _BCMFS_SYM_REQ_H_
#define _BCMFS_SYM_REQ_H_
+#include <rte_cryptodev.h>
+
#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+
+/* Fixed SPU2 Metadata */
+struct spu2_fmd {
+ uint64_t ctrl0;
+ uint64_t ctrl1;
+ uint64_t ctrl2;
+ uint64_t ctrl3;
+};
/*
* This structure holds the supportive data required to process a
* rte_crypto_op
*/
struct bcmfs_sym_request {
+ /* spu2 engine related data */
+ struct spu2_fmd fmd;
+ /* cipher key */
+ uint8_t cipher_key[BCMFS_MAX_KEY_SIZE];
+ /* auth key */
+ uint8_t auth_key[BCMFS_MAX_KEY_SIZE];
+ /* initialization vector (IV) */
+ uint8_t iv[BCMFS_MAX_IV_SIZE];
+ /* digest data output from crypto h/w */
+ uint8_t digest[BCMFS_MAX_DIGEST_SIZE];
+ /* 2-Bytes response from crypto h/w */
+ uint8_t resp[2];
+ /*
+ * Below are all iovas for above members
+ * from top
+ */
+ /* iova for fmd */
+ rte_iova_t fptr;
+ /* iova for cipher key */
+ rte_iova_t cptr;
+ /* iova for auth key */
+ rte_iova_t aptr;
+ /* iova for iv */
+ rte_iova_t iptr;
+ /* iova for digest */
+ rte_iova_t dptr;
+ /* iova for response */
+ rte_iova_t rptr;
+
/* bcmfs qp message for h/w queues to process */
struct bcmfs_qp_message msgs;
/* crypto op */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 2e86c733e1..7aa0f05dbd 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -14,5 +14,7 @@ sources = files(
'hw/bcmfs_rm_common.c',
'bcmfs_sym_pmd.c',
'bcmfs_sym_capabilities.c',
- 'bcmfs_sym_session.c'
+ 'bcmfs_sym_session.c',
+ 'bcmfs_sym.c',
+ 'bcmfs_sym_engine.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v3 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
` (6 preceding siblings ...)
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
@ 2020-10-05 16:26 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 16:26 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add global test suite for bcmfs crypto pmd
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
app/test/test_cryptodev.c | 17 +++++++++++++++++
app/test/test_cryptodev.h | 1 +
doc/guides/cryptodevs/bcmfs.rst | 11 +++++++++++
3 files changed, 29 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c1..9157115ab3 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -13041,6 +13041,22 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_cryptodev_bcmfs(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_BCMFS_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "BCMFS PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_BCMFS is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
+ return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
@@ -13063,3 +13079,4 @@ REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
+REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e0552..c58126368c 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -70,6 +70,7 @@
#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
/**
* Write (spread) data from buffer to mbuf data
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index aaa6e1af70..f7b7cde6d7 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -96,3 +96,14 @@ Limitations
* Only supports the session-oriented API implementation (session-less APIs are not supported).
* CCM is not supported on Broadcom`s SoCs having FlexSparc4 unit.
+
+Testing
+-------
+
+The symmetric crypto operations of the BCMFS crypto PMD can be verified by running the
+test application:
+
+.. code-block:: console
+
+ ./test
+ RTE>>cryptodev_bcmfs_autotest
--
2.17.1
* Re: [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-10-05 15:39 ` Akhil Goyal
@ 2020-10-05 16:46 ` Ajit Khaparde
2020-10-05 17:01 ` Vikas Gupta
0 siblings, 1 reply; 75+ messages in thread
From: Ajit Khaparde @ 2020-10-05 16:46 UTC (permalink / raw)
To: Akhil Goyal; +Cc: Vikas Gupta, dev, vikram.prakash
On Mon, Oct 5, 2020 at 8:39 AM Akhil Goyal <akhil.goyal@nxp.com> wrote:
>
> Hi Vikas
>
> >
> > >
> > > Hi,
> > > This patchset contains support for Crypto offload on Broadcom’s
> > > Stingray/Stingray2 SoCs having FlexSparc unit.
> > > BCMFS is an acronym for Broadcom FlexSparc device used in the patchest.
> > >
> > > The patchset progressively adds major modules as below.
> > > a) Detection of platform-device based on the known registered platforms and
> > > attaching with VFIO.
> > > b) Creation of Cryptodevice.
> > > c) Addition of session handling.
> > > d) Add Cryptodevice into test Cryptodev framework.
> > >
> > > The patchset has been tested on the above mentioned SoCs.
> > >
>
>
> > Release notes missing.
>
> When do you plan to submit the next version. I plan to merge it in RC1 timeline.
Akhil, You can expect a new version in a day - worst case two.
* Re: [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-10-05 16:46 ` Ajit Khaparde
@ 2020-10-05 17:01 ` Vikas Gupta
0 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-05 17:01 UTC (permalink / raw)
To: Ajit Khaparde; +Cc: Akhil Goyal, dev, vikram.prakash
Hi Akhil,
On Mon, Oct 5, 2020 at 10:17 PM Ajit Khaparde
<ajit.khaparde@broadcom.com> wrote:
>
> On Mon, Oct 5, 2020 at 8:39 AM Akhil Goyal <akhil.goyal@nxp.com> wrote:
> >
> > Hi Vikas
> >
> > >
> > > >
> > > > Hi,
> > > > This patchset contains support for Crypto offload on Broadcom’s
> > > > Stingray/Stingray2 SoCs having FlexSparc unit.
> > > > BCMFS is an acronym for Broadcom FlexSparc device used in the patchest.
> > > >
> > > > The patchset progressively adds major modules as below.
> > > > a) Detection of platform-device based on the known registered platforms and
> > > > attaching with VFIO.
> > > > b) Creation of Cryptodevice.
> > > > c) Addition of session handling.
> > > > d) Add Cryptodevice into test Cryptodev framework.
> > > >
> > > > The patchset has been tested on the above mentioned SoCs.
> > > >
> >
> >
> > > Release notes missing.
> >
> > When do you plan to submit the next version. I plan to merge it in RC1 timeline.
> Akhil, You can expect a new version in a day - worst case two.
I have pushed the v3 patchset.
Thanks,
Vikas
* [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
` (7 preceding siblings ...)
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
` (8 more replies)
8 siblings, 9 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta
Hi,
This patchset contains support for Crypto offload on Broadcom’s
Stingray/Stingray2 SoCs having FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used in the patchset.
The patchset progressively adds major modules as below.
a) Detection of platform-device based on the known registered platforms and attaching with VFIO.
b) Creation of Cryptodevice.
c) Addition of session handling.
d) Add Cryptodevice into test Cryptodev framework.
The patchset has been tested on the above mentioned SoCs.
Regards,
Vikas
Changes from v0->v1:
Updated the ABI version in file .../crypto/bcmfs/rte_pmd_bcmfs_version.map
Changes from v1->v2:
- Fix compilation errors and coding style warnings.
- Use global test crypto suite suggested by Adam Dybkowski
Changes from v2->v3:
- Release notes updated.
- bcmfs.rst updated with missing information about installation.
- Review comments from patch1 from v2 addressed.
- Updated description about dependency of PMD driver on VFIO_PRESENT.
- Fixed typo in bcmfs_hw_defs.h (comments on patch3 from v2 addressed)
- Comments on patch6 from v2 addressed and capability list is fixed.
Removed redundant enums and macros from the file
bcmfs_sym_defs.h and updated other impacted APIs accordingly.
patch7 too is updated due to removal of redundancy.
Thanks! to Akhil for pointing out the redundancy.
- Fix minor code style issues in few files as part of review.
Changes from v3->v4:
- Code style issues fixed.
- Change of barrier API in bcmfs4_rm.c and bcmfs5_rm.c
Vikas Gupta (8):
crypto/bcmfs: add BCMFS driver
crypto/bcmfs: add vfio support
crypto/bcmfs: add queue pair management API
crypto/bcmfs: add HW queue pair operations
crypto/bcmfs: create a symmetric cryptodev
crypto/bcmfs: add session handling and capabilities
crypto/bcmfs: add crypto HW module
crypto/bcmfs: add crypto pmd into cryptodev test
MAINTAINERS | 7 +
app/test/test_cryptodev.c | 17 +
app/test/test_cryptodev.h | 1 +
doc/guides/cryptodevs/bcmfs.rst | 109 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 +
doc/guides/cryptodevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 332 +++++
drivers/crypto/bcmfs/bcmfs_device.h | 76 ++
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 32 +
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +
drivers/crypto/bcmfs/bcmfs_qp.c | 383 ++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 142 ++
drivers/crypto/bcmfs/bcmfs_sym.c | 289 +++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 +++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 34 +
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 1155 +++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 115 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 426 ++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 62 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 282 ++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 109 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 743 +++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 ++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 51 +
drivers/crypto/bcmfs/meson.build | 20 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 1 +
35 files changed, 6253 insertions(+)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
--
2.17.1
* [dpdk-dev] [PATCH v4 1/8] crypto/bcmfs: add BCMFS driver
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 2/8] crypto/bcmfs: add vfio support Vikas Gupta
` (7 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add Broadcom FlexSparc (FS) device creation driver, which registers with
the vdev bus and creates a device. Add APIs for logging, supporting
documentation and a MAINTAINERS file entry.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
MAINTAINERS | 7 +
doc/guides/cryptodevs/bcmfs.rst | 51 ++++
doc/guides/cryptodevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/bcmfs/bcmfs_device.c | 257 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_device.h | 43 +++
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +++
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +++
drivers/crypto/bcmfs/meson.build | 10 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 1 +
11 files changed, 450 insertions(+)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index c0abbe0fc8..49c015ebbe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1081,6 +1081,13 @@ F: drivers/crypto/zuc/
F: doc/guides/cryptodevs/zuc.rst
F: doc/guides/cryptodevs/features/zuc.ini
+Broadcom FlexSparc
+M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
+M: Vikas Gupta <vikas.gupta@broadcom.com>
+F: drivers/crypto/bcmfs/
+F: doc/guides/cryptodevs/bcmfs.rst
+F: doc/guides/cryptodevs/features/bcmfs.ini
Compression Drivers
-------------------
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
new file mode 100644
index 0000000000..6b68673df0
--- /dev/null
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -0,0 +1,51 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2020 Broadcom
+
+Broadcom FlexSparc Crypto Poll Mode Driver
+==========================================
+
+The FlexSparc crypto poll mode driver (BCMFS PMD) provides support for offloading
+cryptographic operations to Broadcom SoCs that contain a FlexSparc4/FlexSparc5 unit.
+Detailed information about SoCs can be found at `Broadcom Official Website
+<https://www.broadcom.com/products/ethernet-connectivity/network-adapters/smartnic>`__.
+
+Supported Broadcom SoCs
+-----------------------
+
+* Stingray
+* Stingray2
+
+Installation
+------------
+Information about kernel, rootfs and toolchain can be found at
+`Broadcom Official Website <https://www.broadcom.com/products/ethernet-connectivity
+/network-adapters/smartnic/stingray-software>`__.
+
+ .. Note::
+    To run the BCMFS PMD, it must be compiled with the VFIO_PRESENT flag
+    enabled; rte_vfio.h sets this flag on platforms with VFIO support.
+
+The BCMFS crypto PMD may be compiled natively on a Stingray/Stingray2 platform or
+cross-compiled on an x86 platform. For example, the commands below
+cross-compile the PMD on an x86 platform.
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ meson <dest-dir> --cross-file config/arm/arm64_stingray_linux_gcc
+ cd <dest-dir>
+ ninja
+
+Initialization
+--------------
+The supported platform devices should be present under the
+*/sys/bus/platform/devices/fs<version>/<dev_name>* path on the booted kernel.
+For the BCMFS PMD to use a device node, the node must be bound to the
+vfio-platform kernel module. For example, the commands below bind a device
+node to VFIO.
+
+.. code-block:: console
+
+ SETUP_SYSFS_DEV_NAME=67000000.crypto_mbox
+ io_device_name="vfio-platform"
+ echo $io_device_name > /sys/bus/platform/devices/${SETUP_SYSFS_DEV_NAME}/driver_override
+ echo ${SETUP_SYSFS_DEV_NAME} > /sys/bus/platform/drivers_probe
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a67ed5a282..279f56a002 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -13,6 +13,7 @@ Crypto Device Drivers
aesni_mb
aesni_gcm
armv8
+ bcmfs
caam_jr
ccp
dpaa2_sec
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 73ac08fb0e..8643330321 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -185,3 +185,8 @@ Tested Platforms
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=======================================================
+
+* **Added Broadcom BCMFS symmetric crypto PMD.**
+
+ Added a symmetric crypto PMD for Broadcom FlexSparc crypto units.
+ See :doc:`../cryptodevs/bcmfs` guide for more details on this new PMD.
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
new file mode 100644
index 0000000000..f1050ff112
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <dirent.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+
+struct bcmfs_device_attr {
+ const char name[BCMFS_MAX_PATH_LEN];
+ const char suffix[BCMFS_DEV_NAME_LEN];
+ const enum bcmfs_device_type type;
+ const uint32_t offset;
+ const uint32_t version;
+};
+
+/* BCMFS supported devices */
+static struct bcmfs_device_attr dev_table[] = {
+ {
+ .name = "fs4",
+ .suffix = "crypto_mbox",
+ .type = BCMFS_SYM_FS4,
+ .offset = 0,
+ .version = BCMFS_SYM_FS4_VERSION
+ },
+ {
+ .name = "fs5",
+ .suffix = "mbox",
+ .type = BCMFS_SYM_FS5,
+ .offset = 0,
+ .version = BCMFS_SYM_FS5_VERSION
+ },
+ {
+ /* sentinel */
+ }
+};
+
+TAILQ_HEAD(fsdev_list, bcmfs_device);
+static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
+
+static struct bcmfs_device *
+fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
+ char *dirpath,
+ char *devname,
+ enum bcmfs_device_type dev_type __rte_unused)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = rte_calloc(__func__, 1, sizeof(*fsdev), 0);
+ if (!fsdev)
+ return NULL;
+
+ if (strlen(dirpath) >= sizeof(fsdev->dirname)) {
+ BCMFS_LOG(ERR, "dir path name is too long");
+ goto cleanup;
+ }
+
+ if (strlen(devname) >= sizeof(fsdev->name)) {
+ BCMFS_LOG(ERR, "devname is too long");
+ goto cleanup;
+ }
+
+ strcpy(fsdev->dirname, dirpath);
+ strcpy(fsdev->name, devname);
+
+ fsdev->vdev = vdev;
+
+ TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
+
+ return fsdev;
+
+cleanup:
+ rte_free(fsdev);
+
+ return NULL;
+}
+
+static struct bcmfs_device *
+find_fsdev(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ TAILQ_FOREACH(fsdev, &fsdev_list, next)
+ if (fsdev->vdev == vdev)
+ return fsdev;
+
+ return NULL;
+}
+
+static void
+fsdev_release(struct bcmfs_device *fsdev)
+{
+ if (fsdev == NULL)
+ return;
+
+ TAILQ_REMOVE(&fsdev_list, fsdev, next);
+ rte_free(fsdev);
+}
+
+/* qsort() comparator for device I/O addresses */
+static int
+cmprator(const void *a, const void *b)
+{
+ return (*(const unsigned int *)a - *(const unsigned int *)b);
+}
+
+static int
+fsdev_find_all_devs(const char *path, const char *search,
+ uint32_t *devs)
+{
+ DIR *dir;
+ struct dirent *entry;
+ int count = 0;
+ char addr[BCMFS_MAX_NODES][BCMFS_MAX_PATH_LEN];
+ int i;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return 0;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (strstr(entry->d_name, search)) {
+ strlcpy(addr[count], entry->d_name,
+ BCMFS_MAX_PATH_LEN);
+ count++;
+ }
+ }
+
+ closedir(dir);
+
+ for (i = 0 ; i < count; i++)
+ devs[i] = (uint32_t)strtoul(addr[i], NULL, 16);
+ /* sort the devices based on IO addresses */
+ qsort(devs, count, sizeof(uint32_t), cmprator);
+
+ return count;
+}
+
+static bool
+fsdev_find_sub_dir(char *path, const char *search, char *output)
+{
+ DIR *dir;
+ struct dirent *entry;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return false;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (!strcmp(entry->d_name, search)) {
+ strlcpy(output, entry->d_name, BCMFS_MAX_PATH_LEN);
+ closedir(dir);
+ return true;
+ }
+ }
+
+ closedir(dir);
+
+ return false;
+}
+
+
+static int
+bcmfs_vdev_probe(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+ char top_dirpath[BCMFS_MAX_PATH_LEN];
+ char sub_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirname[BCMFS_MAX_PATH_LEN];
+ uint32_t fsdev_dev[BCMFS_MAX_NODES];
+ enum bcmfs_device_type dtype;
+ int i = 0;
+ int dev_idx;
+ int count = 0;
+ bool found = false;
+
+ snprintf(top_dirpath, sizeof(top_dirpath), "%s", SYSFS_BCM_PLTFORM_DEVICES);
+ while (strlen(dev_table[i].name)) {
+ found = fsdev_find_sub_dir(top_dirpath,
+ dev_table[i].name,
+ sub_dirpath);
+ if (found)
+ break;
+ i++;
+ }
+ if (!found) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ dev_idx = i;
+ dtype = dev_table[i].type;
+
+ snprintf(out_dirpath, sizeof(out_dirpath), "%s/%s",
+ top_dirpath, sub_dirpath);
+ count = fsdev_find_all_devs(out_dirpath,
+ dev_table[dev_idx].suffix,
+ fsdev_dev);
+ if (!count) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ i = 0;
+ while (count) {
+ /* format the device name present in the path */
+ snprintf(out_dirname, sizeof(out_dirname), "%x.%s",
+ fsdev_dev[i], dev_table[dev_idx].suffix);
+ fsdev = fsdev_allocate_one_dev(vdev, out_dirpath,
+ out_dirname, dtype);
+ if (!fsdev) {
+ count--;
+ i++;
+ continue;
+ }
+ break;
+ }
+ if (fsdev == NULL) {
+ BCMFS_LOG(ERR, "All supported devs busy");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int
+bcmfs_vdev_remove(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = find_fsdev(vdev);
+ if (fsdev == NULL)
+ return -ENODEV;
+
+ fsdev_release(fsdev);
+ return 0;
+}
+
+/* Register with vdev */
+static struct rte_vdev_driver rte_bcmfs_pmd = {
+ .probe = bcmfs_vdev_probe,
+ .remove = bcmfs_vdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(bcmfs_pmd,
+ rte_bcmfs_pmd);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
new file mode 100644
index 0000000000..1a4d0cf365
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEVICE_H_
+#define _BCMFS_DEVICE_H_
+
+#include <sys/queue.h>
+
+#include <rte_bus_vdev.h>
+
+#include "bcmfs_logs.h"
+
+/* max number of dev nodes */
+#define BCMFS_MAX_NODES 4
+#define BCMFS_MAX_PATH_LEN 512
+#define BCMFS_DEV_NAME_LEN 64
+
+/* Path for BCM-Platform device directory */
+#define SYSFS_BCM_PLTFORM_DEVICES "/sys/bus/platform/devices"
+
+#define BCMFS_SYM_FS4_VERSION 0x76303031
+#define BCMFS_SYM_FS5_VERSION 0x76303032
+
+/* Supported devices */
+enum bcmfs_device_type {
+ BCMFS_SYM_FS4,
+ BCMFS_SYM_FS5,
+ BCMFS_UNKNOWN
+};
+
+struct bcmfs_device {
+ TAILQ_ENTRY(bcmfs_device) next;
+ /* Directory path for vfio */
+ char dirname[BCMFS_MAX_PATH_LEN];
+ /* BCMFS device name */
+ char name[BCMFS_DEV_NAME_LEN];
+ /* Parent vdev */
+ struct rte_vdev_device *vdev;
+};
+
+#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.c b/drivers/crypto/bcmfs/bcmfs_logs.c
new file mode 100644
index 0000000000..86f4ff3b53
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_hexdump.h>
+
+#include "bcmfs_logs.h"
+
+int bcmfs_conf_logtype;
+int bcmfs_dp_logtype;
+
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *title,
+ const void *buf, unsigned int len)
+{
+ if (level > rte_log_get_global_level())
+ return 0;
+ if (level > (uint32_t)(rte_log_get_level(logtype)))
+ return 0;
+
+ rte_hexdump(rte_log_get_stream(), title, buf, len);
+ return 0;
+}
+
+RTE_INIT(bcmfs_device_init_log)
+{
+ /* Configuration and general logs */
+ bcmfs_conf_logtype = rte_log_register("pmd.bcmfs_config");
+ if (bcmfs_conf_logtype >= 0)
+ rte_log_set_level(bcmfs_conf_logtype, RTE_LOG_NOTICE);
+
+ /* data-path logs */
+ bcmfs_dp_logtype = rte_log_register("pmd.bcmfs_fp");
+ if (bcmfs_dp_logtype >= 0)
+ rte_log_set_level(bcmfs_dp_logtype, RTE_LOG_NOTICE);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.h b/drivers/crypto/bcmfs/bcmfs_logs.h
new file mode 100644
index 0000000000..c03a49b75c
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_LOGS_H_
+#define _BCMFS_LOGS_H_
+
+#include <rte_log.h>
+
+extern int bcmfs_conf_logtype;
+extern int bcmfs_dp_logtype;
+
+#define BCMFS_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_conf_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_dp_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_HEXDUMP_LOG(level, title, buf, len) \
+ bcmfs_hexdump_log(RTE_LOG_ ## level, bcmfs_dp_logtype, title, buf, len)
+
+/**
+ * bcmfs_hexdump_log() - Dump a memory buffer in hex dump format.
+ *
+ * The output is written to the stream used by the rte_log infrastructure,
+ * subject to the given log level and log type.
+ */
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *heading,
+ const void *buf, unsigned int len);
+
+#endif /* _BCMFS_LOGS_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
new file mode 100644
index 0000000000..a4bdd8ee5d
--- /dev/null
+++ b/drivers/crypto/bcmfs/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Broadcom
+# All rights reserved.
+#
+
+deps += ['eal', 'bus_vdev']
+sources = files(
+ 'bcmfs_logs.c',
+ 'bcmfs_device.c'
+ )
diff --git a/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
new file mode 100644
index 0000000000..299ae632da
--- /dev/null
+++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
@@ -0,0 +1,3 @@
+DPDK_21.0 {
+ local: *;
+};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index a2423507ad..93c2968acb 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -8,6 +8,7 @@ endif
drivers = ['aesni_gcm',
'aesni_mb',
'armv8',
+ 'bcmfs',
'caam_jr',
'ccp',
'dpaa_sec',
--
2.17.1
* [dpdk-dev] [PATCH v4 2/8] crypto/bcmfs: add vfio support
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 3/8] crypto/bcmfs: add queue pair management API Vikas Gupta
` (6 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add VFIO support to the BCMFS PMD.
The BCMFS PMD depends on the VFIO_PRESENT flag, which is enabled in
rte_vfio.h on platforms with VFIO support.
If this flag is not enabled on the build platform, the driver silently
returns an error when executed.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 5 ++
drivers/crypto/bcmfs/bcmfs_device.h | 6 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +++++
drivers/crypto/bcmfs/meson.build | 3 +-
5 files changed, 137 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index f1050ff112..0ccddea202 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -12,6 +12,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -72,6 +73,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
fsdev->vdev = vdev;
+ /* attach to VFIO */
+ if (bcmfs_attach_vfio(fsdev))
+ goto cleanup;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 1a4d0cf365..f99d57d4bd 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -38,6 +38,12 @@ struct bcmfs_device {
char name[BCMFS_DEV_NAME_LEN];
/* Parent vdev */
struct rte_vdev_device *vdev;
+ /* vfio handle */
+ int vfio_dev_fd;
+ /* mapped address */
+ uint8_t *mmap_addr;
+ /* mapped size */
+ uint32_t mmap_size;
};
#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
new file mode 100644
index 0000000000..dc2def580f
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <errno.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+
+#include <rte_vfio.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
+
+#ifdef VFIO_PRESENT
+static int
+vfio_map_dev_obj(const char *path, const char *dev_obj,
+ uint32_t *size, void **addr, int *dev_fd)
+{
+ int32_t ret;
+ struct vfio_group_status status = { .argsz = sizeof(status) };
+
+ struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
+ struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
+
+ ret = rte_vfio_setup_device(path, dev_obj, dev_fd, &d_info);
+ if (ret) {
+ BCMFS_LOG(ERR, "VFIO Setting for device failed");
+ return ret;
+ }
+
+ /* get the device region info */
+ ret = ioctl(*dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "Error in VFIO getting REGION_INFO");
+ goto map_failed;
+ }
+
+ *addr = mmap(NULL, reg_info.size,
+ PROT_WRITE | PROT_READ, MAP_SHARED,
+ *dev_fd, reg_info.offset);
+ if (*addr == MAP_FAILED) {
+ BCMFS_LOG(ERR, "Error mapping region (errno = %d)", errno);
+ ret = errno;
+ goto map_failed;
+ }
+ *size = reg_info.size;
+
+ return 0;
+
+map_failed:
+ rte_vfio_release_device(path, dev_obj, *dev_fd);
+
+ return ret;
+}
+
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+ int vfio_dev_fd;
+ void *v_addr = NULL;
+ uint32_t size = 0;
+
+ ret = vfio_map_dev_obj(dev->dirname, dev->name,
+ &size, &v_addr, &vfio_dev_fd);
+ if (ret)
+ return -1;
+
+ dev->mmap_size = size;
+ dev->mmap_addr = v_addr;
+ dev->vfio_dev_fd = vfio_dev_fd;
+
+ return 0;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+
+ if (dev == NULL)
+ return;
+
+ /* unmap the addr */
+ munmap(dev->mmap_addr, dev->mmap_size);
+ /* release the device */
+ ret = rte_vfio_release_device(dev->dirname, dev->name,
+ dev->vfio_dev_fd);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "cannot release device");
+ return;
+ }
+}
+#else
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev __rte_unused)
+{
+ return -1;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev __rte_unused)
+{
+}
+#endif
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h
new file mode 100644
index 0000000000..d0fdf6483f
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_VFIO_H_
+#define _BCMFS_VFIO_H_
+
+/* Attach the bcmfs device to vfio */
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev);
+
+/* Release the bcmfs device from vfio */
+void
+bcmfs_release_vfio(struct bcmfs_device *dev);
+
+#endif /* _BCMFS_VFIO_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index a4bdd8ee5d..fd39eba20e 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -6,5 +6,6 @@
deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
- 'bcmfs_device.c'
+ 'bcmfs_device.c',
+ 'bcmfs_vfio.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v4 3/8] crypto/bcmfs: add queue pair management API
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 2/8] crypto/bcmfs: add vfio support Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 4/8] crypto/bcmfs: add HW queue pair operations Vikas Gupta
` (5 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair management APIs, which will be used by the crypto device
to manage h/w queues. A bcmfs device structure owns multiple queue pairs,
the number of which is derived from the size of its mapped address space.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 4 +
drivers/crypto/bcmfs/bcmfs_device.h | 5 +
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 32 +++
drivers/crypto/bcmfs/bcmfs_qp.c | 345 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 122 ++++++++++
drivers/crypto/bcmfs/meson.build | 3 +-
6 files changed, 510 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 0ccddea202..a01a5c79d5 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -12,6 +12,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
@@ -77,6 +78,9 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
if (bcmfs_attach_vfio(fsdev))
goto cleanup;
+ /* Maximum number of QPs supported */
+ fsdev->max_hw_qps = fsdev->mmap_size / BCMFS_HW_QUEUE_IO_ADDR_LEN;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index f99d57d4bd..dede5b82dc 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -11,6 +11,7 @@
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
/* max number of dev nodes */
#define BCMFS_MAX_NODES 4
@@ -44,6 +45,10 @@ struct bcmfs_device {
uint8_t *mmap_addr;
/* mapped size */
uint32_t mmap_size;
+ /* max number of h/w queue pairs detected */
+ uint16_t max_hw_qps;
+ /* current qpairs in use */
+ struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
};
#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
new file mode 100644
index 0000000000..7d5bb5d8fe
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_HW_DEFS_H_
+#define _BCMFS_HW_DEFS_H_
+
+#include <rte_atomic.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#define FS_RING_REGS_SIZE 0x10000
+#define FS_RING_DESC_SIZE 8
+#define FS_RING_BD_ALIGN_ORDER 12
+#define FS_RING_BD_DESC_PER_REQ 32
+#define FS_RING_CMPL_ALIGN_ORDER 13
+#define FS_RING_CMPL_SIZE (1024 * FS_RING_DESC_SIZE)
+#define FS_RING_MAX_REQ_COUNT 1024
+#define FS_RING_PAGE_SHFT 12
+#define FS_RING_PAGE_SIZE BIT(FS_RING_PAGE_SHFT)
+
+/* Minimum and maximum number of requests supported */
+#define FS_RM_MAX_REQS 4096
+#define FS_RM_MIN_REQS 32
+
+#endif /* _BCMFS_HW_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
new file mode 100644
index 0000000000..864e7bb746
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include <rte_atomic.h>
+#include <rte_bitmap.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_hw_defs.h"
+
+/* TX or submission queue name */
+static const char *txq_name = "tx";
+/* Completion or receive queue name */
+static const char *cmplq_name = "cmpl";
+
+/* Helper function */
+static int
+bcmfs_qp_check_queue_alignment(uint64_t phys_addr,
+ uint32_t align)
+{
+ if (((align - 1) & phys_addr) != 0)
+ return -EINVAL;
+ return 0;
+}
+
+static void
+bcmfs_queue_delete(struct bcmfs_queue *queue,
+ uint16_t queue_pair_id)
+{
+ const struct rte_memzone *mz;
+ int status = 0;
+
+ if (queue == NULL) {
+ BCMFS_LOG(DEBUG, "Invalid queue");
+ return;
+ }
+ BCMFS_LOG(DEBUG, "Free ring %d type %d, memzone: %s",
+ queue_pair_id, queue->q_type, queue->memz_name);
+
+ mz = rte_memzone_lookup(queue->memz_name);
+ if (mz != NULL) {
+ /* Write an unused pattern to the queue memory. */
+ memset(queue->base_addr, 0x9B, queue->queue_size);
+ status = rte_memzone_free(mz);
+ if (status != 0)
+ BCMFS_LOG(ERR, "Error %d on freeing queue %s",
+ status, queue->memz_name);
+ } else {
+ BCMFS_LOG(DEBUG, "queue %s doesn't exist",
+ queue->memz_name);
+ }
+}
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+ int socket_id, unsigned int align)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(queue_name);
+ if (mz != NULL) {
+ if (((size_t)queue_size <= mz->len) &&
+ (socket_id == SOCKET_ID_ANY ||
+ socket_id == mz->socket_id)) {
+ BCMFS_LOG(DEBUG, "re-use memzone already "
+ "allocated for %s", queue_name);
+ return mz;
+ }
+
+ BCMFS_LOG(ERR, "Incompatible memzone already "
+ "allocated %s, size %u, socket %d. "
+ "Requested size %u, socket %u",
+ queue_name, (uint32_t)mz->len,
+ mz->socket_id, queue_size, socket_id);
+ return NULL;
+ }
+
+ BCMFS_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+ queue_name, queue_size, socket_id);
+ return rte_memzone_reserve_aligned(queue_name, queue_size,
+ socket_id, RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+static int
+bcmfs_queue_create(struct bcmfs_queue *queue,
+ struct bcmfs_qp_config *qp_conf,
+ uint16_t queue_pair_id,
+ enum bcmfs_queue_type qtype)
+{
+ const struct rte_memzone *qp_mz;
+ char q_name[16];
+ unsigned int align;
+ uint32_t queue_size_bytes;
+ int ret;
+
+ if (qtype == BCMFS_RM_TXQ) {
+ strlcpy(q_name, txq_name, sizeof(q_name));
+ align = 1U << FS_RING_BD_ALIGN_ORDER;
+ queue_size_bytes = qp_conf->nb_descriptors *
+ qp_conf->max_descs_req * FS_RING_DESC_SIZE;
+ queue_size_bytes = RTE_ALIGN_MUL_CEIL(queue_size_bytes,
+ FS_RING_PAGE_SIZE);
+ /* queue size is rounded up to a multiple of 4K pages */
+ } else if (qtype == BCMFS_RM_CPLQ) {
+ strlcpy(q_name, cmplq_name, sizeof(q_name));
+ align = 1U << FS_RING_CMPL_ALIGN_ORDER;
+
+ /*
+ * Memory size for cmpl + MSI
+ * The MSI area is allocated here as well, hence twice the
+ * completion ring size is reserved.
+ */
+ queue_size_bytes = 2 * FS_RING_CMPL_SIZE;
+ } else {
+ BCMFS_LOG(ERR, "Invalid queue selection");
+ return -EINVAL;
+ }
+
+ queue->q_type = qtype;
+
+ /*
+ * Allocate a memzone for the queue - create a unique name.
+ */
+ snprintf(queue->memz_name, sizeof(queue->memz_name),
+ "%s_%d_%s_%d_%s", "bcmfs", qtype, "qp_mem",
+ queue_pair_id, q_name);
+ qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+ 0, align);
+ if (qp_mz == NULL) {
+ BCMFS_LOG(ERR, "Failed to allocate ring memzone");
+ return -ENOMEM;
+ }
+
+ if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
+ BCMFS_LOG(ERR, "Invalid alignment on queue create "
+ " 0x%" PRIx64,
+ qp_mz->iova);
+ ret = -EFAULT;
+ goto queue_create_err;
+ }
+
+ queue->base_addr = (char *)qp_mz->addr;
+ queue->base_phys_addr = qp_mz->iova;
+ queue->queue_size = queue_size_bytes;
+
+ return 0;
+
+queue_create_err:
+ rte_memzone_free(qp_mz);
+
+ return ret;
+}
+
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr)
+{
+ struct bcmfs_qp *qp = *qp_addr;
+
+ if (qp == NULL) {
+ BCMFS_LOG(DEBUG, "qp already freed");
+ return 0;
+ }
+
+ /* Don't free memory if there are still responses to be processed */
+ if ((qp->stats.enqueued_count - qp->stats.dequeued_count) == 0) {
+ /* Stop the h/w ring */
+ qp->ops->stopq(qp);
+ /* Delete the queue pairs */
+ bcmfs_queue_delete(&qp->tx_q, qp->qpair_id);
+ bcmfs_queue_delete(&qp->cmpl_q, qp->qpair_id);
+ } else {
+ return -EAGAIN;
+ }
+
+ rte_bitmap_reset(qp->ctx_bmp);
+ rte_free(qp->ctx_bmp_mem);
+ rte_free(qp->ctx_pool);
+
+ rte_free(qp);
+ *qp_addr = NULL;
+
+ return 0;
+}
+
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *qp_conf)
+{
+ struct bcmfs_qp *qp;
+ uint32_t bmp_size;
+ uint32_t nb_descriptors = qp_conf->nb_descriptors;
+ uint16_t i;
+ int rc;
+
+ if (nb_descriptors < FS_RM_MIN_REQS) {
+ BCMFS_LOG(ERR, "Can't create qp for %u descriptors",
+ nb_descriptors);
+ return -EINVAL;
+ }
+
+ if (nb_descriptors > FS_RM_MAX_REQS)
+ nb_descriptors = FS_RM_MAX_REQS;
+
+ if (qp_conf->iobase == NULL) {
+ BCMFS_LOG(ERR, "IO config space is null");
+ return -EINVAL;
+ }
+
+ qp = rte_zmalloc_socket("BCM FS PMD qp metadata",
+ sizeof(*qp), RTE_CACHE_LINE_SIZE,
+ qp_conf->socket_id);
+ if (qp == NULL) {
+ BCMFS_LOG(ERR, "Failed to alloc mem for qp struct");
+ return -ENOMEM;
+ }
+
+ qp->qpair_id = queue_pair_id;
+ qp->ioreg = qp_conf->iobase;
+ qp->nb_descriptors = nb_descriptors;
+
+ qp->stats.enqueued_count = 0;
+ qp->stats.dequeued_count = 0;
+
+ rc = bcmfs_queue_create(&qp->tx_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_TXQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Tx queue create failed queue_pair_id %u",
+ queue_pair_id);
+ goto create_err;
+ }
+
+ rc = bcmfs_queue_create(&qp->cmpl_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_CPLQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Cmpl queue create failed queue_pair_id= %u",
+ queue_pair_id);
+ goto q_create_err;
+ }
+
+ /* ctx saving bitmap */
+ bmp_size = rte_bitmap_get_memory_footprint(nb_descriptors);
+
+ /* Allocate memory for bitmap */
+ qp->ctx_bmp_mem = rte_zmalloc("ctx_bmp_mem", bmp_size,
+ RTE_CACHE_LINE_SIZE);
+ if (qp->ctx_bmp_mem == NULL) {
+ rc = -ENOMEM;
+ goto qp_create_err;
+ }
+
+ /* Initialize pool resource bitmap array */
+ qp->ctx_bmp = rte_bitmap_init(nb_descriptors, qp->ctx_bmp_mem,
+ bmp_size);
+ if (qp->ctx_bmp == NULL) {
+ rc = -EINVAL;
+ goto bmap_mem_free;
+ }
+
+ /* Mark all pools available */
+ for (i = 0; i < nb_descriptors; i++)
+ rte_bitmap_set(qp->ctx_bmp, i);
+
+ /* Allocate memory for context */
+ qp->ctx_pool = rte_zmalloc("qp_ctx_pool",
+ sizeof(unsigned long) *
+ nb_descriptors, 0);
+ if (qp->ctx_pool == NULL) {
+ BCMFS_LOG(ERR, "ctx allocation pool fails");
+ rc = -ENOMEM;
+ goto bmap_free;
+ }
+
+ /* Start h/w ring */
+ qp->ops->startq(qp);
+
+ *qp_addr = qp;
+
+ return 0;
+
+bmap_free:
+ rte_bitmap_reset(qp->ctx_bmp);
+bmap_mem_free:
+ rte_free(qp->ctx_bmp_mem);
+qp_create_err:
+ bcmfs_queue_delete(&qp->cmpl_q, queue_pair_id);
+q_create_err:
+ bcmfs_queue_delete(&qp->tx_q, queue_pair_id);
+create_err:
+ rte_free(qp);
+
+ return rc;
+}
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ register uint32_t nb_ops_sent = 0;
+ uint16_t nb_ops_possible = nb_ops;
+ int ret;
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ while (nb_ops_sent != nb_ops_possible) {
+ ret = tmp_qp->ops->enq_one_req(qp, *ops);
+ if (ret != 0) {
+ tmp_qp->stats.enqueue_err_count++;
+ /* This message cannot be enqueued */
+ if (nb_ops_sent == 0)
+ return 0;
+ goto ring_db;
+ }
+
+ ops++;
+ nb_ops_sent++;
+ }
+
+ring_db:
+ tmp_qp->stats.enqueued_count += nb_ops_sent;
+ tmp_qp->ops->ring_db(tmp_qp);
+
+ return nb_ops_sent;
+}
+
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ uint32_t deq = tmp_qp->ops->dequeue(tmp_qp, ops, nb_ops);
+
+ tmp_qp->stats.dequeued_count += deq;
+
+ return deq;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
new file mode 100644
index 0000000000..52c487956e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_QP_H_
+#define _BCMFS_QP_H_
+
+#include <rte_memzone.h>
+
+/* Maximum number of h/w queues supported by device */
+#define BCMFS_MAX_HW_QUEUES 32
+
+/* H/W queue IO address space len */
+#define BCMFS_HW_QUEUE_IO_ADDR_LEN (64 * 1024)
+
+/* Maximum size of device ops name */
+#define BCMFS_HW_OPS_NAMESIZE 32
+
+enum bcmfs_queue_type {
+ /* TX or submission queue */
+ BCMFS_RM_TXQ,
+ /* Completion or receive queue */
+ BCMFS_RM_CPLQ
+};
+
+struct bcmfs_qp_stats {
+ /* Count of all operations enqueued */
+ uint64_t enqueued_count;
+ /* Count of all operations dequeued */
+ uint64_t dequeued_count;
+ /* Total error count on operations enqueued */
+ uint64_t enqueue_err_count;
+ /* Total error count on operations dequeued */
+ uint64_t dequeue_err_count;
+};
+
+struct bcmfs_qp_config {
+ /* Socket to allocate memory on */
+ int socket_id;
+ /* Mapped iobase for qp */
+ void *iobase;
+ /* Number of descriptors (requests) a h/w queue can accommodate */
+ uint16_t nb_descriptors;
+ /* Maximum number of h/w descriptors needed by a request */
+ uint16_t max_descs_req;
+};
+
+struct bcmfs_queue {
+ /* Base virt address */
+ void *base_addr;
+ /* Base iova */
+ rte_iova_t base_phys_addr;
+ /* Queue type */
+ enum bcmfs_queue_type q_type;
+ /* Queue size based on nb_descriptors and max_descs_reqs */
+ uint32_t queue_size;
+ union {
+ /* s/w pointer for tx h/w queue*/
+ uint32_t tx_write_ptr;
+ /* s/w pointer for completion h/w queue*/
+ uint32_t cmpl_read_ptr;
+ };
+ /* Memzone name */
+ char memz_name[RTE_MEMZONE_NAMESIZE];
+};
+
+struct bcmfs_qp {
+ /* Queue-pair ID */
+ uint16_t qpair_id;
+ /* Mapped IO address */
+ void *ioreg;
+ /* A TX queue */
+ struct bcmfs_queue tx_q;
+ /* A Completion queue */
+ struct bcmfs_queue cmpl_q;
+ /* Number of requests queue can accommodate */
+ uint32_t nb_descriptors;
+ /* Number of pending requests enqueued to the h/w queue */
+ uint16_t nb_pending_requests;
+ /* A pool which acts as a hash for <request-ID, virt address> pairs */
+ unsigned long *ctx_pool;
+ /* virt address for mem allocated for bitmap */
+ void *ctx_bmp_mem;
+ /* Bitmap */
+ struct rte_bitmap *ctx_bmp;
+ /* Associated stats */
+ struct bcmfs_qp_stats stats;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
+
+} __rte_cache_aligned;
+
+/* Structure defining h/w queue pair operations */
+struct bcmfs_hw_queue_pair_ops {
+ /* ops name */
+ char name[BCMFS_HW_OPS_NAMESIZE];
+ /* Enqueue an object */
+ int (*enq_one_req)(struct bcmfs_qp *qp, void *obj);
+ /* Ring doorbell */
+ void (*ring_db)(struct bcmfs_qp *qp);
+ /* Dequeue objects */
+ uint16_t (*dequeue)(struct bcmfs_qp *qp, void **obj,
+ uint16_t nb_ops);
+ /* Start the h/w queue */
+ int (*startq)(struct bcmfs_qp *qp);
+ /* Stop the h/w queue */
+ void (*stopq)(struct bcmfs_qp *qp);
+};
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr);
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *bcmfs_conf);
+
+#endif /* _BCMFS_QP_H_ */
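The `bcmfs_hw_queue_pair_ops` struct above is a classic C vtable: the generic qp code calls through the function pointers, so the same enqueue/dequeue path works for any registered h/w generation. A minimal standalone model of that dispatch pattern (the `toy_*` names are hypothetical, not part of the driver):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define OPS_NAMESIZE 32

struct toy_qp;

/* Trimmed-down vtable, same shape as bcmfs_hw_queue_pair_ops. */
struct toy_ops {
	char name[OPS_NAMESIZE];
	int (*enq_one_req)(struct toy_qp *qp, void *obj);
	void (*ring_db)(struct toy_qp *qp);
};

struct toy_qp {
	const struct toy_ops *ops;	/* bound at qp setup time */
	int enqueued;
	int db;
};

static int toy_enq(struct toy_qp *qp, void *obj)
{
	(void)obj;
	qp->enqueued++;
	return 0;
}

static void toy_db(struct toy_qp *qp)
{
	qp->db++;
}

/* One h/w generation's callback set, analogous to bcmfs4_qp_ops. */
static const struct toy_ops toy_hw_ops = { "toy", toy_enq, toy_db };
```

Generic code then only ever writes `qp->ops->enq_one_req(qp, op)`, exactly as `bcmfs_enqueue_op_burst()` does, and never needs to know which generation it is driving.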
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index fd39eba20e..7e2bcbf14b 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -7,5 +7,6 @@ deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
- 'bcmfs_vfio.c'
+ 'bcmfs_vfio.c',
+ 'bcmfs_qp.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v4 4/8] crypto/bcmfs: add HW queue pair operations
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (2 preceding siblings ...)
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 3/8] crypto/bcmfs: add queue pair management API Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
` (4 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair operations exported by supported devices.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 51 ++
drivers/crypto/bcmfs/bcmfs_device.h | 16 +
drivers/crypto/bcmfs/bcmfs_qp.c | 1 +
drivers/crypto/bcmfs/bcmfs_qp.h | 4 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 743 ++++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 +++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 51 ++
drivers/crypto/bcmfs/meson.build | 5 +-
10 files changed, 1658 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
diff --git a/drivers/crypto/bcmfs/bcmfs_dev_msg.h b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
new file mode 100644
index 0000000000..5b50bde35a
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_MSG_H_
+#define _BCMFS_DEV_MSG_H_
+
+#define MAX_SRC_ADDR_BUFFERS 8
+#define MAX_DST_ADDR_BUFFERS 3
+
+struct bcmfs_qp_message {
+ /** Physical address of each source */
+ uint64_t srcs_addr[MAX_SRC_ADDR_BUFFERS];
+ /** Length of each source */
+ uint32_t srcs_len[MAX_SRC_ADDR_BUFFERS];
+ /** Total number of sources */
+ unsigned int srcs_count;
+ /** Physical address of each destination */
+ uint64_t dsts_addr[MAX_DST_ADDR_BUFFERS];
+ /** Length of each destination */
+ uint32_t dsts_len[MAX_DST_ADDR_BUFFERS];
+ /** Total number of destinations */
+ unsigned int dsts_count;
+
+ void *ctx;
+};
+
+#endif /* _BCMFS_DEV_MSG_H_ */
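A caller builds a `bcmfs_qp_message` by filling the parallel address/length arrays and bumping the counts. A hedged sketch of that, reusing the struct layout above verbatim; the `msg_add_src()` helper is illustrative and does not exist in the driver:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_SRC_ADDR_BUFFERS 8
#define MAX_DST_ADDR_BUFFERS 3

/* Same layout as drivers/crypto/bcmfs/bcmfs_dev_msg.h */
struct bcmfs_qp_message {
	uint64_t srcs_addr[MAX_SRC_ADDR_BUFFERS];
	uint32_t srcs_len[MAX_SRC_ADDR_BUFFERS];
	unsigned int srcs_count;
	uint64_t dsts_addr[MAX_DST_ADDR_BUFFERS];
	uint32_t dsts_len[MAX_DST_ADDR_BUFFERS];
	unsigned int dsts_count;
	void *ctx;
};

/* Append one source buffer; returns 0, or -1 when the table is full. */
static int msg_add_src(struct bcmfs_qp_message *m, uint64_t pa, uint32_t len)
{
	if (m->srcs_count >= MAX_SRC_ADDR_BUFFERS)
		return -1;
	m->srcs_addr[m->srcs_count] = pa;
	m->srcs_len[m->srcs_count] = len;
	m->srcs_count++;
	return 0;
}
```

The `ctx` pointer is what the completion path hands back: the enqueue side stashes the message pointer in `qp->ctx_pool[reqid]`, and dequeue recovers it from the request ID in the completion descriptor.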
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index a01a5c79d5..07423d3cc1 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -44,6 +44,47 @@ static struct bcmfs_device_attr dev_table[] = {
}
};
+struct bcmfs_hw_queue_pair_ops_table bcmfs_hw_queue_pair_ops_table = {
+ .tl = RTE_SPINLOCK_INITIALIZER,
+ .num_ops = 0
+};
+
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *h)
+{
+ struct bcmfs_hw_queue_pair_ops *ops;
+ int16_t ops_index;
+
+ rte_spinlock_lock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ if (h->enq_one_req == NULL || h->dequeue == NULL ||
+ h->ring_db == NULL || h->startq == NULL || h->stopq == NULL) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR,
+ "Missing callback while registering device ops");
+ return -EINVAL;
+ }
+
+ if (strlen(h->name) >= sizeof(ops->name) - 1) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR, "%s(): fs device_ops <%s>: name too long",
+ __func__, h->name);
+ return -EEXIST;
+ }
+
+ ops_index = bcmfs_hw_queue_pair_ops_table.num_ops++;
+ ops = &bcmfs_hw_queue_pair_ops_table.qp_ops[ops_index];
+ strlcpy(ops->name, h->name, sizeof(ops->name));
+ ops->enq_one_req = h->enq_one_req;
+ ops->dequeue = h->dequeue;
+ ops->ring_db = h->ring_db;
+ ops->startq = h->startq;
+ ops->stopq = h->stopq;
+
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ return ops_index;
+}
+
TAILQ_HEAD(fsdev_list, bcmfs_device);
static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
@@ -54,6 +95,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
enum bcmfs_device_type dev_type __rte_unused)
{
struct bcmfs_device *fsdev;
+ uint32_t i;
fsdev = rte_calloc(__func__, 1, sizeof(*fsdev), 0);
if (!fsdev)
@@ -69,6 +111,15 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
goto cleanup;
}
+ /* check if registered ops name is present in directory path */
+ for (i = 0; i < bcmfs_hw_queue_pair_ops_table.num_ops; i++)
+ if (strstr(dirpath,
+ bcmfs_hw_queue_pair_ops_table.qp_ops[i].name))
+ fsdev->sym_hw_qp_ops =
+ &bcmfs_hw_queue_pair_ops_table.qp_ops[i];
+ if (!fsdev->sym_hw_qp_ops)
+ goto cleanup;
+
strcpy(fsdev->dirname, dirpath);
strcpy(fsdev->name, devname);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index dede5b82dc..2fb8eed143 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -8,6 +8,7 @@
#include <sys/queue.h>
+#include <rte_spinlock.h>
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
@@ -31,6 +32,19 @@ enum bcmfs_device_type {
BCMFS_UNKNOWN
};
+ /* A table to store registered queue pair operations */
+struct bcmfs_hw_queue_pair_ops_table {
+ rte_spinlock_t tl;
+ /* Number of used ops structs in the table. */
+ uint32_t num_ops;
+ /* Storage for all possible ops structs. */
+ struct bcmfs_hw_queue_pair_ops qp_ops[BCMFS_MAX_NODES];
+};
+
+/* HW queue pair ops register function */
+int
+bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *qp_ops);
+
struct bcmfs_device {
TAILQ_ENTRY(bcmfs_device) next;
/* Directory path for vfio */
@@ -49,6 +63,8 @@ struct bcmfs_device {
uint16_t max_hw_qps;
/* current qpairs in use */
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
+ /* queue pair ops exported by symmetric crypto hw */
+ struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
};
#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index 864e7bb746..ec1327b780 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -227,6 +227,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
qp->qpair_id = queue_pair_id;
qp->ioreg = qp_conf->iobase;
qp->nb_descriptors = nb_descriptors;
+ qp->ops = qp_conf->ops;
qp->stats.enqueued_count = 0;
qp->stats.dequeued_count = 0;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 52c487956e..59785865b0 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -44,6 +44,8 @@ struct bcmfs_qp_config {
uint16_t nb_descriptors;
/* Maximum number of h/w descriptors needed by a request */
uint16_t max_descs_req;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
};
struct bcmfs_queue {
@@ -61,6 +63,8 @@ struct bcmfs_queue {
/* s/w pointer for completion h/w queue*/
uint32_t cmpl_read_ptr;
};
+ /* Number of in-flight descriptors accumulated before the next doorbell ring */
+ uint16_t descs_inflight;
/* Memzone name */
char memz_name[RTE_MEMZONE_NAMESIZE];
};
diff --git a/drivers/crypto/bcmfs/hw/bcmfs4_rm.c b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
new file mode 100644
index 0000000000..aec1089637
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
@@ -0,0 +1,743 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_rm_common.h"
+
+/* FS4 configuration */
+#define RING_BD_TOGGLE_INVALID(offset) \
+ (((offset) >> FS_RING_BD_ALIGN_ORDER) & 0x1)
+#define RING_BD_TOGGLE_VALID(offset) \
+ (!RING_BD_TOGGLE_INVALID(offset))
+
+#define RING_VER_MAGIC 0x76303031
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDR 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << FS_RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_CMPL_ALIGN_ORDER) & 0x7ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* ====== Broadcom FS4-RM ring descriptor defines ===== */
+
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 58
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 58
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 58
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs4_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return (type == NPTR_TYPE) ? true : false;
+}
+
+static uint64_t
+bcmfs4_next_table_desc(uint32_t toggle, uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NPTR_TOGGLE_SHIFT, NPTR_TOGGLE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_null_desc(uint32_t toggle)
+{
+ return (rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NULL_TOGGLE_SHIFT, NULL_TOGGLE_MASK));
+}
+
+static void
+bcmfs4_flip_header_toggle(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+
+ if (desc & ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT))
+ desc &= ~((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+ else
+ desc |= ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+
+ rm_write_desc(desc_ptr, desc);
+}
+
+static uint64_t
+bcmfs4_header_desc(uint32_t toggle, uint32_t startpkt,
+ uint32_t endpkt, uint32_t bdcount,
+ uint32_t flags, uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, HEADER_TOGGLE_SHIFT, HEADER_TOGGLE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT,
+ HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static void
+bcmfs4_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, uint32_t *toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _toggle, _startpkt, _endpkt, _bdcount;
+
+ /*
+ * Each request or packet starts with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is represented by BDCOUNT field
+ * of HEADER descriptor. The max value of BDCOUNT field is 31 which
+ * means we can only have 31 non-HEADER descriptors following one
+ * HEADER descriptor.
+ *
+ * In general use, the number of non-HEADER descriptors can easily go
+ * beyond 31. To tackle this situation, we have packet (or request)
+ * extension bits (STARTPKT and ENDPKT) in the HEADER descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of request
+ * (or packet) will have STARTPKT=1 and ENDPKT=0. The intermediate
+ * HEADER descriptors will have STARTPKT=0 and ENDPKT=0. The last
+ * HEADER descriptor will have STARTPKT=0 and ENDPKT=1. Also, the
+ * TOGGLE bit of the first HEADER will be set to invalid state to
+ * ensure that FlexDMA engine does not start fetching descriptors
+ * till all descriptors are enqueued. The user of this function
+ * will flip the TOGGLE bit of first HEADER after all descriptors
+ * are enqueued.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _toggle = (nhpos == 0) ? !(*toggle) : (*toggle);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs4_header_desc(_toggle, _startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+}
+
+static uint64_t
+bcmfs4_src_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_msrc_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_dst_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_mdst_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs4_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static uint32_t
+estimate_nonheader_desc_count(struct bcmfs_qp_message *msg)
+{
+ uint32_t cnt = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+
+ while (src < msg->srcs_count ||
+ dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ cnt++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+ while (dst_target && dst < msg->dsts_count) {
+ cnt++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++;
+ }
+ }
+
+ return cnt;
+}
+
+static void *
+bcmfs4_enqueue_msg(struct bcmfs_qp_message *msg,
+ uint32_t nhcnt, uint32_t reqid,
+ void *desc_ptr, uint32_t toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhpos = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+ void *orig_desc_ptr = desc_ptr;
+
+ if (!desc_ptr || !start_desc || !end_desc)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ while (src < msg->srcs_count || dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ if (msg->srcs_len[src] & 0xf) {
+ d = bcmfs4_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ } else {
+ d = bcmfs4_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+
+ while (dst_target && (dst < msg->dsts_count)) {
+ if (msg->dsts_len[dst] & 0xf) {
+ d = bcmfs4_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ } else {
+ d = bcmfs4_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++; /* for next buffer */
+ }
+ }
+
+ /* Null descriptor with invalid toggle bit */
+ rm_write_desc(desc_ptr, bcmfs4_null_desc(!toggle));
+
+ /* Ensure that descriptors have been written to memory */
+ rte_smp_wmb();
+
+ bcmfs4_flip_header_toggle(orig_desc_ptr);
+
+ return desc_ptr;
+}
+
+static int
+bcmfs4_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ int reqid;
+ void *next;
+ uint32_t nhcnt;
+ int ret = 0;
+ uint32_t pos = 0;
+ uint64_t slab = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs4_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /*
+ * Number of required descriptors = number of non-header descriptors +
+ * number of header descriptors +
+ * 1x null descriptor
+ */
+ nhcnt = estimate_nonheader_desc_count(msg);
+
+ /* Write descriptors to ring */
+ next = bcmfs4_enqueue_msg(msg, nhcnt, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ RING_BD_TOGGLE_VALID(txq->tx_write_ptr),
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void
+bcmfs4_ring_doorbell_qp(struct bcmfs_qp *qp __rte_unused)
+{
+ /* no door bell method supported */
+}
+
+static uint16_t
+bcmfs4_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that
+ * we never process more completions than are outstanding.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get current completion read and write offset
+ * Note: We should read completion write pointer at least once
+ * after we get a MSI interrupt because HW maintains internal
+ * MSI status which will allow next MSI interrupt only after
+ * completion write pointer is read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* Ensure completion pointer is read before proceeding */
+ rte_io_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Determine message pointer based on reqid */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs4_start_qp(struct bcmfs_qp *qp)
+{
+ int timeout;
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs4_next_table_desc(RING_BD_TOGGLE_VALID(off),
+ next_addr);
+ else
+ d = bcmfs4_null_desc(RING_BD_TOGGLE_INVALID(off));
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test mid-run (Ctrl+C), all subsequent
+ * runs will fail because the s/w cmpl_read_offset and h/w
+ * cmpl_write_offset end up pointing at different completion BDs.
+ * To handle this, flush all rings at startup rather than in the
+ * shutdown function; a ring flush resets the h/w cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, s/w must read the
+ * completion write pointer, otherwise the DME/AE will not
+ * come out of the flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ val = BD_START_ADDR_VALUE(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_BD_START_ADDR);
+
+ /* BD write pointer will be same as HW write pointer */
+ tx_queue->tx_write_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_BD_WRITE_PTR);
+ tx_queue->tx_write_ptr *= FS_RING_DESC_SIZE;
+
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Program completion start address */
+ val = CMPL_START_ADDR_VALUE(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CMPL_START_ADDR);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ /* We reserve 1k * FS_RING_DESC_SIZE beyond base phys addr for MSI */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id,
+ (uint8_t *)qp->ioreg + RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs4_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs4_qp_ops = {
+ .name = "fs4",
+ .enq_one_req = bcmfs4_enqueue_single_request_qp,
+ .ring_db = bcmfs4_ring_doorbell_qp,
+ .dequeue = bcmfs4_dequeue_qp,
+ .startq = bcmfs4_start_qp,
+ .stopq = bcmfs4_shutdown_qp,
+};
+
+RTE_INIT(bcmfs4_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs4_qp_ops);
+}
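Both ring managers program RING_MSI_CONTROL by OR-ing three fields into one 32-bit word: the interrupt coalescing timer, the enable bit, and the completion count threshold. A standalone sketch of that packing, with the field constants mirrored from the driver defines:

```c
#include <assert.h>
#include <stdint.h>

/* Field layout of RING_MSI_CONTROL, mirrored from the driver */
#define MSI_TIMER_VAL_SHIFT 16
#define MSI_TIMER_VAL_MASK  0xffffu
#define MSI_ENABLE_SHIFT    15
#define MSI_COUNT_SHIFT     0
#define MSI_COUNT_MASK      0x3ffu

/* Pack RING_MSI_CONTROL the way the start-queue path does:
 * maximum coalescing timer, MSI enabled, fire after 1 completion. */
static uint32_t msi_control_value(void)
{
	uint32_t val = 0;

	val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
	val |= (1u << MSI_ENABLE_SHIFT);
	val |= (0x1u & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
	return val;
}
```

The resulting word is what both `bcmfs4_start_qp()` and `bcmfs5_start_qp()` write to the RING_MSI_CONTROL offset.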
diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
new file mode 100644
index 0000000000..86e53051dd
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
@@ -0,0 +1,677 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_qp.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_device.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Ring version */
+#define RING_VER_MAGIC 0x76303032
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDRESS_LSB 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR_LSB 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+#define RING_BD_START_ADDRESS_MSB 0x078
+#define RING_CMPL_START_ADDR_MSB 0x07c
+#define RING_DOORBELL_BD_WRITE_COUNT 0x074
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_CMPL_ALIGN_ORDER) & 0x07ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 59
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 59
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 59
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs5_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return (type == NPTR_TYPE) ? true : false;
+}
+
+static uint64_t
+bcmfs5_next_table_desc(uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_null_desc(void)
+{
+ return rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+}
+
+static uint64_t
+bcmfs5_header_desc(uint32_t startpkt, uint32_t endpkt,
+ uint32_t bdcount, uint32_t flags,
+ uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT, HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static int
+bcmfs5_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, void *start_desc,
+ void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _startpkt, _endpkt, _bdcount;
+ int is_nxt_page = 0;
+
+ /*
+ * Each request (or packet) starts with a HEADER descriptor
+ * followed by one or more non-HEADER descriptors (SRC, SRCT,
+ * MSRC, DST, DSTT, MDST, IMM, and IMMT). The number of
+ * non-HEADER descriptors following a HEADER descriptor is
+ * given by its BDCOUNT field, whose maximum value is 31, so
+ * at most 31 non-HEADER descriptors can follow one HEADER
+ * descriptor.
+ *
+ * In general use, the number of non-HEADER descriptors can
+ * easily exceed 31. To handle this, the HEADER descriptor
+ * carries packet (or request) extension bits (STARTPKT and
+ * ENDPKT).
+ *
+ * With packet extension, the first HEADER descriptor of a
+ * request has STARTPKT=1 and ENDPKT=0, intermediate HEADERs
+ * have STARTPKT=0/ENDPKT=0, and the last has STARTPKT=0/ENDPKT=1.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ if (nhavail <= HEADER_BDCOUNT_MAX) {
+ _endpkt = 0x1;
+ _bdcount = nhavail;
+ } else {
+ _endpkt = 0x0;
+ _bdcount = HEADER_BDCOUNT_MAX;
+ }
+ d = bcmfs5_header_desc(_startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+
+ return is_nxt_page;
+}
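The STARTPKT/ENDPKT/BDCOUNT logic above splits a request's non-HEADER descriptors into groups of at most HEADER_BDCOUNT_MAX (31), each preceded by its own HEADER descriptor. A minimal standalone restatement of how those three fields are computed at each header position (not driver code, just the arithmetic):

```c
#include <assert.h>
#include <stdint.h>

#define HDR_BDCOUNT_MAX 31u	/* mirrors HEADER_BDCOUNT_MAX (0x1f) */

/* For the header emitted at non-header position nhpos (a multiple of 31)
 * out of nhcnt total non-header descriptors, compute STARTPKT, ENDPKT
 * and BDCOUNT exactly as bcmfs5_enqueue_desc() does. */
static void header_fields(uint32_t nhpos, uint32_t nhcnt,
			  uint32_t *startpkt, uint32_t *endpkt,
			  uint32_t *bdcount)
{
	uint32_t nhavail = nhcnt - nhpos;

	*startpkt = (nhpos == 0) ? 1 : 0;
	*endpkt = (nhavail <= HDR_BDCOUNT_MAX) ? 1 : 0;
	*bdcount = (nhavail <= HDR_BDCOUNT_MAX) ? nhavail : HDR_BDCOUNT_MAX;
}
```

For example, a request with 70 non-HEADER descriptors is emitted as three HEADER groups carrying 31, 31 and 8 descriptors; only the first has STARTPKT set and only the last has ENDPKT set.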
+
+static uint64_t
+bcmfs5_src_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_msrc_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_dst_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_mdst_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs5_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
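The sanity check above encodes the descriptor length limits: a buffer whose length is not a multiple of 16 must fit the 16-bit SRC/DST length field, while a 16-byte-multiple length is carried divided by 16 in a mega (MSRC/MDST) descriptor, extending the reach to 16 × 0xffff bytes. A minimal sketch of that rule, with the mask value mirrored from the defines above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define LEN_MASK 0xffffull	/* SRC_LENGTH_MASK / MSRC_LENGTH_MASK */

/* Returns true if a source/destination buffer of this length passes
 * the driver's sanity check: non-16-byte-multiple lengths must fit a
 * SRC/DST descriptor, 16-byte multiples use an MSRC/MDST descriptor
 * whose length field holds length / 16. */
static bool buf_len_ok(uint64_t len)
{
	if (len & 0xf)
		return len <= LEN_MASK;
	return len <= LEN_MASK * 16;
}
```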
+
+static void *
+bcmfs5_enqueue_msg(struct bcmfs_queue *txq,
+ struct bcmfs_qp_message *msg,
+ uint32_t reqid, void *desc_ptr,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ unsigned int src, dst;
+ uint32_t nhpos = 0;
+ int nxt_page = 0;
+ uint32_t nhcnt = msg->srcs_count + msg->dsts_count;
+
+ if (desc_ptr == NULL || start_desc == NULL || end_desc == NULL)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ for (src = 0; src < msg->srcs_count; src++) {
+ if (msg->srcs_len[src] & 0xf)
+ d = bcmfs5_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ else
+ d = bcmfs5_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ for (dst = 0; dst < msg->dsts_count; dst++) {
+ if (msg->dsts_len[dst] & 0xf)
+ d = bcmfs5_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ else
+ d = bcmfs5_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ txq->descs_inflight += nhcnt + 1;
+
+ return desc_ptr;
+}
+
+static int
+bcmfs5_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ void *next;
+ int reqid;
+ int ret = 0;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs5_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /* Write descriptors to ring */
+ next = bcmfs5_enqueue_msg(txq, msg, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void bcmfs5_write_doorbell(struct bcmfs_qp *qp)
+{
+ struct bcmfs_queue *txq = &qp->tx_q;
+
+ /* sync before ringing the doorbell */
+ rte_wmb();
+
+ FS_MMIO_WRITE32(txq->descs_inflight,
+ (uint8_t *)qp->ioreg + RING_DOORBELL_BD_WRITE_COUNT);
+
+ /* reset the count */
+ txq->descs_inflight = 0;
+}
+
+static uint16_t
+bcmfs5_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that
+ * we never process more completions than are outstanding.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get current completion read and write offset
+ *
+ * Note: We should read completion write pointer at least once
+ * after we get a MSI interrupt because HW maintains internal
+ * MSI status which will allow next MSI interrupt only after
+ * completion write pointer is read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* order the cmpl write ptr read before the completion descriptor reads */
+ rte_io_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Retrieve context */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
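The dequeue loop above advances the software completion read offset one descriptor at a time and wraps it back to zero at the end of the completion ring. A small sketch of that offset arithmetic; the descriptor and ring sizes here are illustrative assumptions, not the driver's actual FS_RING_* values:

```c
#include <assert.h>
#include <stdint.h>

#define DESC_SIZE 8u	/* assumed stand-in for FS_RING_DESC_SIZE */
#define CMPL_SIZE 8192u	/* assumed stand-in for FS_RING_CMPL_SIZE */

/* Advance the software completion read offset the way the dequeue
 * loop does: one descriptor at a time, wrapping to 0 at ring end. */
static uint32_t next_cmpl_offset(uint32_t off)
{
	off += DESC_SIZE;
	if (off == CMPL_SIZE)
		off = 0;
	return off;
}
```

The loop terminates when the software read offset catches up with the hardware write offset read from RING_CMPL_WRITE_PTR, or when the budget is exhausted.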
+
+static int
+bcmfs5_start_qp(struct bcmfs_qp *qp)
+{
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ int timeout;
+ uint32_t bd_high, bd_low, cmpl_high, cmpl_low;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs5_next_table_desc(next_addr);
+ else
+ d = bcmfs5_null_desc();
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test run (Ctrl+C), all subsequent
+ * runs would fail because the sw cmpl_read_offset and the hw
+ * cmpl_write_offset would point at different completion BDs.
+ * To handle this, flush all rings at startup instead of in the
+ * shutdown function.
+ * A ring flush resets the hw cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous run was stopped mid-test, sw has to read
+ * cmpl_write_offset, otherwise the DME/AE will not come
+ * out of the flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ bd_low = lower_32_bits(tx_queue->base_phys_addr);
+ bd_high = upper_32_bits(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(bd_low, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_LSB);
+ FS_MMIO_WRITE32(bd_high, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_MSB);
+
+ tx_queue->tx_write_ptr = 0;
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Completion read pointer will be the same as the HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ /* Program completion start address */
+ cmpl_low = lower_32_bits(cmpl_queue->base_phys_addr);
+ cmpl_high = upper_32_bits(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(cmpl_low, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_LSB);
+ FS_MMIO_WRITE32(cmpl_high, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_MSB);
+
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id, (uint8_t *)qp->ioreg +
+ RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs5_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs5_qp_ops = {
+ .name = "fs5",
+ .enq_one_req = bcmfs5_enqueue_single_request_qp,
+ .ring_db = bcmfs5_write_doorbell,
+ .dequeue = bcmfs5_dequeue_qp,
+ .startq = bcmfs5_start_qp,
+ .stopq = bcmfs5_shutdown_qp,
+};
+
+RTE_INIT(bcmfs5_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs5_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
new file mode 100644
index 0000000000..9445d28f92
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Completion descriptor format */
+#define FS_CMPL_OPAQUE_SHIFT 0
+#define FS_CMPL_OPAQUE_MASK 0xffff
+#define FS_CMPL_ENGINE_STATUS_SHIFT 16
+#define FS_CMPL_ENGINE_STATUS_MASK 0xffff
+#define FS_CMPL_DME_STATUS_SHIFT 32
+#define FS_CMPL_DME_STATUS_MASK 0xffff
+#define FS_CMPL_RM_STATUS_SHIFT 48
+#define FS_CMPL_RM_STATUS_MASK 0xffff
+/* Completion RM status code */
+#define FS_RM_STATUS_CODE_SHIFT 0
+#define FS_RM_STATUS_CODE_MASK 0x3ff
+#define FS_RM_STATUS_CODE_GOOD 0x0
+#define FS_RM_STATUS_CODE_AE_TIMEOUT 0x3ff
+
+
+/* Completion DME status code */
+#define FS_DME_STATUS_MEM_COR_ERR BIT(0)
+#define FS_DME_STATUS_MEM_UCOR_ERR BIT(1)
+#define FS_DME_STATUS_FIFO_UNDRFLOW BIT(2)
+#define FS_DME_STATUS_FIFO_OVERFLOW BIT(3)
+#define FS_DME_STATUS_RRESP_ERR BIT(4)
+#define FS_DME_STATUS_BRESP_ERR BIT(5)
+#define FS_DME_STATUS_ERROR_MASK (FS_DME_STATUS_MEM_COR_ERR | \
+ FS_DME_STATUS_MEM_UCOR_ERR | \
+ FS_DME_STATUS_FIFO_UNDRFLOW | \
+ FS_DME_STATUS_FIFO_OVERFLOW | \
+ FS_DME_STATUS_RRESP_ERR | \
+ FS_DME_STATUS_BRESP_ERR)
+
+/* APIs related to ring manager descriptors */
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask)
+{
+ return (val & mask) << shift;
+}
+
+uint64_t
+rm_read_desc(void *desc_ptr)
+{
+ return le64_to_cpu(*((uint64_t *)desc_ptr));
+}
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc)
+{
+ *((uint64_t *)desc_ptr) = cpu_to_le64(desc);
+}
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc)
+{
+ return (uint32_t)(cmpl_desc & FS_CMPL_OPAQUE_MASK);
+}
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc)
+{
+ uint32_t status;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_DME_STATUS_SHIFT,
+ FS_CMPL_DME_STATUS_MASK);
+ if (status & FS_DME_STATUS_ERROR_MASK)
+ return -EIO;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_RM_STATUS_SHIFT,
+ FS_CMPL_RM_STATUS_MASK);
+ status &= FS_RM_STATUS_CODE_MASK;
+ if (status == FS_RM_STATUS_CODE_AE_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
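The two decode helpers above extract fixed fields from the 64-bit completion descriptor: the 16-bit opaque request id, the DME status (any of the six error bits maps to -EIO), and the RM status code (0x3ff maps to -ETIMEDOUT). A self-contained sketch of the same decoding, with the shift/mask constants mirrored from this file:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define DEC(d, s, m)	(((d) >> (s)) & (m))	/* mirrors FS_DESC_DEC */

#define OPAQUE_SHIFT	0
#define OPAQUE_MASK	0xffffull
#define DME_SHIFT	32
#define DME_MASK	0xffffull
#define DME_ERR_MASK	0x3full		/* OR of the six DME error bits */
#define RM_SHIFT	48
#define RM_MASK		0xffffull
#define RM_CODE_MASK	0x3ffull
#define RM_AE_TIMEOUT	0x3ffull

/* Decode the error status the same way rm_cmpl_desc_to_error() does:
 * DME errors take priority, then the RM AE-timeout code. */
static int cmpl_to_error(uint64_t d)
{
	if (DEC(d, DME_SHIFT, DME_MASK) & DME_ERR_MASK)
		return -EIO;
	if ((DEC(d, RM_SHIFT, RM_MASK) & RM_CODE_MASK) == RM_AE_TIMEOUT)
		return -ETIMEDOUT;
	return 0;
}

/* Recover the request id from the opaque field, as
 * rm_cmpl_desc_to_reqid() does. */
static uint32_t cmpl_to_reqid(uint64_t d)
{
	return (uint32_t)DEC(d, OPAQUE_SHIFT, OPAQUE_MASK);
}
```

The request id is the same opaque value the enqueue path packed into the HEADER descriptor, which is how a completion is matched back to its context-pool entry.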
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
new file mode 100644
index 0000000000..e5d30d75c0
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_COMMON_H_
+#define _BCMFS_RM_COMMON_H_
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* 32-bit MMIO register write */
+#define FS_MMIO_WRITE32(value, addr) rte_write32_relaxed((value), (addr))
+/* 32-bit MMIO register read */
+#define FS_MMIO_READ32(addr) rte_read32_relaxed((addr))
+
+/* Descriptor helper macros */
+#define FS_DESC_DEC(d, s, m) (((d) >> (s)) & (m))
+
+#define FS_RING_BD_ALIGN_CHECK(addr) \
+ (!((addr) & ((0x1 << FS_RING_BD_ALIGN_ORDER) - 1)))
+
+#define cpu_to_le64 rte_cpu_to_le_64
+#define cpu_to_le32 rte_cpu_to_le_32
+#define cpu_to_le16 rte_cpu_to_le_16
+
+#define le64_to_cpu rte_le_to_cpu_64
+#define le32_to_cpu rte_le_to_cpu_32
+#define le16_to_cpu rte_le_to_cpu_16
+
+#define lower_32_bits(x) ((uint32_t)(x))
+#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16))
+
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask);
+uint64_t
+rm_read_desc(void *desc_ptr);
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc);
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc);
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc);
+
+#endif /* _BCMFS_RM_COMMON_H_ */
+
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 7e2bcbf14b..cd58bd5e25 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -8,5 +8,8 @@ sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
'bcmfs_vfio.c',
- 'bcmfs_qp.c'
+ 'bcmfs_qp.c',
+ 'hw/bcmfs4_rm.c',
+ 'hw/bcmfs5_rm.c',
+ 'hw/bcmfs_rm_common.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v4 5/8] crypto/bcmfs: create a symmetric cryptodev
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (3 preceding siblings ...)
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 4/8] crypto/bcmfs: add HW queue pair operations Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
` (3 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Create a symmetric crypto device and add supported cryptodev ops.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 15 ++
drivers/crypto/bcmfs/bcmfs_device.h | 6 +
drivers/crypto/bcmfs/bcmfs_qp.c | 37 +++
drivers/crypto/bcmfs/bcmfs_qp.h | 16 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 387 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +++
drivers/crypto/bcmfs/bcmfs_sym_req.h | 22 ++
drivers/crypto/bcmfs/meson.build | 3 +-
8 files changed, 523 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 07423d3cc1..27720e4eb8 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -14,6 +14,7 @@
#include "bcmfs_logs.h"
#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
+#include "bcmfs_sym_pmd.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -240,6 +241,7 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
char out_dirname[BCMFS_MAX_PATH_LEN];
uint32_t fsdev_dev[BCMFS_MAX_NODES];
enum bcmfs_device_type dtype;
+ int err;
int i = 0;
int dev_idx;
int count = 0;
@@ -291,7 +293,20 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
return -ENODEV;
}
+ err = bcmfs_sym_dev_create(fsdev);
+ if (err) {
+ BCMFS_LOG(WARNING,
+ "Failed to create BCMFS SYM PMD for device %s",
+ fsdev->name);
+ goto pmd_create_fail;
+ }
+
return 0;
+
+pmd_create_fail:
+ fsdev_release(fsdev);
+
+ return err;
}
static int
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 2fb8eed143..e5ca866977 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -65,6 +65,12 @@ struct bcmfs_device {
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
/* queue pair ops exported by symmetric crypto hw */
struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
+ /* a cryptodevice attached to bcmfs device */
+ struct rte_cryptodev *cdev;
+ /* a rte_device to register with cryptodev */
+ struct rte_device sym_rte_dev;
+ /* private info to keep with cryptodev */
+ struct bcmfs_sym_dev_private *sym_dev;
};
#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index ec1327b780..cb5ff6c61b 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -344,3 +344,40 @@ bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
return deq;
}
+
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats)
+{
+ int i;
+
+ if (stats == NULL) {
+ BCMFS_LOG(ERR, "invalid param: stats %p",
+ stats);
+ return;
+ }
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+
+ stats->enqueued_count += qp[i]->stats.enqueued_count;
+ stats->dequeued_count += qp[i]->stats.dequeued_count;
+ stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+ stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+ }
+}
+
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp)
+{
+ int i;
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+ memset(&qp[i]->stats, 0, sizeof(qp[i]->stats));
+ }
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 59785865b0..57fe0a93a3 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -24,6 +24,13 @@ enum bcmfs_queue_type {
BCMFS_RM_CPLQ
};
+#define BCMFS_QP_IOBASE_XLATE(base, idx) \
+ ((base) + ((idx) * BCMFS_HW_QUEUE_IO_ADDR_LEN))
+
+/* Max pkts for preprocessing before submitting to h/w qp */
+#define BCMFS_MAX_REQS_BUFF 64
+
+/* qp stats */
struct bcmfs_qp_stats {
/* Count of all operations enqueued */
uint64_t enqueued_count;
@@ -92,6 +99,10 @@ struct bcmfs_qp {
struct bcmfs_qp_stats stats;
/* h/w ops associated with qp */
struct bcmfs_hw_queue_pair_ops *ops;
+ /* bcmfs requests pool*/
+ struct rte_mempool *sr_mp;
+ /* a temporary buffer to keep message pointers */
+ struct bcmfs_qp_message *infl_msgs[BCMFS_MAX_REQS_BUFF];
} __rte_cache_aligned;
@@ -123,4 +134,9 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
uint16_t queue_pair_id,
struct bcmfs_qp_config *bcmfs_conf);
+/* stats functions*/
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats);
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp);
+
#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
new file mode 100644
index 0000000000..0f96915f70
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_req.h"
+
+uint8_t cryptodev_bcmfs_driver_id;
+
+static int bcmfs_sym_qp_release(struct rte_cryptodev *dev,
+ uint16_t queue_pair_id);
+
+static int
+bcmfs_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused struct rte_cryptodev_config *config)
+{
+ return 0;
+}
+
+static int
+bcmfs_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+static int
+bcmfs_sym_dev_close(struct rte_cryptodev *dev)
+{
+ int i, ret;
+
+ for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+ ret = bcmfs_sym_qp_release(dev, i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_info *dev_info)
+{
+ struct bcmfs_sym_dev_private *internals = dev->data->dev_private;
+ struct bcmfs_device *fsdev = internals->fsdev;
+
+ if (dev_info != NULL) {
+ dev_info->driver_id = cryptodev_bcmfs_driver_id;
+ dev_info->feature_flags = dev->feature_flags;
+ dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
+ /* No limit of number of sessions */
+ dev_info->sym.max_nb_sessions = 0;
+ }
+}
+
+static void
+bcmfs_sym_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+ struct bcmfs_qp_stats bcmfs_stats = {0};
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (stats == NULL || dev == NULL) {
+ BCMFS_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_get(fsdev->qps_in_use, fsdev->max_hw_qps, &bcmfs_stats);
+
+ stats->enqueued_count = bcmfs_stats.enqueued_count;
+ stats->dequeued_count = bcmfs_stats.dequeued_count;
+ stats->enqueue_err_count = bcmfs_stats.enqueue_err_count;
+ stats->dequeue_err_count = bcmfs_stats.dequeue_err_count;
+}
+
+static void
+bcmfs_sym_stats_reset(struct rte_cryptodev *dev)
+{
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (dev == NULL) {
+ BCMFS_LOG(ERR, "invalid cryptodev ptr %p", dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_reset(fsdev->qps_in_use, fsdev->max_hw_qps);
+}
+
+static int
+bcmfs_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+ struct bcmfs_sym_dev_private *bcmfs_private = dev->data->dev_private;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)
+ (dev->data->queue_pairs[queue_pair_id]);
+
+ BCMFS_LOG(DEBUG, "Release sym qp %u on device %d",
+ queue_pair_id, dev->data->dev_id);
+
+ rte_mempool_free(qp->sr_mp);
+
+ bcmfs_private->fsdev->qps_in_use[queue_pair_id] = NULL;
+
+ return bcmfs_qp_release((struct bcmfs_qp **)
+ &dev->data->queue_pairs[queue_pair_id]);
+}
+
+static void
+spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
+{
+ memset(sr, 0, sizeof(*sr));
+}
+
+static void
+req_pool_obj_init(__rte_unused struct rte_mempool *mp,
+ __rte_unused void *opaque, void *obj,
+ __rte_unused unsigned int obj_idx)
+{
+ spu_req_init(obj, rte_mempool_virt2iova(obj));
+}
+
+static struct rte_mempool *
+bcmfs_sym_req_pool_create(struct rte_cryptodev *cdev __rte_unused,
+ uint32_t nobjs, uint16_t qp_id,
+ int socket_id)
+{
+ char softreq_pool_name[RTE_RING_NAMESIZE];
+ struct rte_mempool *mp;
+
+ snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_%d",
+ "bcm_sym", qp_id);
+
+ mp = rte_mempool_create(softreq_pool_name,
+ RTE_ALIGN_MUL_CEIL(nobjs, 64),
+ sizeof(struct bcmfs_sym_request),
+ 64, 0, NULL, NULL, req_pool_obj_init, NULL,
+ socket_id, 0);
+ if (mp == NULL)
+ BCMFS_LOG(ERR, "Failed to create req pool, qid %d, err %d",
+ qp_id, rte_errno);
+
+ return mp;
+}
+
+static int
+bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
+ const struct rte_cryptodev_qp_conf *qp_conf,
+ int socket_id)
+{
+ int ret = 0;
+ struct bcmfs_qp *qp = NULL;
+ struct bcmfs_qp_config bcmfs_qp_conf;
+
+ struct bcmfs_qp **qp_addr =
+ (struct bcmfs_qp **)&cdev->data->queue_pairs[qp_id];
+ struct bcmfs_sym_dev_private *bcmfs_private = cdev->data->dev_private;
+ struct bcmfs_device *fsdev = bcmfs_private->fsdev;
+
+
+ /* If qp is already in use free ring memory and qp metadata. */
+ if (*qp_addr != NULL) {
+ ret = bcmfs_sym_qp_release(cdev, qp_id);
+ if (ret < 0)
+ return ret;
+ }
+
+ if (qp_id >= fsdev->max_hw_qps) {
+ BCMFS_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+ return -EINVAL;
+ }
+
+ bcmfs_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+ bcmfs_qp_conf.socket_id = socket_id;
+ bcmfs_qp_conf.max_descs_req = BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ;
+ bcmfs_qp_conf.iobase = BCMFS_QP_IOBASE_XLATE(fsdev->mmap_addr, qp_id);
+ bcmfs_qp_conf.ops = fsdev->sym_hw_qp_ops;
+
+ ret = bcmfs_qp_setup(qp_addr, qp_id, &bcmfs_qp_conf);
+ if (ret != 0)
+ return ret;
+
+ qp = (struct bcmfs_qp *)*qp_addr;
+
+ qp->sr_mp = bcmfs_sym_req_pool_create(cdev, qp_conf->nb_descriptors,
+ qp_id, socket_id);
+ if (qp->sr_mp == NULL)
+ return -ENOMEM;
+
+ /* store a link to the qp in the bcmfs_device */
+ bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;
+
+ cdev->data->queue_pairs[qp_id] = qp;
+ BCMFS_LOG(NOTICE, "queue %d setup done", qp_id);
+
+ return 0;
+}
+
+static struct rte_cryptodev_ops crypto_bcmfs_ops = {
+ /* Device related operations */
+ .dev_configure = bcmfs_sym_dev_config,
+ .dev_start = bcmfs_sym_dev_start,
+ .dev_stop = bcmfs_sym_dev_stop,
+ .dev_close = bcmfs_sym_dev_close,
+ .dev_infos_get = bcmfs_sym_dev_info_get,
+ /* Stats Collection */
+ .stats_get = bcmfs_sym_stats_get,
+ .stats_reset = bcmfs_sym_stats_reset,
+ /* Queue-Pair management */
+ .queue_pair_setup = bcmfs_sym_qp_setup,
+ .queue_pair_release = bcmfs_sym_qp_release,
+};
+
+/** Enqueue burst */
+static uint16_t
+bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i, j;
+ uint16_t enq = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
+
+ if (nb_ops == 0)
+ return 0;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ /* We do not process more than available space */
+ if (nb_ops > (qp->nb_descriptors - qp->nb_pending_requests))
+ nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
+
+ for (i = 0; i < nb_ops; i++) {
+ if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
+ goto enqueue_err;
+
+ /* save rte_crypto_op */
+ sreq->op = ops[i];
+
+ /* save context */
+ qp->infl_msgs[i] = &sreq->msgs;
+ qp->infl_msgs[i]->ctx = (void *)sreq;
+ }
+ /* Send burst request to hw QP */
+ enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
+
+ for (j = enq; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+
+enqueue_err:
+ for (j = 0; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+}
+
+static uint16_t
+bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i;
+ uint16_t deq = 0;
+ unsigned int pkts = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = queue_pair;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ deq = bcmfs_dequeue_op_burst(qp, (void **)qp->infl_msgs, nb_ops);
+ /* get rte_crypto_ops */
+ for (i = 0; i < deq; i++) {
+ sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+
+ ops[pkts++] = sreq->op;
+
+ rte_mempool_put(qp->sr_mp, sreq);
+ }
+
+ return pkts;
+}
+
+/*
+ * An rte_driver is needed in the registration of both the
+ * device and the driver with cryptodev.
+ */
+static const char bcmfs_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_BCMFS_SYM_PMD);
+static const struct rte_driver cryptodev_bcmfs_sym_driver = {
+ .name = bcmfs_sym_drv_name,
+ .alias = bcmfs_sym_drv_name
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .socket_id = rte_socket_id(),
+ .private_data_size = sizeof(struct bcmfs_sym_dev_private)
+ };
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct rte_cryptodev *cryptodev;
+ struct bcmfs_sym_dev_private *internals;
+
+ snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
+ fsdev->name, "sym");
+
+ /* Populate subset device to use in cryptodev device creation */
+ fsdev->sym_rte_dev.driver = &cryptodev_bcmfs_sym_driver;
+ fsdev->sym_rte_dev.numa_node = 0;
+ fsdev->sym_rte_dev.devargs = NULL;
+
+ cryptodev = rte_cryptodev_pmd_create(name,
+ &fsdev->sym_rte_dev,
+ &init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ fsdev->sym_rte_dev.name = cryptodev->data->name;
+ cryptodev->driver_id = cryptodev_bcmfs_driver_id;
+ cryptodev->dev_ops = &crypto_bcmfs_ops;
+
+ cryptodev->enqueue_burst = bcmfs_sym_pmd_enqueue_op_burst;
+ cryptodev->dequeue_burst = bcmfs_sym_pmd_dequeue_op_burst;
+
+ cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+
+ internals = cryptodev->data->dev_private;
+ internals->fsdev = fsdev;
+ fsdev->sym_dev = internals;
+
+ internals->sym_dev_id = cryptodev->data->dev_id;
+
+ BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
+ cryptodev->data->name, internals->sym_dev_id);
+ return 0;
+}
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev *cryptodev;
+
+ if (fsdev == NULL)
+ return -ENODEV;
+ if (fsdev->sym_dev == NULL)
+ return 0;
+
+ /* free crypto device */
+ cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id);
+ rte_cryptodev_pmd_destroy(cryptodev);
+ fsdev->sym_rte_dev.name = NULL;
+ fsdev->sym_dev = NULL;
+
+ return 0;
+}
+
+static struct cryptodev_driver bcmfs_crypto_drv;
+RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv,
+ cryptodev_bcmfs_sym_driver,
+ cryptodev_bcmfs_driver_id);
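The enqueue path above first clamps the burst to both the temporary message buffer (BCMFS_MAX_REQS_BUFF) and the remaining descriptor space, then rolls back any mempool objects the hardware queue did not accept. A minimal standalone sketch of that clamping logic (plain C, illustrative helper name, not a DPDK API):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_REQS_BUFF 64	/* mirrors BCMFS_MAX_REQS_BUFF */

/* Clamp a requested burst size to the per-call message buffer and to
 * the free descriptor space (nb_descriptors - nb_pending), as
 * bcmfs_sym_pmd_enqueue_op_burst() does before touching the mempool. */
static uint16_t clamp_burst(uint16_t nb_ops, uint16_t nb_descriptors,
			    uint16_t nb_pending)
{
	if (nb_ops > MAX_REQS_BUFF)
		nb_ops = MAX_REQS_BUFF;
	if (nb_ops > (uint16_t)(nb_descriptors - nb_pending))
		nb_ops = nb_descriptors - nb_pending;
	return nb_ops;
}
```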
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
new file mode 100644
index 0000000000..65d7046090
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_PMD_H_
+#define _BCMFS_SYM_PMD_H_
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_device.h"
+
+#define CRYPTODEV_NAME_BCMFS_SYM_PMD crypto_bcmfs
+
+#define BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ 16
+
+extern uint8_t cryptodev_bcmfs_driver_id;
+
+/** private data structure for a BCMFS device.
+ * This BCMFS device is a device offering only symmetric crypto service,
+ * there can be one of these on each bcmfs_pci_device (VF).
+ */
+struct bcmfs_sym_dev_private {
+ /* The bcmfs device hosting the service */
+ struct bcmfs_device *fsdev;
+ /* Device instance for this rte_cryptodev */
+ uint8_t sym_dev_id;
+ /* BCMFS device symmetric crypto capabilities */
+ const struct rte_cryptodev_capabilities *fsdev_capabilities;
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fdev);
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fdev);
+
+#endif /* _BCMFS_SYM_PMD_H_ */
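The request-pool sizing in bcmfs_sym_req_pool_create() rounds the descriptor count up to a multiple of 64 via RTE_ALIGN_MUL_CEIL. A minimal standalone sketch of that rounding (plain C, no DPDK headers; the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Round v up to the next multiple of mul, mirroring what
 * RTE_ALIGN_MUL_CEIL(nobjs, 64) computes for the request pool size. */
static uint32_t align_mul_ceil(uint32_t v, uint32_t mul)
{
	return ((v + mul - 1) / mul) * mul;
}
```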
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
new file mode 100644
index 0000000000..0f0b051f1e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_REQ_H_
+#define _BCMFS_SYM_REQ_H_
+
+#include "bcmfs_dev_msg.h"
+
+/*
+ * This structure hold the supportive data required to process a
+ * rte_crypto_op
+ */
+struct bcmfs_sym_request {
+ /* bcmfs qp message for h/w queues to process */
+ struct bcmfs_qp_message msgs;
+ /* crypto op */
+ struct rte_crypto_op *op;
+};
+
+#endif /* _BCMFS_SYM_REQ_H_ */
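The bcmfs_sym_request layout above supports the ctx back-pointer pattern used by the enqueue/dequeue paths: the hardware queue only ever sees the embedded bcmfs_qp_message, and its ctx field lets the dequeue path recover the enclosing request (and from it, the rte_crypto_op). A standalone sketch of the pattern with simplified, illustrative stand-in types:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for bcmfs_qp_message / bcmfs_sym_request. */
struct qp_message {
	void *ctx;		/* back-pointer to the enclosing request */
};

struct sym_request {
	struct qp_message msgs;	/* the only part the h/w queue sees */
	int op;			/* stand-in for struct rte_crypto_op * */
};

/* On enqueue the driver stores the request in msgs.ctx; on dequeue it
 * recovers the request from the completed message alone. */
static struct sym_request *request_from_msg(struct qp_message *m)
{
	return (struct sym_request *)m->ctx;
}
```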
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index cd58bd5e25..d9a3d73e99 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -11,5 +11,6 @@ sources = files(
'bcmfs_qp.c',
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
- 'hw/bcmfs_rm_common.c'
+ 'hw/bcmfs_rm_common.c',
+ 'bcmfs_sym_pmd.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v4 6/8] crypto/bcmfs: add session handling and capabilities
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (4 preceding siblings ...)
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 7/8] crypto/bcmfs: add crypto HW module Vikas Gupta
` (2 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add session handling and capabilities supported by the crypto h/w
accelerator.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
doc/guides/cryptodevs/bcmfs.rst | 47 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 ++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 34 +
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 13 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 282 +++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 109 +++
drivers/crypto/bcmfs/meson.build | 4 +-
9 files changed, 1324 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index 6b68673df0..f7e15f4cfb 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -15,6 +15,47 @@ Supported Broadcom SoCs
* Stingray
* Stingray2
+Features
+--------
+
+The BCMFS SYM PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_3DES_CTR``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CTR``
+* ``RTE_CRYPTO_CIPHER_AES192_CTR``
+* ``RTE_CRYPTO_CIPHER_AES256_CTR``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_XCBC_MAC``
+* ``RTE_CRYPTO_AUTH_AES_CBC_MAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_GMAC``
+* ``RTE_CRYPTO_AUTH_AES_CMAC``
+
+AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+* ``RTE_CRYPTO_AEAD_AES_CCM``
+
Installation
------------
Information about kernel, rootfs and toolchain can be found at
@@ -49,3 +90,9 @@ For example, below commands can be run to get hold of a device node by VFIO.
io_device_name="vfio-platform"
echo $io_device_name > /sys/bus/platform/devices/${SETUP_SYSFS_DEV_NAME}/driver_override
echo ${SETUP_SYSFS_DEV_NAME} > /sys/bus/platform/drivers_probe
+
+Limitations
+-----------
+
+* Only supports the session-oriented API implementation (session-less APIs are not supported).
+* CCM is not supported on Broadcom's SoCs with a FlexSparc4 unit.
diff --git a/doc/guides/cryptodevs/features/bcmfs.ini b/doc/guides/cryptodevs/features/bcmfs.ini
new file mode 100644
index 0000000000..6a718856b9
--- /dev/null
+++ b/doc/guides/cryptodevs/features/bcmfs.ini
@@ -0,0 +1,56 @@
+;
+; Supported features of the 'bcmfs' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto = Y
+Sym operation chaining = Y
+HW Accelerated = Y
+Protocol offload = Y
+OOP LB In LB Out = Y
+
+;
+; Supported crypto algorithms of the 'bcmfs' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+AES CTR (128) = Y
+AES CTR (192) = Y
+AES CTR (256) = Y
+AES XTS (128) = Y
+AES XTS (256) = Y
+3DES CBC = Y
+DES CBC = Y
+;
+; Supported authentication algorithms of the 'bcmfs' crypto driver.
+;
+[Auth]
+MD5 HMAC = Y
+SHA1 = Y
+SHA1 HMAC = Y
+SHA224 = Y
+SHA224 HMAC = Y
+SHA256 = Y
+SHA256 HMAC = Y
+SHA384 = Y
+SHA384 HMAC = Y
+SHA512 = Y
+SHA512 HMAC = Y
+AES GMAC = Y
+AES CMAC (128) = Y
+AES CBC MAC = Y
+AES XCBC MAC = Y
+
+;
+; Supported AEAD algorithms of the 'bcmfs' crypto driver.
+;
+[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
+AES CCM (128) = Y
+AES CCM (192) = Y
+AES CCM (256) = Y
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
new file mode 100644
index 0000000000..afed7696a6
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
@@ -0,0 +1,764 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_sym_capabilities.h"
+
+static const struct rte_cryptodev_capabilities bcmfs_sym_capabilities[] = {
+ {
+ /* SHA1 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224,
+ .block_size = 144,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256,
+ .block_size = 136,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384,
+ .block_size = 104,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512,
+ .block_size = 72,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 HMAC*/
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224_HMAC,
+ .block_size = 144,
+ .key_size = {
+ .min = 1,
+ .max = 144,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256_HMAC,
+ .block_size = 136,
+ .key_size = {
+ .min = 1,
+ .max = 136,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384_HMAC,
+ .block_size = 104,
+ .key_size = {
+ .min = 1,
+ .max = 104,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512_HMAC,
+ .block_size = 72,
+ .key_size = {
+ .min = 1,
+ .max = 72,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES XCBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES GMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_ECB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CTR */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CTR,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES XTS */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_XTS,
+ .block_size = 16,
+ .key_size = {
+ .min = 32,
+ .max = 64,
+ .increment = 32
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_CCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 16,
+ .increment = 2
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 7,
+ .max = 13,
+ .increment = 1
+ },
+ }, }
+ }, }
+ },
+
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+const struct rte_cryptodev_capabilities *
+bcmfs_sym_get_capabilities(void)
+{
+ return bcmfs_sym_capabilities;
+}
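The capability table above is terminated by RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() rather than carrying an explicit length, so consumers scan it linearly until the end marker. A simplified standalone sketch of that lookup pattern (illustrative flat struct and algorithm IDs, not the real nested rte_cryptodev_capabilities union):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative, simplified capability entry. */
struct cap {
	int algo;		/* 0 acts as the end-of-list marker */
	int digest_size;
};

#define ALGO_SHA1   1
#define ALGO_SHA256 2

static const struct cap caps[] = {
	{ ALGO_SHA1,   20 },
	{ ALGO_SHA256, 32 },
	{ 0, 0 }		/* end-of-capabilities marker */
};

/* Linear scan until the end marker, as consumers of an
 * end-terminated capability table typically do. */
static const struct cap *find_cap(int algo)
{
	const struct cap *c;

	for (c = caps; c->algo != 0; c++)
		if (c->algo == algo)
			return c;
	return NULL;
}
```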
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
new file mode 100644
index 0000000000..3ff61b7d29
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_CAPABILITIES_H_
+#define _BCMFS_SYM_CAPABILITIES_H_
+
+/*
+ * Get the capabilities list for the device.
+ *
+ */
+const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void);
+
+#endif /* _BCMFS_SYM_CAPABILITIES_H_ */
+
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
new file mode 100644
index 0000000000..aea1f281e4
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_DEFS_H_
+#define _BCMFS_SYM_DEFS_H_
+
+/*
+ * Max block size of hash algorithm
+ * currently SHA3 supports max block size
+ * of 144 bytes
+ */
+#define BCMFS_MAX_KEY_SIZE 144
+#define BCMFS_MAX_IV_SIZE 16
+#define BCMFS_MAX_DIGEST_SIZE 64
+
+struct bcmfs_sym_session;
+struct bcmfs_sym_request;
+
+/** Crypto Request processing successful. */
+#define BCMFS_SYM_RESPONSE_SUCCESS (0)
+/** Crypto Request processing protocol failure. */
+#define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1)
+/** Crypto Request processing completion failure. */
+#define BCMFS_SYM_RESPONSE_COMPL_ERROR (2)
+/** Crypto Request processing hash tag check error. */
+#define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3)
+
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+#endif /* _BCMFS_SYM_DEFS_H_ */
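The response codes defined above distinguish protocol, completion, and hash-tag (digest mismatch) failures. How the real completion path maps them onto rte_crypto_op statuses is driver-internal; the sketch below shows one plausible mapping with hypothetical status values (0 = success, -1 = generic error, -2 = auth failure), purely for illustration:

```c
#include <assert.h>

/* The response codes from bcmfs_sym_defs.h, restated locally. */
#define RESP_SUCCESS        0
#define RESP_PROTO_FAILURE  1
#define RESP_COMPL_ERROR    2
#define RESP_HASH_TAG_ERROR 3

/* Hypothetical mapping of device response codes to an op status:
 * a hash tag error means the digest check failed (auth failure),
 * everything else non-zero is a generic processing error. */
static int resp_to_status(int resp)
{
	switch (resp) {
	case RESP_SUCCESS:
		return 0;
	case RESP_HASH_TAG_ERROR:
		return -2;	/* digest mismatch: authentication failed */
	default:
		return -1;	/* protocol or completion error */
	}
}
```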
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 0f96915f70..381ca8ea48 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -14,6 +14,8 @@
#include "bcmfs_qp.h"
#include "bcmfs_sym_pmd.h"
#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+#include "bcmfs_sym_capabilities.h"
uint8_t cryptodev_bcmfs_driver_id;
@@ -65,6 +67,7 @@ bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
/* No limit of number of sessions */
dev_info->sym.max_nb_sessions = 0;
+ dev_info->capabilities = bcmfs_sym_get_capabilities();
}
}
@@ -228,6 +231,10 @@ static struct rte_cryptodev_ops crypto_bcmfs_ops = {
/* Queue-Pair management */
.queue_pair_setup = bcmfs_sym_qp_setup,
.queue_pair_release = bcmfs_sym_qp_release,
+ /* Crypto session related operations */
+ .sym_session_get_size = bcmfs_sym_session_get_private_size,
+ .sym_session_configure = bcmfs_sym_session_configure,
+ .sym_session_clear = bcmfs_sym_session_clear
};
/** Enqueue burst */
@@ -239,6 +246,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
int i, j;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
+ struct bcmfs_sym_session *sess;
struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
if (nb_ops == 0)
@@ -252,6 +260,10 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
for (i = 0; i < nb_ops; i++) {
+ sess = bcmfs_sym_get_session(ops[i]);
+ if (unlikely(sess == NULL))
+ goto enqueue_err;
+
if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
goto enqueue_err;
@@ -356,6 +368,7 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
fsdev->sym_dev = internals;
internals->sym_dev_id = cryptodev->data->dev_id;
+ internals->fsdev_capabilities = bcmfs_sym_get_capabilities();
BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
cryptodev->data->name, internals->sym_dev_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
new file mode 100644
index 0000000000..675ed0ad55
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_crypto.h>
+#include <rte_crypto_sym.h>
+#include <rte_log.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_session.h"
+
+/** Configure the session from a crypto xform chain */
+static enum bcmfs_sym_chain_order
+crypto_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+ enum bcmfs_sym_chain_order res = BCMFS_SYM_CHAIN_NOT_SUPPORTED;
+
+ if (xform != NULL) {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ res = BCMFS_SYM_CHAIN_AEAD;
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_AUTH;
+ else if (xform->next->type ==
+ RTE_CRYPTO_SYM_XFORM_CIPHER)
+ res = BCMFS_SYM_CHAIN_AUTH_CIPHER;
+ }
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_CIPHER;
+ else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ res = BCMFS_SYM_CHAIN_CIPHER_AUTH;
+ }
+ }
+
+ return res;
+}
+
+/* Get session cipher key from input cipher key */
+static void
+get_key(const uint8_t *input_key, int keylen, uint8_t *session_key)
+{
+ memcpy(session_key, input_key, keylen);
+}
+
+/* Set session cipher parameters */
+static int
+crypto_set_session_cipher_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_cipher_xform *cipher_xform)
+{
+ if (cipher_xform->key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->cipher.key.length = cipher_xform->key.length;
+ sess->cipher.iv.offset = cipher_xform->iv.offset;
+ sess->cipher.iv.length = cipher_xform->iv.length;
+ sess->cipher.op = cipher_xform->op;
+ sess->cipher.algo = cipher_xform->algo;
+
+ get_key(cipher_xform->key.data,
+ sess->cipher.key.length,
+ sess->cipher.key.data);
+
+ return 0;
+}
+
+/* Set session auth parameters */
+static int
+crypto_set_session_auth_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_auth_xform *auth_xform)
+{
+ if (auth_xform->key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->auth.op = auth_xform->op;
+ sess->auth.key.length = auth_xform->key.length;
+ sess->auth.digest_length = auth_xform->digest_length;
+ sess->auth.iv.length = auth_xform->iv.length;
+ sess->auth.iv.offset = auth_xform->iv.offset;
+ sess->auth.algo = auth_xform->algo;
+
+ get_key(auth_xform->key.data,
+ auth_xform->key.length,
+ sess->auth.key.data);
+
+ return 0;
+}
+
+/* Set session aead parameters */
+static int
+crypto_set_session_aead_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_sym_xform *aead_xform)
+{
+ if (aead_xform->aead.key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->aead.iv.offset = aead_xform->aead.iv.offset;
+ sess->aead.iv.length = aead_xform->aead.iv.length;
+ sess->aead.aad_length = aead_xform->aead.aad_length;
+ sess->aead.key.length = aead_xform->aead.key.length;
+ sess->aead.digest_length = aead_xform->aead.digest_length;
+ sess->aead.op = aead_xform->aead.op;
+ sess->aead.algo = aead_xform->aead.algo;
+
+ get_key(aead_xform->aead.key.data,
+ aead_xform->aead.key.length,
+ sess->aead.key.data);
+
+ return 0;
+}
+
+static struct rte_crypto_auth_xform *
+crypto_get_auth_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ return &xform->auth;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+crypto_get_cipher_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return &xform->cipher;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+static int
+crypto_set_session_parameters(struct bcmfs_sym_session *sess,
+ struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+ struct rte_crypto_cipher_xform *cipher_xform =
+ crypto_get_cipher_xform(xform);
+ struct rte_crypto_auth_xform *auth_xform =
+ crypto_get_auth_xform(xform);
+
+ sess->chain_order = crypto_get_chain_order(xform);
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ if (crypto_set_session_auth_parameters(sess, auth_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ sess->cipher_first = false;
+ if (crypto_set_session_auth_parameters(sess, auth_xform)) {
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ sess->cipher_first = true;
+ if (crypto_set_session_auth_parameters(sess, auth_xform)) {
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ if (crypto_set_session_aead_parameters(sess, xform))
+ rc = -EINVAL;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid chain order");
+ rc = -EINVAL;
+ break;
+ }
+
+error:
+ return rc;
+}
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op)
+{
+ struct bcmfs_sym_session *sess = NULL;
+
+ if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
+ BCMFS_DP_LOG(ERR, "operation op(%p) is sessionless", op);
+ } else if (likely(op->sym->session != NULL)) {
+ /* get existing session */
+ sess = (struct bcmfs_sym_session *)
+ get_sym_session_private_data(op->sym->session,
+ cryptodev_bcmfs_driver_id);
+ }
+
+ if (sess == NULL)
+ op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+
+ return sess;
+}
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ int ret;
+
+ if (unlikely(sess == NULL)) {
+ BCMFS_DP_LOG(ERR, "Invalid session struct");
+ return -EINVAL;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ BCMFS_DP_LOG(ERR,
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ ret = crypto_set_session_parameters(sess_private_data, xform);
+
+ if (ret != 0) {
+ BCMFS_DP_LOG(ERR, "Failed to configure session parameters");
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sym_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
+}
+
+/* Clear the memory of session so it doesn't leave key material behind */
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t index = dev->driver_id;
+ void *sess_priv = get_sym_session_private_data(sess, index);
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp;
+
+ memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
+ sess_mp = rte_mempool_from_obj(sess_priv);
+
+ set_sym_session_private_data(sess, index, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+}
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct bcmfs_sym_session);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
new file mode 100644
index 0000000000..8240c6fc25
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_SESSION_H_
+#define _BCMFS_SYM_SESSION_H_
+
+#include <stdbool.h>
+#include <rte_crypto.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* BCMFS_SYM operation order mode enumerator */
+enum bcmfs_sym_chain_order {
+ BCMFS_SYM_CHAIN_ONLY_CIPHER,
+ BCMFS_SYM_CHAIN_ONLY_AUTH,
+ BCMFS_SYM_CHAIN_CIPHER_AUTH,
+ BCMFS_SYM_CHAIN_AUTH_CIPHER,
+ BCMFS_SYM_CHAIN_AEAD,
+ BCMFS_SYM_CHAIN_NOT_SUPPORTED
+};
+
+/* BCMFS_SYM crypto private session structure */
+struct bcmfs_sym_session {
+ enum bcmfs_sym_chain_order chain_order;
+
+ /* Cipher Parameters */
+ struct {
+ enum rte_crypto_cipher_operation op;
+ /* Cipher operation */
+ enum rte_crypto_cipher_algorithm algo;
+ /* Cipher algorithm */
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+ } cipher;
+
+ /* Authentication Parameters */
+ struct {
+ enum rte_crypto_auth_operation op;
+ /* Auth operation */
+ enum rte_crypto_auth_algorithm algo;
+ /* Auth algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+ } auth;
+
+ /* Aead Parameters */
+ struct {
+ enum rte_crypto_aead_operation op;
+ /* AEAD operation */
+ enum rte_crypto_aead_algorithm algo;
+ /* AEAD algorithm */
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+
+ uint16_t aad_length;
+ } aead;
+
+ bool cipher_first;
+} __rte_cache_aligned;
+
+int
+bcmfs_process_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool);
+
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess);
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op);
+
+#endif /* _BCMFS_SYM_SESSION_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index d9a3d73e99..2e86c733e1 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -12,5 +12,7 @@ sources = files(
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
'hw/bcmfs_rm_common.c',
- 'bcmfs_sym_pmd.c'
+ 'bcmfs_sym_pmd.c',
+ 'bcmfs_sym_capabilities.c',
+ 'bcmfs_sym_session.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v4 7/8] crypto/bcmfs: add crypto HW module
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (5 preceding siblings ...)
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add a crypto h/w module to process crypto ops. A crypto op is processed
by the sym_engine module before the request is submitted to the h/w queues.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_sym.c | 289 ++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 1155 +++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 115 +++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 26 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 40 +
drivers/crypto/bcmfs/meson.build | 4 +-
6 files changed, 1628 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
diff --git a/drivers/crypto/bcmfs/bcmfs_sym.c b/drivers/crypto/bcmfs/bcmfs_sym.c
new file mode 100644
index 0000000000..2d164a1ec8
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym.c
@@ -0,0 +1,289 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+
+#include <rte_byteorder.h>
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_engine.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+
+/** Process cipher operation */
+static int
+process_crypto_cipher_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv, key;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+ fsattr_pa(&dst) = rte_pktmbuf_iova(mbuf_dst);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&key) = sess->cipher.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->cipher.key.length;
+
+ rc = bcmfs_crypto_build_cipher_req(req, sess->cipher.algo,
+ sess->cipher.op, &src,
+ &dst, &key, &iv);
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process auth operation */
+static int
+process_crypto_auth_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, mac, key, iv;
+
+ fsattr_sz(&src) = op->sym->auth.data.length;
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ op->sym->auth.data.offset);
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+
+ if (!sess->auth.op) {
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+ } else {
+ fsattr_va(&dst) = op->sym->auth.digest.data;
+ fsattr_pa(&dst) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&dst) = sess->auth.digest_length;
+ }
+
+ fsattr_va(&key) = sess->auth.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->auth.key.length;
+
+ /* AES-GMAC uses AES-GCM-128 authenticator */
+ if (sess->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->auth.iv.offset);
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->auth.iv.length;
+ } else {
+ fsattr_va(&iv) = NULL;
+ fsattr_sz(&iv) = 0;
+ }
+
+ rc = bcmfs_crypto_build_auth_req(req, sess->auth.algo,
+ sess->auth.op,
+ &src,
+ (sess->auth.op) ? (&dst) : NULL,
+ (sess->auth.op) ? NULL : (&mac),
+ &key, &iv);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process combined/chained mode operation */
+static int
+process_crypto_combined_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0, aad_size = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->cipher.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ aad_size = sym_op->auth.data.length - sym_op->cipher.data.length;
+
+ if (aad_size > 0) {
+ fsattr_sz(&aad) = aad_size;
+ fsattr_va(&aad) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->auth.data.offset);
+ fsattr_pa(&aad) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->auth.data.offset);
+ }
+
+ rc = bcmfs_crypto_build_chain_request(req, sess->cipher.algo,
+ sess->cipher.op,
+ sess->auth.algo,
+ sess->auth.op,
+ &src, &dst, &cipher_key,
+ &auth_key, &iv,
+ (aad_size > 0) ? (&aad) : NULL,
+ &mac, sess->cipher_first);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process AEAD operation */
+static int
+process_crypto_aead_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr key, aad, mac;
+
+ fsattr_sz(&src) = sym_op->aead.data.length;
+ fsattr_sz(&dst) = sym_op->aead.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset(mbuf_dst,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->aead.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->aead.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->aead.iv.length;
+
+ fsattr_va(&key) = sess->aead.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->aead.key.length;
+
+ fsattr_va(&mac) = op->sym->aead.digest.data;
+ fsattr_pa(&mac) = op->sym->aead.digest.phys_addr;
+ fsattr_sz(&mac) = sess->aead.digest_length;
+
+ fsattr_va(&aad) = op->sym->aead.aad.data;
+ fsattr_pa(&aad) = op->sym->aead.aad.phys_addr;
+ fsattr_sz(&aad) = sess->aead.aad_length;
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->aead.algo,
+ sess->aead.op, &src, &dst,
+ &key, &iv, &aad, &mac);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process crypto operation for mbuf */
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ struct rte_mbuf *msrc, *mdst;
+ int rc = 0;
+
+ msrc = op->sym->m_src;
+ mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ rc = process_crypto_cipher_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ rc = process_crypto_auth_op(op, msrc, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ rc = process_crypto_combined_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ rc = process_crypto_aead_op(op, msrc, mdst, sess, req);
+ break;
+ default:
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ break;
+ }
+
+ return rc;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
new file mode 100644
index 0000000000..537bfbec8b
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
@@ -0,0 +1,1155 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_crypto_sym.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_engine.h"
+
+enum spu2_cipher_type {
+ SPU2_CIPHER_TYPE_NONE = 0x0,
+ SPU2_CIPHER_TYPE_AES128 = 0x1,
+ SPU2_CIPHER_TYPE_AES192 = 0x2,
+ SPU2_CIPHER_TYPE_AES256 = 0x3,
+ SPU2_CIPHER_TYPE_DES = 0x4,
+ SPU2_CIPHER_TYPE_3DES = 0x5,
+ SPU2_CIPHER_TYPE_LAST
+};
+
+enum spu2_cipher_mode {
+ SPU2_CIPHER_MODE_ECB = 0x0,
+ SPU2_CIPHER_MODE_CBC = 0x1,
+ SPU2_CIPHER_MODE_CTR = 0x2,
+ SPU2_CIPHER_MODE_CFB = 0x3,
+ SPU2_CIPHER_MODE_OFB = 0x4,
+ SPU2_CIPHER_MODE_XTS = 0x5,
+ SPU2_CIPHER_MODE_CCM = 0x6,
+ SPU2_CIPHER_MODE_GCM = 0x7,
+ SPU2_CIPHER_MODE_LAST
+};
+
+enum spu2_hash_type {
+ SPU2_HASH_TYPE_NONE = 0x0,
+ SPU2_HASH_TYPE_AES128 = 0x1,
+ SPU2_HASH_TYPE_AES192 = 0x2,
+ SPU2_HASH_TYPE_AES256 = 0x3,
+ SPU2_HASH_TYPE_MD5 = 0x6,
+ SPU2_HASH_TYPE_SHA1 = 0x7,
+ SPU2_HASH_TYPE_SHA224 = 0x8,
+ SPU2_HASH_TYPE_SHA256 = 0x9,
+ SPU2_HASH_TYPE_SHA384 = 0xa,
+ SPU2_HASH_TYPE_SHA512 = 0xb,
+ SPU2_HASH_TYPE_SHA512_224 = 0xc,
+ SPU2_HASH_TYPE_SHA512_256 = 0xd,
+ SPU2_HASH_TYPE_SHA3_224 = 0xe,
+ SPU2_HASH_TYPE_SHA3_256 = 0xf,
+ SPU2_HASH_TYPE_SHA3_384 = 0x10,
+ SPU2_HASH_TYPE_SHA3_512 = 0x11,
+ SPU2_HASH_TYPE_LAST
+};
+
+enum spu2_hash_mode {
+ SPU2_HASH_MODE_CMAC = 0x0,
+ SPU2_HASH_MODE_CBC_MAC = 0x1,
+ SPU2_HASH_MODE_XCBC_MAC = 0x2,
+ SPU2_HASH_MODE_HMAC = 0x3,
+ SPU2_HASH_MODE_RABIN = 0x4,
+ SPU2_HASH_MODE_CCM = 0x5,
+ SPU2_HASH_MODE_GCM = 0x6,
+ SPU2_HASH_MODE_RESERVED = 0x7,
+ SPU2_HASH_MODE_LAST
+};
+
+enum spu2_proto_sel {
+ SPU2_PROTO_RESV = 0,
+ SPU2_MACSEC_SECTAG8_ECB = 1,
+ SPU2_MACSEC_SECTAG8_SCB = 2,
+ SPU2_MACSEC_SECTAG16 = 3,
+ SPU2_MACSEC_SECTAG16_8_XPN = 4,
+ SPU2_IPSEC = 5,
+ SPU2_IPSEC_ESN = 6,
+ SPU2_TLS_CIPHER = 7,
+ SPU2_TLS_AEAD = 8,
+ SPU2_DTLS_CIPHER = 9,
+ SPU2_DTLS_AEAD = 10
+};
+
+/* SPU2 response size */
+#define SPU2_STATUS_LEN 2
+
+/* Metadata settings in response */
+enum spu2_ret_md_opts {
+ SPU2_RET_NO_MD = 0, /* return no metadata */
+ SPU2_RET_FMD_OMD = 1, /* return both FMD and OMD */
+ SPU2_RET_FMD_ONLY = 2, /* return only FMD */
+ SPU2_RET_FMD_OMD_IV = 3, /* return FMD and OMD with just IVs */
+};
+
+/* FMD ctrl0 field masks */
+#define SPU2_CIPH_ENCRYPT_EN 0x1 /* 0: decrypt, 1: encrypt */
+#define SPU2_CIPH_TYPE_SHIFT 4
+#define SPU2_CIPH_MODE 0xF00 /* one of spu2_cipher_mode */
+#define SPU2_CIPH_MODE_SHIFT 8
+#define SPU2_CFB_MASK 0x7000 /* cipher feedback mask */
+#define SPU2_CFB_MASK_SHIFT 12
+#define SPU2_PROTO_SEL 0xF00000 /* MACsec, IPsec, TLS... */
+#define SPU2_PROTO_SEL_SHIFT 20
+#define SPU2_HASH_FIRST 0x1000000 /* 1: hash input is input pkt
+ * data
+ */
+#define SPU2_CHK_TAG 0x2000000 /* 1: check digest provided */
+#define SPU2_HASH_TYPE 0x1F0000000 /* one of spu2_hash_type */
+#define SPU2_HASH_TYPE_SHIFT 28
+#define SPU2_HASH_MODE 0xF000000000 /* one of spu2_hash_mode */
+#define SPU2_HASH_MODE_SHIFT 36
+#define SPU2_CIPH_PAD_EN 0x100000000000 /* 1: Add pad to end of payload for
+ * enc
+ */
+#define SPU2_CIPH_PAD 0xFF000000000000 /* cipher pad value */
+#define SPU2_CIPH_PAD_SHIFT 48
+
+/* FMD ctrl1 field masks */
+#define SPU2_TAG_LOC 0x1 /* 1: end of payload, 0: undef */
+#define SPU2_HAS_FR_DATA 0x2 /* 1: msg has frame data */
+#define SPU2_HAS_AAD1 0x4 /* 1: msg has AAD1 field */
+#define SPU2_HAS_NAAD 0x8 /* 1: msg has NAAD field */
+#define SPU2_HAS_AAD2 0x10 /* 1: msg has AAD2 field */
+#define SPU2_HAS_ESN 0x20 /* 1: msg has ESN field */
+#define SPU2_HASH_KEY_LEN 0xFF00 /* len of hash key in bytes.
+ * HMAC only.
+ */
+#define SPU2_HASH_KEY_LEN_SHIFT 8
+#define SPU2_CIPH_KEY_LEN 0xFF00000 /* len of cipher key in bytes */
+#define SPU2_CIPH_KEY_LEN_SHIFT 20
+#define SPU2_GENIV 0x10000000 /* 1: hw generates IV */
+#define SPU2_HASH_IV 0x20000000 /* 1: IV incl in hash */
+#define SPU2_RET_IV 0x40000000 /* 1: return IV in output msg
+ * b4 payload
+ */
+#define SPU2_RET_IV_LEN 0xF00000000 /* length in bytes of IV returned.
+ * 0 = 16 bytes
+ */
+#define SPU2_RET_IV_LEN_SHIFT 32
+#define SPU2_IV_OFFSET 0xF000000000 /* gen IV offset */
+#define SPU2_IV_OFFSET_SHIFT 36
+#define SPU2_IV_LEN 0x1F0000000000 /* length of input IV in bytes */
+#define SPU2_IV_LEN_SHIFT 40
+#define SPU2_HASH_TAG_LEN 0x7F000000000000 /* hash tag length in bytes */
+#define SPU2_HASH_TAG_LEN_SHIFT 48
+#define SPU2_RETURN_MD 0x300000000000000 /* return metadata */
+#define SPU2_RETURN_MD_SHIFT 56
+#define SPU2_RETURN_FD 0x400000000000000
+#define SPU2_RETURN_AAD1 0x800000000000000
+#define SPU2_RETURN_NAAD 0x1000000000000000
+#define SPU2_RETURN_AAD2 0x2000000000000000
+#define SPU2_RETURN_PAY 0x4000000000000000 /* return payload */
+
+/* FMD ctrl2 field masks */
+#define SPU2_AAD1_OFFSET 0xFFF /* byte offset of AAD1 field */
+#define SPU2_AAD1_LEN 0xFF000 /* length of AAD1 in bytes */
+#define SPU2_AAD1_LEN_SHIFT 12
+#define SPU2_AAD2_OFFSET 0xFFF00000 /* byte offset of AAD2 field */
+#define SPU2_AAD2_OFFSET_SHIFT 20
+#define SPU2_PL_OFFSET 0xFFFFFFFF00000000 /* payload offset from AAD2 */
+#define SPU2_PL_OFFSET_SHIFT 32
+
+/* FMD ctrl3 field masks */
+#define SPU2_PL_LEN 0xFFFFFFFF /* payload length in bytes */
+#define SPU2_TLS_LEN 0xFFFF00000000 /* TLS encrypt: cipher len
+ * TLS decrypt: compressed len
+ */
+#define SPU2_TLS_LEN_SHIFT 32
+
+/*
+ * Max value that can be represented in the Payload Length field of the
+ * ctrl3 word of FMD.
+ */
+#define SPU2_MAX_PAYLOAD SPU2_PL_LEN
+
+#define SPU2_VAL_NONE 0
+
+/* CCM B_0 field definitions, common for SPU-M and SPU2 */
+#define CCM_B0_ADATA 0x40
+#define CCM_B0_ADATA_SHIFT 6
+#define CCM_B0_M_PRIME 0x38
+#define CCM_B0_M_PRIME_SHIFT 3
+#define CCM_B0_L_PRIME 0x07
+#define CCM_B0_L_PRIME_SHIFT 0
+#define CCM_ESP_L_VALUE 4
+
+static int
+spu2_cipher_type_xlate(enum rte_crypto_cipher_algorithm cipher_alg,
+ enum spu2_cipher_type *spu2_type,
+ struct fsattr *key)
+{
+ int ret = 0;
+ int key_size = fsattr_sz(key);
+
+ if (cipher_alg == RTE_CRYPTO_CIPHER_AES_XTS)
+ key_size = key_size / 2;
+
+ switch (key_size) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static int
+spu2_hash_xlate(enum rte_crypto_auth_algorithm auth_alg,
+ struct fsattr *key,
+ enum spu2_hash_type *spu2_type,
+ enum spu2_hash_mode *spu2_mode)
+{
+ *spu2_mode = 0;
+
+ switch (auth_alg) {
+ case RTE_CRYPTO_AUTH_NULL:
+ *spu2_type = SPU2_HASH_TYPE_NONE;
+ break;
+ case RTE_CRYPTO_AUTH_MD5:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_XCBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ *spu2_mode = SPU2_HASH_MODE_CMAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_CBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+spu2_cipher_xlate(enum rte_crypto_cipher_algorithm cipher_alg,
+ struct fsattr *key,
+ enum spu2_cipher_type *spu2_type,
+ enum spu2_cipher_mode *spu2_mode)
+{
+ int ret = 0;
+
+ switch (cipher_alg) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ *spu2_type = SPU2_CIPHER_TYPE_NONE;
+ break;
+ case RTE_CRYPTO_CIPHER_DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ *spu2_mode = SPU2_CIPHER_MODE_CTR;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_XTS:
+ *spu2_mode = SPU2_CIPHER_MODE_XTS;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static void
+spu2_fmd_ctrl0_write(struct spu2_fmd *fmd,
+ bool is_inbound, bool auth_first,
+ enum spu2_proto_sel protocol,
+ enum spu2_cipher_type cipher_type,
+ enum spu2_cipher_mode cipher_mode,
+ enum spu2_hash_type auth_type,
+ enum spu2_hash_mode auth_mode)
+{
+ uint64_t ctrl0 = 0;
+
+ if (cipher_type != SPU2_CIPHER_TYPE_NONE && !is_inbound)
+ ctrl0 |= SPU2_CIPH_ENCRYPT_EN;
+
+ ctrl0 |= ((uint64_t)cipher_type << SPU2_CIPH_TYPE_SHIFT) |
+ ((uint64_t)cipher_mode << SPU2_CIPH_MODE_SHIFT);
+
+ if (protocol != SPU2_PROTO_RESV)
+ ctrl0 |= (uint64_t)protocol << SPU2_PROTO_SEL_SHIFT;
+
+ if (auth_first)
+ ctrl0 |= SPU2_HASH_FIRST;
+
+ if (is_inbound && auth_type != SPU2_HASH_TYPE_NONE)
+ ctrl0 |= SPU2_CHK_TAG;
+
+ ctrl0 |= (((uint64_t)auth_type << SPU2_HASH_TYPE_SHIFT) |
+ ((uint64_t)auth_mode << SPU2_HASH_MODE_SHIFT));
+
+ fmd->ctrl0 = ctrl0;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl0:", &fmd->ctrl0, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl1_write(struct spu2_fmd *fmd, bool is_inbound,
+ uint64_t assoc_size, uint64_t auth_key_len,
+ uint64_t cipher_key_len, bool gen_iv, bool hash_iv,
+ bool return_iv, uint64_t ret_iv_len,
+ uint64_t ret_iv_offset, uint64_t cipher_iv_len,
+ uint64_t digest_size, bool return_payload, bool return_md)
+{
+ uint64_t ctrl1 = 0;
+
+ if (is_inbound && digest_size != 0)
+ ctrl1 |= SPU2_TAG_LOC;
+
+ if (assoc_size != 0)
+ ctrl1 |= SPU2_HAS_AAD2;
+
+ if (auth_key_len != 0)
+ ctrl1 |= ((auth_key_len << SPU2_HASH_KEY_LEN_SHIFT) &
+ SPU2_HASH_KEY_LEN);
+
+ if (cipher_key_len != 0)
+ ctrl1 |= ((cipher_key_len << SPU2_CIPH_KEY_LEN_SHIFT) &
+ SPU2_CIPH_KEY_LEN);
+
+ if (gen_iv)
+ ctrl1 |= SPU2_GENIV;
+
+ if (hash_iv)
+ ctrl1 |= SPU2_HASH_IV;
+
+ if (return_iv) {
+ ctrl1 |= SPU2_RET_IV;
+ ctrl1 |= ret_iv_len << SPU2_RET_IV_LEN_SHIFT;
+ ctrl1 |= ret_iv_offset << SPU2_IV_OFFSET_SHIFT;
+ }
+
+ ctrl1 |= ((cipher_iv_len << SPU2_IV_LEN_SHIFT) & SPU2_IV_LEN);
+
+ if (digest_size != 0) {
+ ctrl1 |= ((digest_size << SPU2_HASH_TAG_LEN_SHIFT) &
+ SPU2_HASH_TAG_LEN);
+ }
+
+ /*
+ * Ask for the output packet to include the FMD, but there is
+ * no need to get keys and IVs back in the OMD.
+ */
+ if (return_md)
+ ctrl1 |= ((uint64_t)SPU2_RET_FMD_ONLY << SPU2_RETURN_MD_SHIFT);
+ else
+ ctrl1 |= ((uint64_t)SPU2_RET_NO_MD << SPU2_RETURN_MD_SHIFT);
+
+ /* Crypto API does not get assoc data back. So no need for AAD2. */
+
+ if (return_payload)
+ ctrl1 |= SPU2_RETURN_PAY;
+
+ fmd->ctrl1 = ctrl1;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl1:", &fmd->ctrl1, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl2_write(struct spu2_fmd *fmd, uint64_t cipher_offset,
+ uint64_t auth_key_len __rte_unused,
+ uint64_t auth_iv_len __rte_unused,
+ uint64_t cipher_key_len __rte_unused,
+ uint64_t cipher_iv_len __rte_unused)
+{
+ uint64_t aad1_offset;
+ uint64_t aad2_offset;
+ uint16_t aad1_len = 0;
+ uint64_t payload_offset;
+
+ /* AAD1 offset is from start of FD. FD length always 0. */
+ aad1_offset = 0;
+
+ aad2_offset = aad1_offset;
+ payload_offset = cipher_offset;
+ fmd->ctrl2 = aad1_offset |
+ (aad1_len << SPU2_AAD1_LEN_SHIFT) |
+ (aad2_offset << SPU2_AAD2_OFFSET_SHIFT) |
+ (payload_offset << SPU2_PL_OFFSET_SHIFT);
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl2:", &fmd->ctrl2, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl3_write(struct spu2_fmd *fmd, uint64_t payload_len)
+{
+ fmd->ctrl3 = payload_len & SPU2_PL_LEN;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl3:", &fmd->ctrl3, sizeof(uint64_t));
+#endif
+}
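The FMD writers above repeatedly use the same masked-shift pattern, `(value << SHIFT) & MASK`, so a field can never spill into its neighbours. A standalone sketch of the pattern (the shift and mask values below are invented for illustration, not the real SPU2 register layout):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical field occupying bits 8..15 of a 64-bit control word. */
#define FLD_LEN_SHIFT 8
#define FLD_LEN_MASK  0x0000ff00ULL

/* Shift the length into position and clamp it with the field mask,
 * exactly as the spu2_fmd_ctrl*_write() helpers do. */
static uint64_t pack_len(uint64_t word, uint64_t len)
{
	return word | ((len << FLD_LEN_SHIFT) & FLD_LEN_MASK);
}
```

An over-wide value is silently truncated by the mask rather than corrupting adjacent fields.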
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_auth_algorithm a_alg,
+ enum rte_crypto_auth_operation auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *auth_key,
+ struct fsattr *iv)
+{
+ int ret;
+ uint64_t dst_size;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ enum spu2_hash_mode spu2_auth_mode;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ uint64_t auth_ksize = (auth_key != NULL) ? fsattr_sz(auth_key) : 0;
+ bool is_inbound = (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* one of dst or mac should not be NULL */
+ if (dst == NULL && mac == NULL)
+ return -EINVAL;
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_GENERATE && dst != NULL)
+ dst_size = fsattr_sz(dst);
+ else if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && mac != NULL)
+ dst_size = fsattr_sz(mac);
+ else
+ return -EINVAL;
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(a_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, SPU2_VAL_NONE,
+ auth_ksize, SPU2_VAL_NONE, false,
+ false, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, iv_size,
+ dst_size, SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key, fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ /*
+ * In case of an authentication verify operation, feed the input
+ * MAC data to the SPU2 engine.
+ */
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && mac != NULL) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(mac);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(mac);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains actual output from SPU2 and
+ * the status packet, so the dsts_count is always 2 below.
+ */
+ if (auth_op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+ } else {
+ /*
+ * For an authentication verify operation, provide a dummy
+ * location for the SPU2 engine to write the hash, since
+ * SPU2 generates a hash even when verifying.
+ */
+ sreq->msgs.dsts_addr[0] = sreq->dptr;
+ sreq->msgs.dsts_len[0] = fsattr_sz(mac);
+ }
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_cipher_algorithm calgo,
+ enum rte_crypto_cipher_operation cipher_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *iv)
+{
+ int ret = 0;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ unsigned int xts_keylen;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ bool is_inbound = (cipher_op == RTE_CRYPTO_CIPHER_OP_DECRYPT);
+
+ if (src == NULL || dst == NULL || iv == NULL)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(calgo, cipher_key,
+ &spu2_ciph_type, &spu2_ciph_mode);
+ if (ret)
+ return -EINVAL;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, spu2_ciph_type, spu2_ciph_mode,
+ SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl1_write(fmd, SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(cipher_key), false, false,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(iv), SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE);
+
+ /* Nothing for FMD2 */
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ if (calgo == RTE_CRYPTO_CIPHER_AES_XTS) {
+ xts_keylen = fsattr_sz(cipher_key) / 2;
+ memcpy(sreq->cipher_key,
+ (uint8_t *)fsattr_va(cipher_key) + xts_keylen,
+ xts_keylen);
+ memcpy(sreq->cipher_key + xts_keylen,
+ fsattr_va(cipher_key), xts_keylen);
+ } else {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+ }
+
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv,
+ fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(iv);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+ sreq->msgs.srcs_count = src_index;
+
+ /*
+ * Output packet contains actual output from SPU2 and
+ * the status packet, so the dsts_count is always 2 below.
+ */
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
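For AES-XTS, the function above copies the second half of the combined key ahead of the first half before handing it to the hardware. That swap can be sketched in isolation (the helper name and the key-ordering assumption are ours, not part of the patch):

```c
#include <string.h>
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

/* Hypothetical standalone helper mirroring the XTS key handling in
 * bcmfs_crypto_build_cipher_req(): the combined key is split in half
 * and the halves are written out in reverse order. */
static void xts_swap_key_halves(uint8_t *dst, const uint8_t *src,
				size_t keylen)
{
	size_t half = keylen / 2;

	memcpy(dst, src + half, half);   /* second half of the key first */
	memcpy(dst + half, src, half);   /* then the first half */
}
```

For a 256-bit XTS key (two 128-bit halves), the 16 bytes starting at offset 16 end up at offset 0, and vice versa.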
+
+int
+bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_cipher_algorithm cipher_alg,
+ enum rte_crypto_cipher_operation cipher_op __rte_unused,
+ enum rte_crypto_auth_algorithm auth_alg,
+ enum rte_crypto_auth_operation auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first)
+{
+ int ret = 0;
+ int src_index = 0;
+ int dst_index = 0;
+ bool auth_first = false;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ?
+ fsattr_sz(auth_key) : 0;
+ uint64_t cipher_ksize = (cipher_key != NULL) ?
+ fsattr_sz(cipher_key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ bool is_inbound = (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(auth_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(cipher_alg, cipher_key, &spu2_ciph_type,
+ &spu2_ciph_mode);
+ if (ret) {
+ BCMFS_DP_LOG(ERR, "cipher xlate error");
+ return -EINVAL;
+ }
+
+ auth_first = cipher_first ? 0 : 1;
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, auth_ksize,
+ cipher_ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, auth_ksize, 0,
+ cipher_ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key,
+ fsattr_va(auth_key), fsattr_sz(auth_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key),
+ fsattr_sz(cipher_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+ /*
+ * During decryption, digest data is generated by the
+ * SPU2 engine but the application does not need it as
+ * such, so program a dummy location to capture the
+ * digest data.
+ */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ sreq->dptr;
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ fsattr_pa(digest);
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
+
+static void
+bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf,
+ unsigned int *ivlen, bool is_esp)
+{
+ int L; /* size of length field, in bytes */
+
+ /*
+ * In RFC4309 mode, L is fixed at 4 bytes; otherwise, IV from
+ * testmgr contains (L-1) in bottom 3 bits of first byte,
+ * per RFC 3610.
+ */
+ if (is_esp)
+ L = CCM_ESP_L_VALUE;
+ else
+ L = ((ivbuf[0] & CCM_B0_L_PRIME) >>
+ CCM_B0_L_PRIME_SHIFT) + 1;
+
+ /* SPU2 doesn't want these length bytes nor the first byte... */
+ *ivlen -= (1 + L);
+ memmove(ivbuf, &ivbuf[1], *ivlen);
+}
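The IV adjustment above is self-contained enough to exercise on its own. A minimal host-side replica follows; the constant values are assumptions matching the RFC 3610 flags-byte layout and the RFC 4309 fixed L, not copied from the driver headers:

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Assumed stand-ins for the driver's CCM constants: L' lives in the
 * low 3 bits of the B0 flags byte (RFC 3610), and RFC 4309/ESP fixes
 * L at 4 bytes. */
#define CCM_ESP_L_VALUE      4
#define CCM_B0_L_PRIME       0x7
#define CCM_B0_L_PRIME_SHIFT 0

static void ccm_update_iv(uint8_t *ivbuf, unsigned int *ivlen, int is_esp)
{
	int L; /* size of the length field, in bytes */

	if (is_esp)
		L = CCM_ESP_L_VALUE;
	else
		L = ((ivbuf[0] & CCM_B0_L_PRIME) >>
		     CCM_B0_L_PRIME_SHIFT) + 1;

	/* drop the flags byte and the trailing length field */
	*ivlen -= (1 + L);
	memmove(ivbuf, &ivbuf[1], *ivlen);
}
```

With a 16-byte B0-style IV whose flags byte encodes L' = 1 (so L = 2), the call leaves a 13-byte nonce starting at what was the second byte.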
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_aead_algorithm ae_algo,
+ enum rte_crypto_aead_operation aeop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv,
+ struct fsattr *aad, struct fsattr *digest)
+{
+ int src_index = 0;
+ int dst_index = 0;
+ bool auth_first = false;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t ksize = (key != NULL) ? fsattr_sz(key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ bool is_inbound = (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ switch (ksize) {
+ case BCMFS_CRYPTO_AES128:
+ spu2_auth_type = SPU2_HASH_TYPE_AES128;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ spu2_auth_type = SPU2_HASH_TYPE_AES192;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ spu2_auth_type = SPU2_HASH_TYPE_AES256;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (ae_algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ spu2_auth_mode = SPU2_HASH_MODE_GCM;
+ spu2_ciph_mode = SPU2_CIPHER_MODE_GCM;
+ /*
+ * SPU2 needs 12 bytes of IV in total,
+ * i.e. an 8-byte IV (random number) plus a 4-byte salt.
+ */
+ if (fsattr_sz(iv) > 12)
+ iv_size = 12;
+
+ /*
+ * On SPU2, AES-GCM runs cipher first on encrypt and auth
+ * first on decrypt.
+ */
+
+ auth_first = (aeop == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+ 0 : 1;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ if (ae_algo == RTE_CRYPTO_AEAD_AES_CCM) {
+ spu2_auth_mode = SPU2_HASH_MODE_CCM;
+ spu2_ciph_mode = SPU2_CIPHER_MODE_CCM;
+ if (iv != NULL) {
+ memcpy(sreq->iv, fsattr_va(iv),
+ fsattr_sz(iv));
+ iv_size = fsattr_sz(iv);
+ bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false);
+ }
+
+ /* CCM is the opposite: auth first on encrypt */
+ auth_first = (aeop == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+ 0 : 1;
+ }
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, 0,
+ ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, 0, 0,
+ ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (key != NULL && fsattr_sz(key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(key), fsattr_sz(key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(key),
+ fsattr_sz(key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ if (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT) {
+ /*
+ * During decryption, digest data is generated by the
+ * SPU2 engine but the application does not need it as
+ * such, so program a dummy location to capture the
+ * digest data.
+ */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ sreq->dptr;
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+ sreq->msgs.dsts_addr[dst_index] =
+ fsattr_pa(digest);
+ sreq->msgs.dsts_len[dst_index] =
+ fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
new file mode 100644
index 0000000000..d9594246b5
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_ENGINE_H_
+#define _BCMFS_SYM_ENGINE_H_
+
+#include <rte_crypto_sym.h>
+
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* structure to hold an element's attributes */
+struct fsattr {
+ void *va;
+ uint64_t pa;
+ uint64_t sz;
+};
+
+#define fsattr_va(__ptr) ((__ptr)->va)
+#define fsattr_pa(__ptr) ((__ptr)->pa)
+#define fsattr_sz(__ptr) ((__ptr)->sz)
+
+/*
+ * Macros for Crypto h/w constraints
+ */
+
+#define BCMFS_CRYPTO_AES_BLOCK_SIZE 16
+#define BCMFS_CRYPTO_AES_MIN_KEY_SIZE 16
+#define BCMFS_CRYPTO_AES_MAX_KEY_SIZE 32
+
+#define BCMFS_CRYPTO_DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_DES_KEY_SIZE 8
+
+#define BCMFS_CRYPTO_3DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_3DES_KEY_SIZE (3 * 8)
+
+#define BCMFS_CRYPTO_MD5_DIGEST_SIZE 16
+#define BCMFS_CRYPTO_MD5_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA1_DIGEST_SIZE 20
+#define BCMFS_CRYPTO_SHA1_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA224_DIGEST_SIZE 28
+#define BCMFS_CRYPTO_SHA224_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA256_DIGEST_SIZE 32
+#define BCMFS_CRYPTO_SHA256_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA384_DIGEST_SIZE 48
+#define BCMFS_CRYPTO_SHA384_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA512_DIGEST_SIZE 64
+#define BCMFS_CRYPTO_SHA512_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE (224 / 8)
+#define BCMFS_CRYPTO_SHA3_224_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE (256 / 8)
+#define BCMFS_CRYPTO_SHA3_256_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE (384 / 8)
+#define BCMFS_CRYPTO_SHA3_384_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE (512 / 8)
+#define BCMFS_CRYPTO_SHA3_512_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE)
+
+enum bcmfs_crypto_aes_cipher_key {
+ BCMFS_CRYPTO_AES128 = 16,
+ BCMFS_CRYPTO_AES192 = 24,
+ BCMFS_CRYPTO_AES256 = 32,
+};
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *req,
+ enum rte_crypto_cipher_algorithm c_algo,
+ enum rte_crypto_cipher_operation cop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv);
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *req,
+ enum rte_crypto_auth_algorithm a_algo,
+ enum rte_crypto_auth_operation aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *key,
+ struct fsattr *iv);
+
+int
+bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *req,
+ enum rte_crypto_cipher_algorithm c_algo,
+ enum rte_crypto_cipher_operation cop,
+ enum rte_crypto_auth_algorithm a_algo,
+ enum rte_crypto_auth_operation aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first);
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *req,
+ enum rte_crypto_aead_algorithm ae_algo,
+ enum rte_crypto_aead_operation aeop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv,
+ struct fsattr *aad, struct fsattr *digest);
+
+#endif /* _BCMFS_SYM_ENGINE_H_ */
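The fsattr accessors declared above are simple field macros. A host-side sketch of how a caller might describe a buffer with them (the buffer and the physical address below are illustrative values, not taken from the patch):

```c
#include <stdint.h>
#include <assert.h>

/* Stand-in copy of struct fsattr and its accessors from
 * bcmfs_sym_engine.h, reproduced here for illustration only. */
struct fsattr {
	void *va;
	uint64_t pa;
	uint64_t sz;
};

#define fsattr_va(__ptr) ((__ptr)->va)
#define fsattr_pa(__ptr) ((__ptr)->pa)
#define fsattr_sz(__ptr) ((__ptr)->sz)

/* Describe a buffer: virtual address for CPU-side memcpy, a
 * device-visible address for the hardware, and the length. */
static struct fsattr make_attr(void *va, uint64_t pa, uint64_t sz)
{
	struct fsattr a = { va, pa, sz };
	return a;
}
```

The request builders then read each attribute through the macros rather than touching the fields directly.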
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 381ca8ea48..568797b4fd 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -132,6 +132,12 @@ static void
spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
{
memset(sr, 0, sizeof(*sr));
+ sr->fptr = iova;
+ sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key);
+ sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key);
+ sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv);
+ sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest);
+ sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp);
}
static void
@@ -244,6 +250,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
uint16_t nb_ops)
{
int i, j;
+ int retval;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
struct bcmfs_sym_session *sess;
@@ -273,6 +280,11 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
/* save context */
qp->infl_msgs[i] = &sreq->msgs;
qp->infl_msgs[i]->ctx = (void *)sreq;
+
+ /* pre process the request crypto h/w acceleration */
+ retval = bcmfs_process_sym_crypto_op(ops[i], sess, sreq);
+ if (unlikely(retval < 0))
+ goto enqueue_err;
}
/* Send burst request to hw QP */
enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
@@ -289,6 +301,17 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
return enq;
}
+static void bcmfs_sym_set_request_status(struct rte_crypto_op *op,
+ struct bcmfs_sym_request *out)
+{
+ if (*out->resp == BCMFS_SYM_RESPONSE_SUCCESS)
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ else if (*out->resp == BCMFS_SYM_RESPONSE_HASH_TAG_ERROR)
+ op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+ else
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static uint16_t
bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
struct rte_crypto_op **ops,
@@ -308,6 +331,9 @@ bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
for (i = 0; i < deq; i++) {
sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+ /* set the status based on the response from the crypto h/w */
+ bcmfs_sym_set_request_status(sreq->op, sreq);
+
ops[pkts++] = sreq->op;
rte_mempool_put(qp->sr_mp, sreq);
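The spu_req_init() change earlier in this file derives each member's device address from the request's base IOVA plus offsetof() into the request struct. The pattern can be checked on the host with an ordinary struct; the toy layout below is ours, chosen so the members are naturally aligned and the offsets are padding-free:

```c
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

typedef uint64_t iova_t; /* stand-in for rte_iova_t */

/* Toy request laid out like bcmfs_sym_request: fixed metadata first,
 * then per-request scratch buffers. */
struct toy_request {
	uint64_t fmd[4];         /* 32 bytes of fixed metadata */
	uint8_t  cipher_key[64]; /* starts at offset 32 */
	uint8_t  iv[16];         /* starts at offset 96 */
};

/* Each member's device-visible address is base + offsetof(), which is
 * valid because the whole request sits in one contiguous mapping. */
static iova_t member_iova(iova_t base, size_t off)
{
	return base + off;
}
```

This is why spu_req_init() only needs the single iova argument: every pointer the hardware consumes is an offset into the same object.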
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
index 0f0b051f1e..e53c50adc1 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_req.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -6,13 +6,53 @@
#ifndef _BCMFS_SYM_REQ_H_
#define _BCMFS_SYM_REQ_H_
+#include <rte_cryptodev.h>
+
#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+
+/* Fixed SPU2 Metadata */
+struct spu2_fmd {
+ uint64_t ctrl0;
+ uint64_t ctrl1;
+ uint64_t ctrl2;
+ uint64_t ctrl3;
+};
/*
* This structure holds the supporting data required to process a
* rte_crypto_op
*/
struct bcmfs_sym_request {
+ /* spu2 engine related data */
+ struct spu2_fmd fmd;
+ /* cipher key */
+ uint8_t cipher_key[BCMFS_MAX_KEY_SIZE];
+ /* auth key */
+ uint8_t auth_key[BCMFS_MAX_KEY_SIZE];
+ /* initialization vector (IV) */
+ uint8_t iv[BCMFS_MAX_IV_SIZE];
+ /* digest data output from crypto h/w */
+ uint8_t digest[BCMFS_MAX_DIGEST_SIZE];
+ /* 2-Bytes response from crypto h/w */
+ uint8_t resp[2];
+ /*
+ * Below are all iovas for above members
+ * from top
+ */
+ /* iova for fmd */
+ rte_iova_t fptr;
+ /* iova for cipher key */
+ rte_iova_t cptr;
+ /* iova for auth key */
+ rte_iova_t aptr;
+ /* iova for IV */
+ rte_iova_t iptr;
+ /* iova for digest */
+ rte_iova_t dptr;
+ /* iova for response */
+ rte_iova_t rptr;
+
/* bcmfs qp message for h/w queues to process */
struct bcmfs_qp_message msgs;
/* crypto op */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 2e86c733e1..7aa0f05dbd 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -14,5 +14,7 @@ sources = files(
'hw/bcmfs_rm_common.c',
'bcmfs_sym_pmd.c',
'bcmfs_sym_capabilities.c',
- 'bcmfs_sym_session.c'
+ 'bcmfs_sym_session.c',
+ 'bcmfs_sym.c',
+ 'bcmfs_sym_engine.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v4 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (6 preceding siblings ...)
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 7/8] crypto/bcmfs: add crypto HW module Vikas Gupta
@ 2020-10-07 16:45 ` Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 16:45 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add global test suite for bcmfs crypto pmd
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
app/test/test_cryptodev.c | 17 +++++++++++++++++
app/test/test_cryptodev.h | 1 +
doc/guides/cryptodevs/bcmfs.rst | 11 +++++++++++
3 files changed, 29 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c1..9157115ab3 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -13041,6 +13041,22 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_cryptodev_bcmfs(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_BCMFS_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "BCMFS PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_BCMFS is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
+ return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
@@ -13063,3 +13079,4 @@ REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
+REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e0552..c58126368c 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -70,6 +70,7 @@
#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
/**
* Write (spread) data from buffer to mbuf data
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index f7e15f4cfb..5a7eb23c0f 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -96,3 +96,14 @@ Limitations
* Only supports the session-oriented API implementation (session-less APIs are not supported).
* CCM is not supported on Broadcom`s SoCs having FlexSparc4 unit.
+
+Testing
+-------
+
+The symmetric crypto operations of the BCMFS crypto PMD may be verified by
+running the test application:
+
+.. code-block:: console
+
+ ./test
+ RTE>>cryptodev_bcmfs_autotest
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (7 preceding siblings ...)
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-10-07 17:18 ` Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
` (8 more replies)
8 siblings, 9 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta
Hi,
This patchset contains support for Crypto offload on Broadcom’s
Stingray/Stingray2 SoCs having FlexSparc unit.
BCMFS is an acronym for the Broadcom FlexSparc device used in this patchset.
The patchset progressively adds the major modules as follows:
a) Detection of the platform device based on the known registered platforms, and attaching it with VFIO.
b) Creation of the crypto device.
c) Addition of session handling.
d) Addition of the crypto device into the cryptodev test framework.
The patchset has been tested on the above-mentioned SoCs.
Regards,
Vikas
Changes from v0->v1:
Updated the ABI version in file .../crypto/bcmfs/rte_pmd_bcmfs_version.map
Changes from v1->v2:
- Fix compilation errors and coding style warnings.
- Use global test crypto suite suggested by Adam Dybkowski
Changes from v2->v3:
- Release notes updated.
- bcmfs.rst updated with missing information about installation.
- Review comments from patch1 from v2 addressed.
- Updated description about dependency of PMD driver on VFIO_PRESENT.
- Fixed typo in bcmfs_hw_defs.h (comments on patch3 from v2 addressed)
- Comments on patch6 from v2 addressed and capability list is fixed.
Removed redundant enums and macros from the file
bcmfs_sym_defs.h and updated other impacted APIs accordingly.
patch7 too is updated due to removal of redundancy.
Thanks! to Akhil for pointing out the redundancy.
- Fix minor code style issues in few files as part of review.
Changes from v3->v4:
- Code style issues fixed.
- Change of barrier API in bcmfs4_rm.c and bcmfs5_rm.c
Changes from v4->v5:
- Change of barrier API in bcmfs4_rm.c. Missed one in v4
Vikas Gupta (8):
crypto/bcmfs: add BCMFS driver
crypto/bcmfs: add vfio support
crypto/bcmfs: add queue pair management API
crypto/bcmfs: add HW queue pair operations
crypto/bcmfs: create a symmetric cryptodev
crypto/bcmfs: add session handling and capabilities
crypto/bcmfs: add crypto HW module
crypto/bcmfs: add crypto pmd into cryptodev test
MAINTAINERS | 7 +
app/test/test_cryptodev.c | 17 +
app/test/test_cryptodev.h | 1 +
doc/guides/cryptodevs/bcmfs.rst | 109 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 +
doc/guides/cryptodevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 332 +++++
drivers/crypto/bcmfs/bcmfs_device.h | 76 ++
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 32 +
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +
drivers/crypto/bcmfs/bcmfs_qp.c | 383 ++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 142 ++
drivers/crypto/bcmfs/bcmfs_sym.c | 289 +++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 +++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 34 +
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 1155 +++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 115 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 426 ++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 62 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 282 ++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 109 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 743 +++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 ++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 51 +
drivers/crypto/bcmfs/meson.build | 20 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 1 +
35 files changed, 6253 insertions(+)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
@ 2020-10-07 17:18 ` Vikas Gupta
2020-10-15 0:50 ` Thomas Monjalon
2020-10-15 0:55 ` Thomas Monjalon
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 2/8] crypto/bcmfs: add vfio support Vikas Gupta
` (7 subsequent siblings)
8 siblings, 2 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add the Broadcom FlexSparc (FS) device creation driver, which registers
as a vdev and creates a device. Add APIs for logs, supporting
documentation and a MAINTAINERS file entry.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
MAINTAINERS | 7 +
doc/guides/cryptodevs/bcmfs.rst | 51 ++++
doc/guides/cryptodevs/index.rst | 1 +
doc/guides/rel_notes/release_20_11.rst | 5 +
drivers/crypto/bcmfs/bcmfs_device.c | 257 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_device.h | 43 +++
drivers/crypto/bcmfs/bcmfs_logs.c | 38 +++
drivers/crypto/bcmfs/bcmfs_logs.h | 34 +++
drivers/crypto/bcmfs/meson.build | 10 +
.../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 +
drivers/crypto/meson.build | 1 +
11 files changed, 450 insertions(+)
create mode 100644 doc/guides/cryptodevs/bcmfs.rst
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h
create mode 100644 drivers/crypto/bcmfs/meson.build
create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index c0abbe0fc8..49c015ebbe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1081,6 +1081,13 @@ F: drivers/crypto/zuc/
F: doc/guides/cryptodevs/zuc.rst
F: doc/guides/cryptodevs/features/zuc.ini
+Broadcom FlexSparc
+M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
+M: Vikas Gupta <vikas.gupta@broadcom.com>
+F: drivers/crypto/bcmfs/
+F: doc/guides/cryptodevs/bcmfs.rst
+F: doc/guides/cryptodevs/features/bcmfs.ini
Compression Drivers
-------------------
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
new file mode 100644
index 0000000000..6b68673df0
--- /dev/null
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -0,0 +1,51 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(C) 2020 Broadcom
+
+Broadcom FlexSparc Crypto Poll Mode Driver
+==========================================
+
+The FlexSparc crypto poll mode driver (BCMFS PMD) provides support for offloading
+cryptographic operations to the Broadcom SoCs having FlexSparc4/FlexSparc5 unit.
+Detailed information about SoCs can be found at `Broadcom Official Website
+<https://www.broadcom.com/products/ethernet-connectivity/network-adapters/smartnic>`__.
+
+Supported Broadcom SoCs
+-----------------------
+
+* Stingray
+* Stingray2
+
+Installation
+------------
+Information about kernel, rootfs and toolchain can be found at
+`Broadcom Official Website <https://www.broadcom.com/products/ethernet-connectivity
+/network-adapters/smartnic/stingray-software>`__.
+
+ .. Note::
+ To execute the BCMFS PMD, it must be built on a platform where the
+ VFIO_PRESENT flag is enabled in rte_vfio.h.
+
+The BCMFS crypto PMD may be compiled natively on a Stingray/Stingray2 platform or
+cross-compiled on an x86 platform. For example, the following commands can be
+used for cross-compiling on an x86 platform.
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ meson <dest-dir> --cross-file config/arm/arm64_stingray_linux_gcc
+ cd <dest-dir>
+ ninja
+
+Initialization
+--------------
+The supported platform devices should be present in the
+*/sys/bus/platform/devices/fs<version>/<dev_name>* path on the booted kernel.
+For the BCMFS PMD to use a device node, the node must be bound to the
+vfio-platform kernel module. For example, the following commands bind a
+device node to VFIO:
+
+.. code-block:: console
+
+ SETUP_SYSFS_DEV_NAME=67000000.crypto_mbox
+ io_device_name="vfio-platform"
+ echo $io_device_name > /sys/bus/platform/devices/${SETUP_SYSFS_DEV_NAME}/driver_override
+ echo ${SETUP_SYSFS_DEV_NAME} > /sys/bus/platform/drivers_probe
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a67ed5a282..279f56a002 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -13,6 +13,7 @@ Crypto Device Drivers
aesni_mb
aesni_gcm
armv8
+ bcmfs
caam_jr
ccp
dpaa2_sec
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 73ac08fb0e..8643330321 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -185,3 +185,8 @@ Tested Platforms
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=======================================================
+
+* **Added Broadcom BCMFS symmetric crypto PMD.**
+
+ Added a symmetric crypto PMD for Broadcom FlexSparc crypto units.
+ See :doc:`../cryptodevs/bcmfs` guide for more details on this new PMD.
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
new file mode 100644
index 0000000000..f1050ff112
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <dirent.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+
+struct bcmfs_device_attr {
+ const char name[BCMFS_MAX_PATH_LEN];
+ const char suffix[BCMFS_DEV_NAME_LEN];
+ const enum bcmfs_device_type type;
+ const uint32_t offset;
+ const uint32_t version;
+};
+
+/* BCMFS supported devices */
+static struct bcmfs_device_attr dev_table[] = {
+ {
+ .name = "fs4",
+ .suffix = "crypto_mbox",
+ .type = BCMFS_SYM_FS4,
+ .offset = 0,
+ .version = BCMFS_SYM_FS4_VERSION
+ },
+ {
+ .name = "fs5",
+ .suffix = "mbox",
+ .type = BCMFS_SYM_FS5,
+ .offset = 0,
+ .version = BCMFS_SYM_FS5_VERSION
+ },
+ {
+ /* sentinel */
+ }
+};
+
+TAILQ_HEAD(fsdev_list, bcmfs_device);
+static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
+
+static struct bcmfs_device *
+fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
+ char *dirpath,
+ char *devname,
+ enum bcmfs_device_type dev_type __rte_unused)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = rte_calloc(__func__, 1, sizeof(*fsdev), 0);
+ if (!fsdev)
+ return NULL;
+
+ if (strlen(dirpath) >= sizeof(fsdev->dirname)) {
+ BCMFS_LOG(ERR, "dir path name is too long");
+ goto cleanup;
+ }
+
+ if (strlen(devname) >= sizeof(fsdev->name)) {
+ BCMFS_LOG(ERR, "devname is too long");
+ goto cleanup;
+ }
+
+ strcpy(fsdev->dirname, dirpath);
+ strcpy(fsdev->name, devname);
+
+ fsdev->vdev = vdev;
+
+ TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
+
+ return fsdev;
+
+cleanup:
+ rte_free(fsdev);
+
+ return NULL;
+}
+
+static struct bcmfs_device *
+find_fsdev(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ TAILQ_FOREACH(fsdev, &fsdev_list, next)
+ if (fsdev->vdev == vdev)
+ return fsdev;
+
+ return NULL;
+}
+
+static void
+fsdev_release(struct bcmfs_device *fsdev)
+{
+ if (fsdev == NULL)
+ return;
+
+ TAILQ_REMOVE(&fsdev_list, fsdev, next);
+ rte_free(fsdev);
+}
+
+static int
+cmprator(const void *a, const void *b)
+{
+ const uint32_t x = *(const uint32_t *)a;
+ const uint32_t y = *(const uint32_t *)b;
+
+ /* compare instead of subtracting to avoid integer overflow */
+ return (x > y) - (x < y);
+}
+
+static int
+fsdev_find_all_devs(const char *path, const char *search,
+ uint32_t *devs)
+{
+ DIR *dir;
+ struct dirent *entry;
+ int count = 0;
+ char addr[BCMFS_MAX_NODES][BCMFS_MAX_PATH_LEN];
+ int i;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return 0;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (strstr(entry->d_name, search)) {
+ strlcpy(addr[count], entry->d_name,
+ BCMFS_MAX_PATH_LEN);
+ count++;
+ }
+ }
+
+ closedir(dir);
+
+ for (i = 0 ; i < count; i++)
+ devs[i] = (uint32_t)strtoul(addr[i], NULL, 16);
+ /* sort the devices based on IO addresses */
+ qsort(devs, count, sizeof(uint32_t), cmprator);
+
+ return count;
+}
+
+static bool
+fsdev_find_sub_dir(char *path, const char *search, char *output)
+{
+ DIR *dir;
+ struct dirent *entry;
+
+ dir = opendir(path);
+ if (dir == NULL) {
+ BCMFS_LOG(ERR, "Unable to open directory");
+ return false;
+ }
+
+ while ((entry = readdir(dir)) != NULL) {
+ if (!strcmp(entry->d_name, search)) {
+ strlcpy(output, entry->d_name, BCMFS_MAX_PATH_LEN);
+ closedir(dir);
+ return true;
+ }
+ }
+
+ closedir(dir);
+
+ return false;
+}
+
+
+static int
+bcmfs_vdev_probe(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+ char top_dirpath[BCMFS_MAX_PATH_LEN];
+ char sub_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirpath[BCMFS_MAX_PATH_LEN];
+ char out_dirname[BCMFS_MAX_PATH_LEN];
+ uint32_t fsdev_dev[BCMFS_MAX_NODES];
+ enum bcmfs_device_type dtype;
+ int i = 0;
+ int dev_idx;
+ int count = 0;
+ bool found = false;
+
+ sprintf(top_dirpath, "%s", SYSFS_BCM_PLTFORM_DEVICES);
+ while (strlen(dev_table[i].name)) {
+ found = fsdev_find_sub_dir(top_dirpath,
+ dev_table[i].name,
+ sub_dirpath);
+ if (found)
+ break;
+ i++;
+ }
+ if (!found) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ dev_idx = i;
+ dtype = dev_table[i].type;
+
+ snprintf(out_dirpath, sizeof(out_dirpath), "%s/%s",
+ top_dirpath, sub_dirpath);
+ count = fsdev_find_all_devs(out_dirpath,
+ dev_table[dev_idx].suffix,
+ fsdev_dev);
+ if (!count) {
+ BCMFS_LOG(ERR, "No supported bcmfs dev found");
+ return -ENODEV;
+ }
+
+ i = 0;
+ while (count) {
+ /* format the device name present in the path */
+ snprintf(out_dirname, sizeof(out_dirname), "%x.%s",
+ fsdev_dev[i], dev_table[dev_idx].suffix);
+ fsdev = fsdev_allocate_one_dev(vdev, out_dirpath,
+ out_dirname, dtype);
+ if (!fsdev) {
+ count--;
+ i++;
+ continue;
+ }
+ break;
+ }
+ if (fsdev == NULL) {
+ BCMFS_LOG(ERR, "All supported devs busy");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int
+bcmfs_vdev_remove(struct rte_vdev_device *vdev)
+{
+ struct bcmfs_device *fsdev;
+
+ fsdev = find_fsdev(vdev);
+ if (fsdev == NULL)
+ return -ENODEV;
+
+ fsdev_release(fsdev);
+ return 0;
+}
+
+/* Register with vdev */
+static struct rte_vdev_driver rte_bcmfs_pmd = {
+ .probe = bcmfs_vdev_probe,
+ .remove = bcmfs_vdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(bcmfs_pmd,
+ rte_bcmfs_pmd);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
new file mode 100644
index 0000000000..1a4d0cf365
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEVICE_H_
+#define _BCMFS_DEVICE_H_
+
+#include <sys/queue.h>
+
+#include <rte_bus_vdev.h>
+
+#include "bcmfs_logs.h"
+
+/* max number of dev nodes */
+#define BCMFS_MAX_NODES 4
+#define BCMFS_MAX_PATH_LEN 512
+#define BCMFS_DEV_NAME_LEN 64
+
+/* Path for BCM-Platform device directory */
+#define SYSFS_BCM_PLTFORM_DEVICES "/sys/bus/platform/devices"
+
+#define BCMFS_SYM_FS4_VERSION 0x76303031
+#define BCMFS_SYM_FS5_VERSION 0x76303032
+
+/* Supported devices */
+enum bcmfs_device_type {
+ BCMFS_SYM_FS4,
+ BCMFS_SYM_FS5,
+ BCMFS_UNKNOWN
+};
+
+struct bcmfs_device {
+ TAILQ_ENTRY(bcmfs_device) next;
+ /* Directory path for vfio */
+ char dirname[BCMFS_MAX_PATH_LEN];
+ /* BCMFS device name */
+ char name[BCMFS_DEV_NAME_LEN];
+ /* Parent vdev */
+ struct rte_vdev_device *vdev;
+};
+
+#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.c b/drivers/crypto/bcmfs/bcmfs_logs.c
new file mode 100644
index 0000000000..86f4ff3b53
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_hexdump.h>
+
+#include "bcmfs_logs.h"
+
+int bcmfs_conf_logtype;
+int bcmfs_dp_logtype;
+
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *title,
+ const void *buf, unsigned int len)
+{
+ if (level > rte_log_get_global_level())
+ return 0;
+ if (level > (uint32_t)(rte_log_get_level(logtype)))
+ return 0;
+
+ rte_hexdump(rte_log_get_stream(), title, buf, len);
+ return 0;
+}
+
+RTE_INIT(bcmfs_device_init_log)
+{
+ /* Configuration and general logs */
+ bcmfs_conf_logtype = rte_log_register("pmd.bcmfs_config");
+ if (bcmfs_conf_logtype >= 0)
+ rte_log_set_level(bcmfs_conf_logtype, RTE_LOG_NOTICE);
+
+ /* data-path logs */
+ bcmfs_dp_logtype = rte_log_register("pmd.bcmfs_fp");
+ if (bcmfs_dp_logtype >= 0)
+ rte_log_set_level(bcmfs_dp_logtype, RTE_LOG_NOTICE);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_logs.h b/drivers/crypto/bcmfs/bcmfs_logs.h
new file mode 100644
index 0000000000..c03a49b75c
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_logs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_LOGS_H_
+#define _BCMFS_LOGS_H_
+
+#include <rte_log.h>
+
+extern int bcmfs_conf_logtype;
+extern int bcmfs_dp_logtype;
+
+#define BCMFS_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_conf_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, bcmfs_dp_logtype, \
+ "%s(): " fmt "\n", __func__, ## args)
+
+#define BCMFS_DP_HEXDUMP_LOG(level, title, buf, len) \
+ bcmfs_hexdump_log(RTE_LOG_ ## level, bcmfs_dp_logtype, title, buf, len)
+
+/**
+ * bcmfs_hexdump_log() - Dump out memory in a hex dump format.
+ *
+ * The message will be sent to the stream used by the rte_log infrastructure.
+ */
+int
+bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *heading,
+ const void *buf, unsigned int len);
+
+#endif /* _BCMFS_LOGS_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
new file mode 100644
index 0000000000..a4bdd8ee5d
--- /dev/null
+++ b/drivers/crypto/bcmfs/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2020 Broadcom
+# All rights reserved.
+#
+
+deps += ['eal', 'bus_vdev']
+sources = files(
+ 'bcmfs_logs.c',
+ 'bcmfs_device.c'
+ )
diff --git a/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
new file mode 100644
index 0000000000..299ae632da
--- /dev/null
+++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
@@ -0,0 +1,3 @@
+DPDK_21.0 {
+ local: *;
+};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index a2423507ad..93c2968acb 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -8,6 +8,7 @@ endif
drivers = ['aesni_gcm',
'aesni_mb',
'armv8',
+ 'bcmfs',
'caam_jr',
'ccp',
'dpaa_sec',
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v5 2/8] crypto/bcmfs: add vfio support
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
@ 2020-10-07 17:18 ` Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 3/8] crypto/bcmfs: add queue pair management API Vikas Gupta
` (6 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add VFIO support for the BCMFS PMD.
The BCMFS PMD functionality depends on the VFIO_PRESENT flag,
which is enabled in rte_vfio.h.
If this flag is not enabled on the compiling platform, the driver
silently returns with an error when executed.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 5 ++
drivers/crypto/bcmfs/bcmfs_device.h | 6 ++
drivers/crypto/bcmfs/bcmfs_vfio.c | 107 ++++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_vfio.h | 17 +++++
drivers/crypto/bcmfs/meson.build | 3 +-
5 files changed, 137 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index f1050ff112..0ccddea202 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -12,6 +12,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -72,6 +73,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
fsdev->vdev = vdev;
+ /* attach to VFIO */
+ if (bcmfs_attach_vfio(fsdev))
+ goto cleanup;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 1a4d0cf365..f99d57d4bd 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -38,6 +38,12 @@ struct bcmfs_device {
char name[BCMFS_DEV_NAME_LEN];
/* Parent vdev */
struct rte_vdev_device *vdev;
+ /* vfio handle */
+ int vfio_dev_fd;
+ /* mapped address */
+ uint8_t *mmap_addr;
+ /* mapped size */
+ uint32_t mmap_size;
};
#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
new file mode 100644
index 0000000000..dc2def580f
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <errno.h>
+#include <sys/mman.h>
+#include <sys/ioctl.h>
+
+#include <rte_vfio.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
+
+#ifdef VFIO_PRESENT
+static int
+vfio_map_dev_obj(const char *path, const char *dev_obj,
+ uint32_t *size, void **addr, int *dev_fd)
+{
+ int32_t ret;
+ struct vfio_group_status status = { .argsz = sizeof(status) };
+
+ struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
+ struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
+
+ ret = rte_vfio_setup_device(path, dev_obj, dev_fd, &d_info);
+ if (ret) {
+ BCMFS_LOG(ERR, "VFIO Setting for device failed");
+ return ret;
+ }
+
+ /* get the device region info */
+ ret = ioctl(*dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "Error in VFIO getting REGION_INFO");
+ goto map_failed;
+ }
+
+ *addr = mmap(NULL, reg_info.size,
+ PROT_WRITE | PROT_READ, MAP_SHARED,
+ *dev_fd, reg_info.offset);
+ if (*addr == MAP_FAILED) {
+ BCMFS_LOG(ERR, "Error mapping region (errno = %d)", errno);
+ ret = errno;
+ goto map_failed;
+ }
+ *size = reg_info.size;
+
+ return 0;
+
+map_failed:
+ rte_vfio_release_device(path, dev_obj, *dev_fd);
+
+ return ret;
+}
+
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+ int vfio_dev_fd;
+ void *v_addr = NULL;
+ uint32_t size = 0;
+
+ ret = vfio_map_dev_obj(dev->dirname, dev->name,
+ &size, &v_addr, &vfio_dev_fd);
+ if (ret)
+ return -1;
+
+ dev->mmap_size = size;
+ dev->mmap_addr = v_addr;
+ dev->vfio_dev_fd = vfio_dev_fd;
+
+ return 0;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev)
+{
+ int ret;
+
+ if (dev == NULL)
+ return;
+
+ /* unmap the addr */
+ munmap(dev->mmap_addr, dev->mmap_size);
+ /* release the device */
+ ret = rte_vfio_release_device(dev->dirname, dev->name,
+ dev->vfio_dev_fd);
+ if (ret < 0) {
+ BCMFS_LOG(ERR, "cannot release device");
+ return;
+ }
+}
+#else
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev __rte_unused)
+{
+ return -1;
+}
+
+void
+bcmfs_release_vfio(struct bcmfs_device *dev __rte_unused)
+{
+}
+#endif
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h
new file mode 100644
index 0000000000..d0fdf6483f
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_VFIO_H_
+#define _BCMFS_VFIO_H_
+
+/* Attach the bcmfs device to vfio */
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev);
+
+/* Release the bcmfs device from vfio */
+void
+bcmfs_release_vfio(struct bcmfs_device *dev);
+
+#endif /* _BCMFS_VFIO_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index a4bdd8ee5d..fd39eba20e 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -6,5 +6,6 @@
deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
- 'bcmfs_device.c'
+ 'bcmfs_device.c',
+ 'bcmfs_vfio.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v5 3/8] crypto/bcmfs: add queue pair management API
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 2/8] crypto/bcmfs: add vfio support Vikas Gupta
@ 2020-10-07 17:18 ` Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 4/8] crypto/bcmfs: add HW queue pair operations Vikas Gupta
` (5 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair management APIs, which will be used by the crypto device
to manage h/w queues. A bcmfs device structure owns multiple queue pairs
based on the mapped address range allocated to it.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 4 +
drivers/crypto/bcmfs/bcmfs_device.h | 5 +
drivers/crypto/bcmfs/bcmfs_hw_defs.h | 32 +++
drivers/crypto/bcmfs/bcmfs_qp.c | 345 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_qp.h | 122 ++++++++++
drivers/crypto/bcmfs/meson.build | 3 +-
6 files changed, 510 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 0ccddea202..a01a5c79d5 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -12,6 +12,7 @@
#include "bcmfs_device.h"
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
struct bcmfs_device_attr {
@@ -77,6 +78,9 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
if (bcmfs_attach_vfio(fsdev))
goto cleanup;
+ /* Maximum number of QPs supported */
+ fsdev->max_hw_qps = fsdev->mmap_size / BCMFS_HW_QUEUE_IO_ADDR_LEN;
+
TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index f99d57d4bd..dede5b82dc 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -11,6 +11,7 @@
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
/* max number of dev nodes */
#define BCMFS_MAX_NODES 4
@@ -44,6 +45,10 @@ struct bcmfs_device {
uint8_t *mmap_addr;
/* mapped size */
uint32_t mmap_size;
+ /* max number of h/w queue pairs detected */
+ uint16_t max_hw_qps;
+ /* current qpairs in use */
+ struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
};
#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
new file mode 100644
index 0000000000..7d5bb5d8fe
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_HW_DEFS_H_
+#define _BCMFS_HW_DEFS_H_
+
+#include <rte_atomic.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#define FS_RING_REGS_SIZE 0x10000
+#define FS_RING_DESC_SIZE 8
+#define FS_RING_BD_ALIGN_ORDER 12
+#define FS_RING_BD_DESC_PER_REQ 32
+#define FS_RING_CMPL_ALIGN_ORDER 13
+#define FS_RING_CMPL_SIZE (1024 * FS_RING_DESC_SIZE)
+#define FS_RING_MAX_REQ_COUNT 1024
+#define FS_RING_PAGE_SHFT 12
+#define FS_RING_PAGE_SIZE BIT(FS_RING_PAGE_SHFT)
+
+/* Minimum and maximum number of requests supported */
+#define FS_RM_MAX_REQS 4096
+#define FS_RM_MIN_REQS 32
+
+#endif /* _BCMFS_HW_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
new file mode 100644
index 0000000000..864e7bb746
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include <rte_atomic.h>
+#include <rte_bitmap.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_hw_defs.h"
+
+/* TX or submission queue name */
+static const char *txq_name = "tx";
+/* Completion or receive queue name */
+static const char *cmplq_name = "cmpl";
+
+/* Helper function */
+static int
+bcmfs_qp_check_queue_alignment(uint64_t phys_addr,
+ uint32_t align)
+{
+ if (((align - 1) & phys_addr) != 0)
+ return -EINVAL;
+ return 0;
+}
+
+static void
+bcmfs_queue_delete(struct bcmfs_queue *queue,
+ uint16_t queue_pair_id)
+{
+ const struct rte_memzone *mz;
+ int status = 0;
+
+ if (queue == NULL) {
+ BCMFS_LOG(DEBUG, "Invalid queue");
+ return;
+ }
+ BCMFS_LOG(DEBUG, "Free ring %d type %d, memzone: %s",
+ queue_pair_id, queue->q_type, queue->memz_name);
+
+ mz = rte_memzone_lookup(queue->memz_name);
+ if (mz != NULL) {
+ /* Write an unused pattern to the queue memory. */
+ memset(queue->base_addr, 0x9B, queue->queue_size);
+ status = rte_memzone_free(mz);
+ if (status != 0)
+ BCMFS_LOG(ERR, "Error %d on freeing queue %s",
+ status, queue->memz_name);
+ } else {
+ BCMFS_LOG(DEBUG, "queue %s doesn't exist",
+ queue->memz_name);
+ }
+}
+
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+ int socket_id, unsigned int align)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(queue_name);
+ if (mz != NULL) {
+ if (((size_t)queue_size <= mz->len) &&
+ (socket_id == SOCKET_ID_ANY ||
+ socket_id == mz->socket_id)) {
+ BCMFS_LOG(DEBUG, "re-use memzone already "
+ "allocated for %s", queue_name);
+ return mz;
+ }
+
+ BCMFS_LOG(ERR, "Incompatible memzone already "
+ "allocated %s, size %u, socket %d. "
+ "Requested size %u, socket %u",
+ queue_name, (uint32_t)mz->len,
+ mz->socket_id, queue_size, socket_id);
+ return NULL;
+ }
+
+ BCMFS_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+ queue_name, queue_size, socket_id);
+ return rte_memzone_reserve_aligned(queue_name, queue_size,
+ socket_id, RTE_MEMZONE_IOVA_CONTIG, align);
+}
+
+static int
+bcmfs_queue_create(struct bcmfs_queue *queue,
+ struct bcmfs_qp_config *qp_conf,
+ uint16_t queue_pair_id,
+ enum bcmfs_queue_type qtype)
+{
+ const struct rte_memzone *qp_mz;
+ char q_name[16];
+ unsigned int align;
+ uint32_t queue_size_bytes;
+ int ret;
+
+ if (qtype == BCMFS_RM_TXQ) {
+ strlcpy(q_name, txq_name, sizeof(q_name));
+ align = 1U << FS_RING_BD_ALIGN_ORDER;
+ queue_size_bytes = qp_conf->nb_descriptors *
+ qp_conf->max_descs_req * FS_RING_DESC_SIZE;
+ /* Round the queue size up to a multiple of 4K ring pages */
+ queue_size_bytes = RTE_ALIGN_MUL_CEIL(queue_size_bytes,
+ FS_RING_PAGE_SIZE);
+ } else if (qtype == BCMFS_RM_CPLQ) {
+ strlcpy(q_name, cmplq_name, sizeof(q_name));
+ align = 1U << FS_RING_CMPL_ALIGN_ORDER;
+
+ /*
+ * Memory size for cmpl ring + MSI.
+ * The MSI area is allocated here as well, so reserve twice the size.
+ */
+ queue_size_bytes = 2 * FS_RING_CMPL_SIZE;
+ } else {
+ BCMFS_LOG(ERR, "Invalid queue selection");
+ return -EINVAL;
+ }
+
+ queue->q_type = qtype;
+
+ /*
+ * Allocate a memzone for the queue - create a unique name.
+ */
+ snprintf(queue->memz_name, sizeof(queue->memz_name),
+ "%s_%d_%s_%d_%s", "bcmfs", qtype, "qp_mem",
+ queue_pair_id, q_name);
+ qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes,
+ 0, align);
+ if (qp_mz == NULL) {
+ BCMFS_LOG(ERR, "Failed to allocate ring memzone");
+ return -ENOMEM;
+ }
+
+ if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
+ BCMFS_LOG(ERR, "Invalid alignment on queue create "
+ " 0x%" PRIx64 "\n",
+ qp_mz->iova);
+ ret = -EFAULT;
+ goto queue_create_err;
+ }
+
+ queue->base_addr = (char *)qp_mz->addr;
+ queue->base_phys_addr = qp_mz->iova;
+ queue->queue_size = queue_size_bytes;
+
+ return 0;
+
+queue_create_err:
+ rte_memzone_free(qp_mz);
+
+ return ret;
+}
+
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr)
+{
+ struct bcmfs_qp *qp = *qp_addr;
+
+ if (qp == NULL) {
+ BCMFS_LOG(DEBUG, "qp already freed");
+ return 0;
+ }
+
+ /* Don't free memory if there are still responses to be processed */
+ if ((qp->stats.enqueued_count - qp->stats.dequeued_count) == 0) {
+ /* Stop the h/w ring */
+ qp->ops->stopq(qp);
+ /* Delete the queue pairs */
+ bcmfs_queue_delete(&qp->tx_q, qp->qpair_id);
+ bcmfs_queue_delete(&qp->cmpl_q, qp->qpair_id);
+ } else {
+ return -EAGAIN;
+ }
+
+ rte_bitmap_reset(qp->ctx_bmp);
+ rte_free(qp->ctx_bmp_mem);
+ rte_free(qp->ctx_pool);
+
+ rte_free(qp);
+ *qp_addr = NULL;
+
+ return 0;
+}
+
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *qp_conf)
+{
+ struct bcmfs_qp *qp;
+ uint32_t bmp_size;
+ uint32_t nb_descriptors = qp_conf->nb_descriptors;
+ uint16_t i;
+ int rc;
+
+ if (nb_descriptors < FS_RM_MIN_REQS) {
+ BCMFS_LOG(ERR, "Can't create qp for %u descriptors",
+ nb_descriptors);
+ return -EINVAL;
+ }
+
+ if (nb_descriptors > FS_RM_MAX_REQS)
+ nb_descriptors = FS_RM_MAX_REQS;
+
+ if (qp_conf->iobase == NULL) {
+ BCMFS_LOG(ERR, "IO config space null");
+ return -EINVAL;
+ }
+
+ qp = rte_zmalloc_socket("BCM FS PMD qp metadata",
+ sizeof(*qp), RTE_CACHE_LINE_SIZE,
+ qp_conf->socket_id);
+ if (qp == NULL) {
+ BCMFS_LOG(ERR, "Failed to alloc mem for qp struct");
+ return -ENOMEM;
+ }
+
+ qp->qpair_id = queue_pair_id;
+ qp->ioreg = qp_conf->iobase;
+ qp->nb_descriptors = nb_descriptors;
+
+ qp->stats.enqueued_count = 0;
+ qp->stats.dequeued_count = 0;
+
+ rc = bcmfs_queue_create(&qp->tx_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_TXQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Tx queue create failed queue_pair_id %u",
+ queue_pair_id);
+ goto create_err;
+ }
+
+ rc = bcmfs_queue_create(&qp->cmpl_q, qp_conf, qp->qpair_id,
+ BCMFS_RM_CPLQ);
+ if (rc) {
+ BCMFS_LOG(ERR, "Cmpl queue create failed queue_pair_id= %u",
+ queue_pair_id);
+ goto q_create_err;
+ }
+
+ /* ctx saving bitmap */
+ bmp_size = rte_bitmap_get_memory_footprint(nb_descriptors);
+
+ /* Allocate memory for bitmap */
+ qp->ctx_bmp_mem = rte_zmalloc("ctx_bmp_mem", bmp_size,
+ RTE_CACHE_LINE_SIZE);
+ if (qp->ctx_bmp_mem == NULL) {
+ rc = -ENOMEM;
+ goto qp_create_err;
+ }
+
+ /* Initialize pool resource bitmap array */
+ qp->ctx_bmp = rte_bitmap_init(nb_descriptors, qp->ctx_bmp_mem,
+ bmp_size);
+ if (qp->ctx_bmp == NULL) {
+ rc = -EINVAL;
+ goto bmap_mem_free;
+ }
+
+ /* Mark all pools available */
+ for (i = 0; i < nb_descriptors; i++)
+ rte_bitmap_set(qp->ctx_bmp, i);
+
+ /* Allocate memory for context */
+ qp->ctx_pool = rte_zmalloc("qp_ctx_pool",
+ sizeof(unsigned long) *
+ nb_descriptors, 0);
+ if (qp->ctx_pool == NULL) {
+ BCMFS_LOG(ERR, "ctx pool allocation failed");
+ rc = -ENOMEM;
+ goto bmap_free;
+ }
+
+ /* Start h/w ring */
+ qp->ops->startq(qp);
+
+ *qp_addr = qp;
+
+ return 0;
+
+bmap_free:
+ rte_bitmap_reset(qp->ctx_bmp);
+bmap_mem_free:
+ rte_free(qp->ctx_bmp_mem);
+qp_create_err:
+ bcmfs_queue_delete(&qp->cmpl_q, queue_pair_id);
+q_create_err:
+ bcmfs_queue_delete(&qp->tx_q, queue_pair_id);
+create_err:
+ rte_free(qp);
+
+ return rc;
+}
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ register uint32_t nb_ops_sent = 0;
+ uint16_t nb_ops_possible = nb_ops;
+ int ret;
+
+ if (unlikely(nb_ops == 0))
+ return 0;
+
+ while (nb_ops_sent != nb_ops_possible) {
+ ret = tmp_qp->ops->enq_one_req(qp, *ops);
+ if (ret != 0) {
+ tmp_qp->stats.enqueue_err_count++;
+ /* This message cannot be enqueued */
+ if (nb_ops_sent == 0)
+ return 0;
+ goto ring_db;
+ }
+
+ ops++;
+ nb_ops_sent++;
+ }
+
+ring_db:
+ tmp_qp->stats.enqueued_count += nb_ops_sent;
+ tmp_qp->ops->ring_db(tmp_qp);
+
+ return nb_ops_sent;
+}
+
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+ struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp;
+ uint32_t deq = tmp_qp->ops->dequeue(tmp_qp, ops, nb_ops);
+
+ tmp_qp->stats.dequeued_count += deq;
+
+ return deq;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
new file mode 100644
index 0000000000..52c487956e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_QP_H_
+#define _BCMFS_QP_H_
+
+#include <rte_memzone.h>
+
+/* Maximum number of h/w queues supported by device */
+#define BCMFS_MAX_HW_QUEUES 32
+
+/* H/W queue IO address space len */
+#define BCMFS_HW_QUEUE_IO_ADDR_LEN (64 * 1024)
+
+/* Maximum size of device ops name */
+#define BCMFS_HW_OPS_NAMESIZE 32
+
+enum bcmfs_queue_type {
+ /* TX or submission queue */
+ BCMFS_RM_TXQ,
+ /* Completion or receive queue */
+ BCMFS_RM_CPLQ
+};
+
+struct bcmfs_qp_stats {
+ /* Count of all operations enqueued */
+ uint64_t enqueued_count;
+ /* Count of all operations dequeued */
+ uint64_t dequeued_count;
+ /* Total error count on operations enqueued */
+ uint64_t enqueue_err_count;
+ /* Total error count on operations dequeued */
+ uint64_t dequeue_err_count;
+};
+
+struct bcmfs_qp_config {
+ /* Socket to allocate memory on */
+ int socket_id;
+ /* Mapped iobase for qp */
+ void *iobase;
+ /* Number of descriptors (requests) a h/w queue can accommodate */
+ uint16_t nb_descriptors;
+ /* Maximum number of h/w descriptors needed by a request */
+ uint16_t max_descs_req;
+};
+
+struct bcmfs_queue {
+ /* Base virt address */
+ void *base_addr;
+ /* Base iova */
+ rte_iova_t base_phys_addr;
+ /* Queue type */
+ enum bcmfs_queue_type q_type;
+ /* Queue size based on nb_descriptors and max_descs_req */
+ uint32_t queue_size;
+ union {
+ /* s/w pointer for tx h/w queue */
+ uint32_t tx_write_ptr;
+ /* s/w pointer for completion h/w queue */
+ uint32_t cmpl_read_ptr;
+ };
+ /* Memzone name */
+ char memz_name[RTE_MEMZONE_NAMESIZE];
+};
+
+struct bcmfs_qp {
+ /* Queue-pair ID */
+ uint16_t qpair_id;
+ /* Mapped IO address */
+ void *ioreg;
+ /* A TX queue */
+ struct bcmfs_queue tx_q;
+ /* A Completion queue */
+ struct bcmfs_queue cmpl_q;
+ /* Number of requests queue can accommodate */
+ uint32_t nb_descriptors;
+ /* Number of pending requests enqueued on the h/w queue */
+ uint16_t nb_pending_requests;
+ /* A pool which acts as a map from request-ID to virt address */
+ unsigned long *ctx_pool;
+ /* virt address for mem allocated for bitmap */
+ void *ctx_bmp_mem;
+ /* Bitmap */
+ struct rte_bitmap *ctx_bmp;
+ /* Associated stats */
+ struct bcmfs_qp_stats stats;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
+
+} __rte_cache_aligned;
+
+/* Structure defining h/w queue pair operations */
+struct bcmfs_hw_queue_pair_ops {
+ /* ops name */
+ char name[BCMFS_HW_OPS_NAMESIZE];
+ /* Enqueue an object */
+ int (*enq_one_req)(struct bcmfs_qp *qp, void *obj);
+ /* Ring doorbell */
+ void (*ring_db)(struct bcmfs_qp *qp);
+ /* Dequeue objects */
+ uint16_t (*dequeue)(struct bcmfs_qp *qp, void **obj,
+ uint16_t nb_ops);
+ /* Start the h/w queue */
+ int (*startq)(struct bcmfs_qp *qp);
+ /* Stop the h/w queue */
+ void (*stopq)(struct bcmfs_qp *qp);
+};
+
+uint16_t
+bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+uint16_t
+bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops);
+int
+bcmfs_qp_release(struct bcmfs_qp **qp_addr);
+int
+bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
+ uint16_t queue_pair_id,
+ struct bcmfs_qp_config *bcmfs_conf);
+
+#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index fd39eba20e..7e2bcbf14b 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -7,5 +7,6 @@ deps += ['eal', 'bus_vdev']
sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
- 'bcmfs_vfio.c'
+ 'bcmfs_vfio.c',
+ 'bcmfs_qp.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v5 4/8] crypto/bcmfs: add HW queue pair operations
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (2 preceding siblings ...)
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 3/8] crypto/bcmfs: add queue pair management API Vikas Gupta
@ 2020-10-07 17:18 ` Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
` (4 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add queue pair operations exported by supported devices.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 +
drivers/crypto/bcmfs/bcmfs_device.c | 51 ++
drivers/crypto/bcmfs/bcmfs_device.h | 16 +
drivers/crypto/bcmfs/bcmfs_qp.c | 1 +
drivers/crypto/bcmfs/bcmfs_qp.h | 4 +
drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 743 ++++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++++++++++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 +++
drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 51 ++
drivers/crypto/bcmfs/meson.build | 5 +-
10 files changed, 1658 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
diff --git a/drivers/crypto/bcmfs/bcmfs_dev_msg.h b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
new file mode 100644
index 0000000000..5b50bde35a
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_dev_msg.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_DEV_MSG_H_
+#define _BCMFS_DEV_MSG_H_
+
+#define MAX_SRC_ADDR_BUFFERS 8
+#define MAX_DST_ADDR_BUFFERS 3
+
+struct bcmfs_qp_message {
+ /** Physical address of each source */
+ uint64_t srcs_addr[MAX_SRC_ADDR_BUFFERS];
+ /** Length of each sources */
+ uint32_t srcs_len[MAX_SRC_ADDR_BUFFERS];
+ /** Total number of sources */
+ unsigned int srcs_count;
+ /** Physical address of each destination */
+ uint64_t dsts_addr[MAX_DST_ADDR_BUFFERS];
+ /** Length of each destination */
+ uint32_t dsts_len[MAX_DST_ADDR_BUFFERS];
+ /** Total number of destinations */
+ unsigned int dsts_count;
+
+ void *ctx;
+};
+
+#endif /* _BCMFS_DEV_MSG_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index a01a5c79d5..07423d3cc1 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -44,6 +44,47 @@ static struct bcmfs_device_attr dev_table[] = {
}
};
+struct bcmfs_hw_queue_pair_ops_table bcmfs_hw_queue_pair_ops_table = {
+ .tl = RTE_SPINLOCK_INITIALIZER,
+ .num_ops = 0
+};
+
+int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *h)
+{
+ struct bcmfs_hw_queue_pair_ops *ops;
+ int16_t ops_index;
+
+ rte_spinlock_lock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ if (h->enq_one_req == NULL || h->dequeue == NULL ||
+ h->ring_db == NULL || h->startq == NULL || h->stopq == NULL) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR,
+ "Missing callback while registering device ops");
+ return -EINVAL;
+ }
+
+ if (strlen(h->name) >= sizeof(ops->name) - 1) {
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+ BCMFS_LOG(ERR, "%s(): fs device_ops <%s>: name too long",
+ __func__, h->name);
+ return -EEXIST;
+ }
+
+ ops_index = bcmfs_hw_queue_pair_ops_table.num_ops++;
+ ops = &bcmfs_hw_queue_pair_ops_table.qp_ops[ops_index];
+ strlcpy(ops->name, h->name, sizeof(ops->name));
+ ops->enq_one_req = h->enq_one_req;
+ ops->dequeue = h->dequeue;
+ ops->ring_db = h->ring_db;
+ ops->startq = h->startq;
+ ops->stopq = h->stopq;
+
+ rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl);
+
+ return ops_index;
+}
+
TAILQ_HEAD(fsdev_list, bcmfs_device);
static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
@@ -54,6 +95,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
enum bcmfs_device_type dev_type __rte_unused)
{
struct bcmfs_device *fsdev;
+ uint32_t i;
fsdev = rte_calloc(__func__, 1, sizeof(*fsdev), 0);
if (!fsdev)
@@ -69,6 +111,15 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
goto cleanup;
}
+ /* check if registered ops name is present in directory path */
+ for (i = 0; i < bcmfs_hw_queue_pair_ops_table.num_ops; i++)
+ if (strstr(dirpath,
+ bcmfs_hw_queue_pair_ops_table.qp_ops[i].name))
+ fsdev->sym_hw_qp_ops =
+ &bcmfs_hw_queue_pair_ops_table.qp_ops[i];
+ if (!fsdev->sym_hw_qp_ops)
+ goto cleanup;
+
strcpy(fsdev->dirname, dirpath);
strcpy(fsdev->name, devname);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index dede5b82dc..2fb8eed143 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -8,6 +8,7 @@
#include <sys/queue.h>
+#include <rte_spinlock.h>
#include <rte_bus_vdev.h>
#include "bcmfs_logs.h"
@@ -31,6 +32,19 @@ enum bcmfs_device_type {
BCMFS_UNKNOWN
};
+/* A table to store registered queue pair operations */
+struct bcmfs_hw_queue_pair_ops_table {
+ rte_spinlock_t tl;
+ /* Number of used ops structs in the table. */
+ uint32_t num_ops;
+ /* Storage for all possible ops structs. */
+ struct bcmfs_hw_queue_pair_ops qp_ops[BCMFS_MAX_NODES];
+};
+
+/* HW queue pair ops register function */
+int
+bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *qp_ops);
+
struct bcmfs_device {
TAILQ_ENTRY(bcmfs_device) next;
/* Directory path for vfio */
@@ -49,6 +63,8 @@ struct bcmfs_device {
uint16_t max_hw_qps;
/* current qpairs in use */
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
+ /* queue pair ops exported by symmetric crypto hw */
+ struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
};
#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index 864e7bb746..ec1327b780 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -227,6 +227,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
qp->qpair_id = queue_pair_id;
qp->ioreg = qp_conf->iobase;
qp->nb_descriptors = nb_descriptors;
+ qp->ops = qp_conf->ops;
qp->stats.enqueued_count = 0;
qp->stats.dequeued_count = 0;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 52c487956e..59785865b0 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -44,6 +44,8 @@ struct bcmfs_qp_config {
uint16_t nb_descriptors;
/* Maximum number of h/w descriptors needed by a request */
uint16_t max_descs_req;
+ /* h/w ops associated with qp */
+ struct bcmfs_hw_queue_pair_ops *ops;
};
struct bcmfs_queue {
@@ -61,6 +63,8 @@ struct bcmfs_queue {
/* s/w pointer for completion h/w queue*/
uint32_t cmpl_read_ptr;
};
+ /* number of inflight descriptor accumulated before next db ring */
+ uint16_t descs_inflight;
/* Memzone name */
char memz_name[RTE_MEMZONE_NAMESIZE];
};
diff --git a/drivers/crypto/bcmfs/hw/bcmfs4_rm.c b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
new file mode 100644
index 0000000000..0ccb111898
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c
@@ -0,0 +1,743 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <limits.h>
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_rm_common.h"
+
+/* FS4 configuration */
+#define RING_BD_TOGGLE_INVALID(offset) \
+ (((offset) >> FS_RING_BD_ALIGN_ORDER) & 0x1)
+#define RING_BD_TOGGLE_VALID(offset) \
+ (!RING_BD_TOGGLE_INVALID(offset))
+
+#define RING_VER_MAGIC 0x76303031
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDR 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << FS_RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> FS_RING_CMPL_ALIGN_ORDER) & 0x7ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* ====== Broadcom FS4-RM ring descriptor defines ===== */
+
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 58
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 58
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 58
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs4_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return (type == NPTR_TYPE) ? true : false;
+}
+
+static uint64_t
+bcmfs4_next_table_desc(uint32_t toggle, uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NPTR_TOGGLE_SHIFT, NPTR_TOGGLE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_null_desc(uint32_t toggle)
+{
+ return (rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, NULL_TOGGLE_SHIFT, NULL_TOGGLE_MASK));
+}
+
+static void
+bcmfs4_flip_header_toggle(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+
+ if (desc & ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT))
+ desc &= ~((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+ else
+ desc |= ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT);
+
+ rm_write_desc(desc_ptr, desc);
+}
+
+static uint64_t
+bcmfs4_header_desc(uint32_t toggle, uint32_t startpkt,
+ uint32_t endpkt, uint32_t bdcount,
+ uint32_t flags, uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(toggle, HEADER_TOGGLE_SHIFT, HEADER_TOGGLE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT,
+ HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static void
+bcmfs4_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, uint32_t *toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _toggle, _startpkt, _endpkt, _bdcount;
+
+ /*
+ * Each request or packet start with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is represented by BDCOUNT field
+ * of HEADER descriptor. The max value of BDCOUNT field is 31 which
+ * means we can only have 31 non-HEADER descriptors following one
+ * HEADER descriptor.
+ *
+ * In general use, number of non-HEADER descriptors can easily go
+ * beyond 31. To tackle this situation, we have packet (or request)
+ * extension bits (STARTPKT and ENDPKT) in the HEADER descriptor.
+ *
+ * To use packet extension, the first HEADER descriptor of request
+ * (or packet) will have STARTPKT=1 and ENDPKT=0. The intermediate
+ * HEADER descriptors will have STARTPKT=0 and ENDPKT=0. The last
+ * HEADER descriptor will have STARTPKT=0 and ENDPKT=1. Also, the
+ * TOGGLE bit of the first HEADER will be set to invalid state to
+ * ensure that FlexDMA engine does not start fetching descriptors
+ * till all descriptors are enqueued. The user of this function
+ * will flip the TOGGLE bit of first HEADER after all descriptors
+ * are enqueued.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _toggle = (nhpos == 0) ? !(*toggle) : (*toggle);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs4_header_desc(_toggle, _startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs4_is_next_table_desc(*desc_ptr)) {
+ *toggle = (*toggle) ? 0 : 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+}
+
+static uint64_t
+bcmfs4_src_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_msrc_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_dst_desc(uint64_t addr, unsigned int length)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs4_mdst_desc(uint64_t addr, unsigned int length_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(length_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs4_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static uint32_t
+estimate_nonheader_desc_count(struct bcmfs_qp_message *msg)
+{
+ uint32_t cnt = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+
+ while (src < msg->srcs_count ||
+ dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ cnt++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+ while (dst_target && dst < msg->dsts_count) {
+ cnt++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++;
+ }
+ }
+
+ return cnt;
+}
+
+static void *
+bcmfs4_enqueue_msg(struct bcmfs_qp_message *msg,
+ uint32_t nhcnt, uint32_t reqid,
+ void *desc_ptr, uint32_t toggle,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhpos = 0;
+ unsigned int src = 0;
+ unsigned int dst = 0;
+ unsigned int dst_target = 0;
+ void *orig_desc_ptr = desc_ptr;
+
+ if (!desc_ptr || !start_desc || !end_desc)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ while (src < msg->srcs_count || dst < msg->dsts_count) {
+ if (src < msg->srcs_count) {
+ if (msg->srcs_len[src] & 0xf) {
+ d = bcmfs4_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ } else {
+ d = bcmfs4_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ dst_target = msg->srcs_len[src];
+ src++;
+ } else {
+ dst_target = UINT_MAX;
+ }
+
+ while (dst_target && (dst < msg->dsts_count)) {
+ if (msg->dsts_len[dst] & 0xf) {
+ d = bcmfs4_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ } else {
+ d = bcmfs4_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+ }
+ bcmfs4_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, &toggle,
+ start_desc, end_desc);
+ nhpos++;
+ if (msg->dsts_len[dst] < dst_target)
+ dst_target -= msg->dsts_len[dst];
+ else
+ dst_target = 0;
+ dst++; /* for next buffer */
+ }
+ }
+
+ /* Null descriptor with invalid toggle bit */
+ rm_write_desc(desc_ptr, bcmfs4_null_desc(!toggle));
+
+ /* Ensure that descriptors have been written to memory */
+ rte_io_wmb();
+
+ bcmfs4_flip_header_toggle(orig_desc_ptr);
+
+ return desc_ptr;
+}
+
+static int
+bcmfs4_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ int reqid;
+ void *next;
+ uint32_t nhcnt;
+ int ret = 0;
+ uint32_t pos = 0;
+ uint64_t slab = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs4_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /*
+ * Number of required descriptors = number of non-header descriptors +
+ * number of header descriptors +
+ * 1x null descriptor
+ */
+ nhcnt = estimate_nonheader_desc_count(msg);
+
+ /* Write descriptors to ring */
+ next = bcmfs4_enqueue_msg(msg, nhcnt, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ RING_BD_TOGGLE_VALID(txq->tx_write_ptr),
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
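The reqid allocation above scans the context bitmap and takes the lowest set bit of the returned slab. A minimal standalone sketch of that step (`first_free_reqid` is an illustrative helper, not part of the driver; `rte_bitmap_scan()` supplies `pos` and `slab` in the real code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the reqid selection in bcmfs4_enqueue_single_request_qp():
 * the bitmap scan returns a bit position 'pos' plus a 64-bit 'slab' of
 * bits starting at that position; the first free id is pos plus the
 * offset of the slab's lowest set bit.
 */
static int first_free_reqid(uint32_t pos, uint64_t slab)
{
	if (slab == 0)
		return -1;	/* no free bit in this slab */
	return (int)(pos + __builtin_ctzll(slab));
}
```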
+
+static void
+bcmfs4_ring_doorbell_qp(struct bcmfs_qp *qp __rte_unused)
+{
+ /* no door bell method supported */
+}
+
+static uint16_t
+bcmfs4_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that we
+ * never process more completions than are outstanding.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get current completion read and write offset
+ * Note: We should read completion write pointer at least once
+ * after we get a MSI interrupt because HW maintains internal
+ * MSI status which will allow next MSI interrupt only after
+ * completion write pointer is read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* Ensure completion pointer is read before proceeding */
+ rte_io_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Determine message pointer based on reqid */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
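The dequeue loop advances the completion read offset one descriptor at a time and wraps at the end of the completion region. A self-contained sketch of that arithmetic; the ring sizes here are assumptions (8-byte descriptors, a 1024-entry completion ring), since the real `FS_RING_DESC_SIZE`/`FS_RING_CMPL_SIZE` come from `bcmfs_hw_defs.h`, which is not part of this hunk:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values; the real ones come from bcmfs_hw_defs.h */
#define DESC_SIZE	8u		/* one 64-bit completion descriptor */
#define CMPL_SIZE	(1024u * DESC_SIZE)

/* Advance the completion read offset with wrap-around, as in the
 * dequeue loop above. */
static uint32_t next_cmpl_offset(uint32_t off)
{
	off += DESC_SIZE;
	if (off == CMPL_SIZE)
		off = 0;
	return off;
}
```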
+
+static int
+bcmfs4_start_qp(struct bcmfs_qp *qp)
+{
+ int timeout;
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs4_next_table_desc(RING_BD_TOGGLE_VALID(off),
+ next_addr);
+ else
+ d = bcmfs4_null_desc(RING_BD_TOGGLE_INVALID(off));
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test mid-run (Ctrl+C), all subsequent
+ * runs will fail because the SW cmpl_read_offset and HW
+ * cmpl_write_offset will point at different completion BDs. To
+ * handle this, flush all rings at startup instead of in the
+ * shutdown function.
+ * A ring flush resets the HW cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, SW must read
+ * cmpl_write_offset or the DME/AE will not come out of the
+ * flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ val = BD_START_ADDR_VALUE(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_BD_START_ADDR);
+
+ /* BD write pointer will be same as HW write pointer */
+ tx_queue->tx_write_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_BD_WRITE_PTR);
+ tx_queue->tx_write_ptr *= FS_RING_DESC_SIZE;
+
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Program completion start address */
+ val = CMPL_START_ADDR_VALUE(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CMPL_START_ADDR);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ /* The dummy MSI location sits 1024 * FS_RING_DESC_SIZE bytes past the completion base */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id,
+ (uint8_t *)qp->ioreg + RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs4_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs4_qp_ops = {
+ .name = "fs4",
+ .enq_one_req = bcmfs4_enqueue_single_request_qp,
+ .ring_db = bcmfs4_ring_doorbell_qp,
+ .dequeue = bcmfs4_dequeue_qp,
+ .startq = bcmfs4_start_qp,
+ .stopq = bcmfs4_shutdown_qp,
+};
+
+RTE_INIT(bcmfs4_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs4_qp_ops);
+}
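Both ring managers build 64-bit descriptor words by shifting masked fields into place (`rm_build_desc`) and decode them with `FS_DESC_DEC`. A standalone round-trip using the SRC descriptor layout defined later in this patch (names mirror the driver's but the helpers here are local re-implementations for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Field layout copied from the SRC descriptor definition in this patch */
#define DESC_TYPE_SHIFT		60
#define DESC_TYPE_MASK		0xfULL
#define SRC_TYPE		2
#define SRC_LENGTH_SHIFT	44
#define SRC_LENGTH_MASK		0xffffULL
#define SRC_ADDR_SHIFT		0
#define SRC_ADDR_MASK		0x00000fffffffffffULL

/* Same contract as rm_build_desc()/FS_DESC_DEC in the driver */
static uint64_t build_desc(uint64_t val, uint32_t shift, uint64_t mask)
{
	return (val & mask) << shift;
}

#define DESC_DEC(d, s, m)	(((d) >> (s)) & (m))

static uint64_t src_desc(uint64_t addr, unsigned int len)
{
	return build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
	       build_desc(len, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
	       build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK);
}
```

Every field packed in decodes back unchanged, which is what the completion path relies on when it extracts the reqid from the OPAQUE field.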
diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
new file mode 100644
index 0000000000..86e53051dd
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c
@@ -0,0 +1,677 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <unistd.h>
+
+#include <rte_bitmap.h>
+
+#include "bcmfs_qp.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_device.h"
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Ring version */
+#define RING_VER_MAGIC 0x76303032
+
+/* Per-Ring register offsets */
+#define RING_VER 0x000
+#define RING_BD_START_ADDRESS_LSB 0x004
+#define RING_BD_READ_PTR 0x008
+#define RING_BD_WRITE_PTR 0x00c
+#define RING_BD_READ_PTR_DDR_LS 0x010
+#define RING_BD_READ_PTR_DDR_MS 0x014
+#define RING_CMPL_START_ADDR_LSB 0x018
+#define RING_CMPL_WRITE_PTR 0x01c
+#define RING_NUM_REQ_RECV_LS 0x020
+#define RING_NUM_REQ_RECV_MS 0x024
+#define RING_NUM_REQ_TRANS_LS 0x028
+#define RING_NUM_REQ_TRANS_MS 0x02c
+#define RING_NUM_REQ_OUTSTAND 0x030
+#define RING_CONTROL 0x034
+#define RING_FLUSH_DONE 0x038
+#define RING_MSI_ADDR_LS 0x03c
+#define RING_MSI_ADDR_MS 0x040
+#define RING_MSI_CONTROL 0x048
+#define RING_BD_READ_PTR_DDR_CONTROL 0x04c
+#define RING_MSI_DATA_VALUE 0x064
+#define RING_BD_START_ADDRESS_MSB 0x078
+#define RING_CMPL_START_ADDR_MSB 0x07c
+#define RING_DOORBELL_BD_WRITE_COUNT 0x074
+
+/* Register RING_BD_START_ADDR fields */
+#define BD_LAST_UPDATE_HW_SHIFT 28
+#define BD_LAST_UPDATE_HW_MASK 0x1
+#define BD_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_BD_ALIGN_ORDER) & 0x0fffffff))
+#define BD_START_ADDR_DECODE(val) \
+ ((uint64_t)((val) & 0x0fffffff) << RING_BD_ALIGN_ORDER)
+
+/* Register RING_CMPL_START_ADDR fields */
+#define CMPL_START_ADDR_VALUE(pa) \
+ ((uint32_t)((((uint64_t)(pa)) >> RING_CMPL_ALIGN_ORDER) & 0x07ffffff))
+
+/* Register RING_CONTROL fields */
+#define CONTROL_MASK_DISABLE_CONTROL 12
+#define CONTROL_FLUSH_SHIFT 5
+#define CONTROL_ACTIVE_SHIFT 4
+#define CONTROL_RATE_ADAPT_MASK 0xf
+#define CONTROL_RATE_DYNAMIC 0x0
+#define CONTROL_RATE_FAST 0x8
+#define CONTROL_RATE_MEDIUM 0x9
+#define CONTROL_RATE_SLOW 0xa
+#define CONTROL_RATE_IDLE 0xb
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK 0x1
+
+/* Register RING_MSI_CONTROL fields */
+#define MSI_TIMER_VAL_SHIFT 16
+#define MSI_TIMER_VAL_MASK 0xffff
+#define MSI_ENABLE_SHIFT 15
+#define MSI_ENABLE_MASK 0x1
+#define MSI_COUNT_SHIFT 0
+#define MSI_COUNT_MASK 0x3ff
+
+/* Register RING_BD_READ_PTR_DDR_CONTROL fields */
+#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16
+#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff
+#define BD_READ_PTR_DDR_ENABLE_SHIFT 15
+#define BD_READ_PTR_DDR_ENABLE_MASK 0x1
+
+/* General descriptor format */
+#define DESC_TYPE_SHIFT 60
+#define DESC_TYPE_MASK 0xf
+#define DESC_PAYLOAD_SHIFT 0
+#define DESC_PAYLOAD_MASK 0x0fffffffffffffff
+
+/* Null descriptor format */
+#define NULL_TYPE 0
+#define NULL_TOGGLE_SHIFT 59
+#define NULL_TOGGLE_MASK 0x1
+
+/* Header descriptor format */
+#define HEADER_TYPE 1
+#define HEADER_TOGGLE_SHIFT 59
+#define HEADER_TOGGLE_MASK 0x1
+#define HEADER_ENDPKT_SHIFT 57
+#define HEADER_ENDPKT_MASK 0x1
+#define HEADER_STARTPKT_SHIFT 56
+#define HEADER_STARTPKT_MASK 0x1
+#define HEADER_BDCOUNT_SHIFT 36
+#define HEADER_BDCOUNT_MASK 0x1f
+#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK
+#define HEADER_FLAGS_SHIFT 16
+#define HEADER_FLAGS_MASK 0xffff
+#define HEADER_OPAQUE_SHIFT 0
+#define HEADER_OPAQUE_MASK 0xffff
+
+/* Source (SRC) descriptor format */
+
+#define SRC_TYPE 2
+#define SRC_LENGTH_SHIFT 44
+#define SRC_LENGTH_MASK 0xffff
+#define SRC_ADDR_SHIFT 0
+#define SRC_ADDR_MASK 0x00000fffffffffff
+
+/* Destination (DST) descriptor format */
+#define DST_TYPE 3
+#define DST_LENGTH_SHIFT 44
+#define DST_LENGTH_MASK 0xffff
+#define DST_ADDR_SHIFT 0
+#define DST_ADDR_MASK 0x00000fffffffffff
+
+/* Next pointer (NPTR) descriptor format */
+#define NPTR_TYPE 5
+#define NPTR_TOGGLE_SHIFT 59
+#define NPTR_TOGGLE_MASK 0x1
+#define NPTR_ADDR_SHIFT 0
+#define NPTR_ADDR_MASK 0x00000fffffffffff
+
+/* Mega source (MSRC) descriptor format */
+#define MSRC_TYPE 6
+#define MSRC_LENGTH_SHIFT 44
+#define MSRC_LENGTH_MASK 0xffff
+#define MSRC_ADDR_SHIFT 0
+#define MSRC_ADDR_MASK 0x00000fffffffffff
+
+/* Mega destination (MDST) descriptor format */
+#define MDST_TYPE 7
+#define MDST_LENGTH_SHIFT 44
+#define MDST_LENGTH_MASK 0xffff
+#define MDST_ADDR_SHIFT 0
+#define MDST_ADDR_MASK 0x00000fffffffffff
+
+static uint8_t
+bcmfs5_is_next_table_desc(void *desc_ptr)
+{
+ uint64_t desc = rm_read_desc(desc_ptr);
+ uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+
+ return (type == NPTR_TYPE) ? true : false;
+}
+
+static uint64_t
+bcmfs5_next_table_desc(uint64_t next_addr)
+{
+ return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_null_desc(void)
+{
+ return rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK);
+}
+
+static uint64_t
+bcmfs5_header_desc(uint32_t startpkt, uint32_t endpkt,
+ uint32_t bdcount, uint32_t flags,
+ uint32_t opaque)
+{
+ return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT,
+ HEADER_STARTPKT_MASK) |
+ rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) |
+ rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT, HEADER_BDCOUNT_MASK) |
+ rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) |
+ rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK));
+}
+
+static int
+bcmfs5_enqueue_desc(uint32_t nhpos, uint32_t nhcnt,
+ uint32_t reqid, uint64_t desc,
+ void **desc_ptr, void *start_desc,
+ void *end_desc)
+{
+ uint64_t d;
+ uint32_t nhavail, _startpkt, _endpkt, _bdcount;
+ int is_nxt_page = 0;
+
+ /*
+ * Each request (or packet) starts with a HEADER descriptor followed
+ * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST,
+ * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors
+ * following a HEADER descriptor is given by the BDCOUNT field of
+ * the HEADER descriptor. The maximum value of the BDCOUNT field is
+ * 31, so at most 31 non-HEADER descriptors can follow one HEADER
+ * descriptor.
+ *
+ * In general use, the number of non-HEADER descriptors can easily
+ * exceed 31. To handle this, the HEADER descriptor carries packet
+ * (or request) extension bits (STARTPKT and ENDPKT).
+ *
+ * With packet extension, the first HEADER descriptor of a request
+ * (or packet) has STARTPKT=1 and ENDPKT=0, intermediate HEADER
+ * descriptors have STARTPKT=0 and ENDPKT=0, and the last HEADER
+ * descriptor has STARTPKT=0 and ENDPKT=1.
+ */
+
+ if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) {
+ /* Prepare the header descriptor */
+ nhavail = (nhcnt - nhpos);
+ _startpkt = (nhpos == 0) ? 0x1 : 0x0;
+ _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0;
+ _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ?
+ nhavail : HEADER_BDCOUNT_MAX;
+ d = bcmfs5_header_desc(_startpkt, _endpkt,
+ _bdcount, 0x0, reqid);
+
+ /* Write header descriptor */
+ rm_write_desc(*desc_ptr, d);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+ }
+
+ /* Write desired descriptor */
+ rm_write_desc(*desc_ptr, desc);
+
+ /* Point to next descriptor */
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+
+ /* Skip next pointer descriptors */
+ while (bcmfs5_is_next_table_desc(*desc_ptr)) {
+ is_nxt_page = 1;
+ *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc);
+ if (*desc_ptr == end_desc)
+ *desc_ptr = start_desc;
+ }
+
+ return is_nxt_page;
+}
+
+static uint64_t
+bcmfs5_src_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) |
+ rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_msrc_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) |
+ rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_dst_desc(uint64_t addr, unsigned int len)
+{
+ return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len, DST_LENGTH_SHIFT, DST_LENGTH_MASK) |
+ rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK));
+}
+
+static uint64_t
+bcmfs5_mdst_desc(uint64_t addr, unsigned int len_div_16)
+{
+ return (rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) |
+ rm_build_desc(len_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) |
+ rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK));
+}
+
+static bool
+bcmfs5_sanity_check(struct bcmfs_qp_message *msg)
+{
+ unsigned int i = 0;
+
+ if (msg == NULL)
+ return false;
+
+ for (i = 0; i < msg->srcs_count; i++) {
+ if (msg->srcs_len[i] & 0xf) {
+ if (msg->srcs_len[i] > SRC_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+ for (i = 0; i < msg->dsts_count; i++) {
+ if (msg->dsts_len[i] & 0xf) {
+ if (msg->dsts_len[i] > DST_LENGTH_MASK)
+ return false;
+ } else {
+ if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16))
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static void *
+bcmfs5_enqueue_msg(struct bcmfs_queue *txq,
+ struct bcmfs_qp_message *msg,
+ uint32_t reqid, void *desc_ptr,
+ void *start_desc, void *end_desc)
+{
+ uint64_t d;
+ unsigned int src, dst;
+ uint32_t nhpos = 0;
+ int nxt_page = 0;
+ uint32_t nhcnt = msg->srcs_count + msg->dsts_count;
+
+ if (desc_ptr == NULL || start_desc == NULL || end_desc == NULL)
+ return NULL;
+
+ if (desc_ptr < start_desc || end_desc <= desc_ptr)
+ return NULL;
+
+ for (src = 0; src < msg->srcs_count; src++) {
+ if (msg->srcs_len[src] & 0xf)
+ d = bcmfs5_src_desc(msg->srcs_addr[src],
+ msg->srcs_len[src]);
+ else
+ d = bcmfs5_msrc_desc(msg->srcs_addr[src],
+ msg->srcs_len[src] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ for (dst = 0; dst < msg->dsts_count; dst++) {
+ if (msg->dsts_len[dst] & 0xf)
+ d = bcmfs5_dst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst]);
+ else
+ d = bcmfs5_mdst_desc(msg->dsts_addr[dst],
+ msg->dsts_len[dst] / 16);
+
+ nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid,
+ d, &desc_ptr, start_desc,
+ end_desc);
+ if (nxt_page)
+ txq->descs_inflight++;
+ nhpos++;
+ }
+
+ txq->descs_inflight += nhcnt + 1;
+
+ return desc_ptr;
+}
+
+static int
+bcmfs5_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op)
+{
+ void *next;
+ int reqid;
+ int ret = 0;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ uint8_t exit_cleanup = false;
+ struct bcmfs_queue *txq = &qp->tx_q;
+ struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op;
+
+ /* Do sanity check on message */
+ if (!bcmfs5_sanity_check(msg)) {
+ BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id);
+ return -EIO;
+ }
+
+ /* Scan from the beginning */
+ __rte_bitmap_scan_init(qp->ctx_bmp);
+ /* Scan bitmap to get the free pool */
+ ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab);
+ if (ret == 0) {
+ BCMFS_DP_LOG(ERR, "BD memory exhausted");
+ return -ERANGE;
+ }
+
+ reqid = pos + __builtin_ctzll(slab);
+ rte_bitmap_clear(qp->ctx_bmp, reqid);
+ qp->ctx_pool[reqid] = (unsigned long)msg;
+
+ /* Write descriptors to ring */
+ next = bcmfs5_enqueue_msg(txq, msg, reqid,
+ (uint8_t *)txq->base_addr + txq->tx_write_ptr,
+ txq->base_addr,
+ (uint8_t *)txq->base_addr + txq->queue_size);
+ if (next == NULL) {
+ BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d",
+ qp->qpair_id);
+ ret = -EINVAL;
+ exit_cleanup = true;
+ goto exit;
+ }
+
+ /* Save ring BD write offset */
+ txq->tx_write_ptr = (uint32_t)((uint8_t *)next -
+ (uint8_t *)txq->base_addr);
+
+ qp->nb_pending_requests++;
+
+ return 0;
+
+exit:
+ /* Cleanup if we failed */
+ if (exit_cleanup)
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ return ret;
+}
+
+static void bcmfs5_write_doorbell(struct bcmfs_qp *qp)
+{
+ struct bcmfs_queue *txq = &qp->tx_q;
+
+ /* sync before ringing the doorbell */
+ rte_wmb();
+
+ FS_MMIO_WRITE32(txq->descs_inflight,
+ (uint8_t *)qp->ioreg + RING_DOORBELL_BD_WRITE_COUNT);
+
+ /* reset the count */
+ txq->descs_inflight = 0;
+}
+
+static uint16_t
+bcmfs5_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget)
+{
+ int err;
+ uint16_t reqid;
+ uint64_t desc;
+ uint16_t count = 0;
+ unsigned long context = 0;
+ struct bcmfs_queue *hwq = &qp->cmpl_q;
+ uint32_t cmpl_read_offset, cmpl_write_offset;
+
+ /*
+ * Clamp the budget to the number of pending requests so that we
+ * never process more completions than are outstanding.
+ */
+ if (budget > qp->nb_pending_requests)
+ budget = qp->nb_pending_requests;
+
+ /*
+ * Get current completion read and write offset
+ *
+ * Note: We should read completion write pointer at least once
+ * after we get a MSI interrupt because HW maintains internal
+ * MSI status which will allow next MSI interrupt only after
+ * completion write pointer is read.
+ */
+ cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+ cmpl_write_offset *= FS_RING_DESC_SIZE;
+ cmpl_read_offset = hwq->cmpl_read_ptr;
+
+ /* read the ring cmpl write ptr before cmpl read offset */
+ rte_io_rmb();
+
+ /* For each completed request notify mailbox clients */
+ reqid = 0;
+ while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) {
+ /* Dequeue next completion descriptor */
+ desc = *((uint64_t *)((uint8_t *)hwq->base_addr +
+ cmpl_read_offset));
+
+ /* Next read offset */
+ cmpl_read_offset += FS_RING_DESC_SIZE;
+ if (cmpl_read_offset == FS_RING_CMPL_SIZE)
+ cmpl_read_offset = 0;
+
+ /* Decode error from completion descriptor */
+ err = rm_cmpl_desc_to_error(desc);
+ if (err < 0)
+ BCMFS_DP_LOG(ERR, "error desc rcvd");
+
+ /* Determine request id from completion descriptor */
+ reqid = rm_cmpl_desc_to_reqid(desc);
+
+ /* Retrieve context */
+ context = qp->ctx_pool[reqid];
+ if (context == 0)
+ BCMFS_DP_LOG(ERR, "HW error detected");
+
+ /* Release reqid for recycling */
+ qp->ctx_pool[reqid] = 0;
+ rte_bitmap_set(qp->ctx_bmp, reqid);
+
+ *ops = (void *)context;
+
+ /* Increment number of completions processed */
+ count++;
+ budget--;
+ ops++;
+ }
+
+ hwq->cmpl_read_ptr = cmpl_read_offset;
+
+ qp->nb_pending_requests -= count;
+
+ return count;
+}
+
+static int
+bcmfs5_start_qp(struct bcmfs_qp *qp)
+{
+ uint32_t val, off;
+ uint64_t d, next_addr, msi;
+ int timeout;
+ uint32_t bd_high, bd_low, cmpl_high, cmpl_low;
+ struct bcmfs_queue *tx_queue = &qp->tx_q;
+ struct bcmfs_queue *cmpl_queue = &qp->cmpl_q;
+
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ /* Configure next table pointer entries in BD memory */
+ for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) {
+ next_addr = off + FS_RING_DESC_SIZE;
+ if (next_addr == tx_queue->queue_size)
+ next_addr = 0;
+ next_addr += (uint64_t)tx_queue->base_phys_addr;
+ if (FS_RING_BD_ALIGN_CHECK(next_addr))
+ d = bcmfs5_next_table_desc(next_addr);
+ else
+ d = bcmfs5_null_desc();
+ rm_write_desc((uint8_t *)tx_queue->base_addr + off, d);
+ }
+
+ /*
+ * If the user interrupts a test mid-run (Ctrl+C), all subsequent
+ * runs will fail because the SW cmpl_read_offset and HW
+ * cmpl_write_offset will point at different completion BDs. To
+ * handle this, flush all rings at startup instead of in the
+ * shutdown function.
+ * A ring flush resets the HW cmpl_write_offset.
+ */
+
+ /* Set ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT),
+ (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ /*
+ * If a previous test was stopped mid-run, SW must read
+ * cmpl_write_offset or the DME/AE will not come out of the
+ * flush state.
+ */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR);
+
+ if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK)
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Clear ring flush state */
+ timeout = 1000;
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+ do {
+ if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) &
+ FLUSH_DONE_MASK))
+ break;
+ usleep(1000);
+ } while (--timeout);
+ if (!timeout) {
+ BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d",
+ qp->qpair_id);
+ }
+
+ /* Program BD start address */
+ bd_low = lower_32_bits(tx_queue->base_phys_addr);
+ bd_high = upper_32_bits(tx_queue->base_phys_addr);
+ FS_MMIO_WRITE32(bd_low, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_LSB);
+ FS_MMIO_WRITE32(bd_high, (uint8_t *)qp->ioreg +
+ RING_BD_START_ADDRESS_MSB);
+
+ tx_queue->tx_write_ptr = 0;
+
+ for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE)
+ rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0);
+
+ /* Completion read pointer will be same as HW write pointer */
+ cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg +
+ RING_CMPL_WRITE_PTR);
+ /* Program completion start address */
+ cmpl_low = lower_32_bits(cmpl_queue->base_phys_addr);
+ cmpl_high = upper_32_bits(cmpl_queue->base_phys_addr);
+ FS_MMIO_WRITE32(cmpl_low, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_LSB);
+ FS_MMIO_WRITE32(cmpl_high, (uint8_t *)qp->ioreg +
+ RING_CMPL_START_ADDR_MSB);
+
+ cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE;
+
+ /* Read ring Tx, Rx, and Outstanding counts to clear */
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS);
+ FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND);
+
+ /* Configure per-Ring MSI registers with dummy location */
+ msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE);
+ FS_MMIO_WRITE32((msi & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS);
+ FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF),
+ (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS);
+ FS_MMIO_WRITE32(qp->qpair_id, (uint8_t *)qp->ioreg +
+ RING_MSI_DATA_VALUE);
+
+ /* Configure RING_MSI_CONTROL */
+ val = 0;
+ val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT);
+ val |= BIT(MSI_ENABLE_SHIFT);
+ val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT;
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL);
+
+ /* Enable/activate ring */
+ val = BIT(CONTROL_ACTIVE_SHIFT);
+ FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL);
+
+ return 0;
+}
+
+static void
+bcmfs5_shutdown_qp(struct bcmfs_qp *qp)
+{
+ /* Disable/deactivate ring */
+ FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL);
+}
+
+struct bcmfs_hw_queue_pair_ops bcmfs5_qp_ops = {
+ .name = "fs5",
+ .enq_one_req = bcmfs5_enqueue_single_request_qp,
+ .ring_db = bcmfs5_write_doorbell,
+ .dequeue = bcmfs5_dequeue_qp,
+ .startq = bcmfs5_start_qp,
+ .stopq = bcmfs5_shutdown_qp,
+};
+
+RTE_INIT(bcmfs5_register_qp_ops)
+{
+ bcmfs_hw_queue_pair_register_ops(&bcmfs5_qp_ops);
+}
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
new file mode 100644
index 0000000000..9445d28f92
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include "bcmfs_hw_defs.h"
+#include "bcmfs_rm_common.h"
+
+/* Completion descriptor format */
+#define FS_CMPL_OPAQUE_SHIFT 0
+#define FS_CMPL_OPAQUE_MASK 0xffff
+#define FS_CMPL_ENGINE_STATUS_SHIFT 16
+#define FS_CMPL_ENGINE_STATUS_MASK 0xffff
+#define FS_CMPL_DME_STATUS_SHIFT 32
+#define FS_CMPL_DME_STATUS_MASK 0xffff
+#define FS_CMPL_RM_STATUS_SHIFT 48
+#define FS_CMPL_RM_STATUS_MASK 0xffff
+/* Completion RM status code */
+#define FS_RM_STATUS_CODE_SHIFT 0
+#define FS_RM_STATUS_CODE_MASK 0x3ff
+#define FS_RM_STATUS_CODE_GOOD 0x0
+#define FS_RM_STATUS_CODE_AE_TIMEOUT 0x3ff
+
+/* Completion DME status code */
+#define FS_DME_STATUS_MEM_COR_ERR BIT(0)
+#define FS_DME_STATUS_MEM_UCOR_ERR BIT(1)
+#define FS_DME_STATUS_FIFO_UNDRFLOW BIT(2)
+#define FS_DME_STATUS_FIFO_OVERFLOW BIT(3)
+#define FS_DME_STATUS_RRESP_ERR BIT(4)
+#define FS_DME_STATUS_BRESP_ERR BIT(5)
+#define FS_DME_STATUS_ERROR_MASK (FS_DME_STATUS_MEM_COR_ERR | \
+ FS_DME_STATUS_MEM_UCOR_ERR | \
+ FS_DME_STATUS_FIFO_UNDRFLOW | \
+ FS_DME_STATUS_FIFO_OVERFLOW | \
+ FS_DME_STATUS_RRESP_ERR | \
+ FS_DME_STATUS_BRESP_ERR)
+
+/* APIs related to ring manager descriptors */
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask)
+{
+ return (val & mask) << shift;
+}
+
+uint64_t
+rm_read_desc(void *desc_ptr)
+{
+ return le64_to_cpu(*((uint64_t *)desc_ptr));
+}
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc)
+{
+ *((uint64_t *)desc_ptr) = cpu_to_le64(desc);
+}
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc)
+{
+ return (uint32_t)(cmpl_desc & FS_CMPL_OPAQUE_MASK);
+}
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc)
+{
+ uint32_t status;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_DME_STATUS_SHIFT,
+ FS_CMPL_DME_STATUS_MASK);
+ if (status & FS_DME_STATUS_ERROR_MASK)
+ return -EIO;
+
+ status = FS_DESC_DEC(cmpl_desc, FS_CMPL_RM_STATUS_SHIFT,
+ FS_CMPL_RM_STATUS_MASK);
+ status &= FS_RM_STATUS_CODE_MASK;
+ if (status == FS_RM_STATUS_CODE_AE_TIMEOUT)
+ return -ETIMEDOUT;
+
+ return 0;
+}
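rm_cmpl_desc_to_error() above checks the DME status field first and only then the RM status code, so a data-mover fault masks a timeout in the same descriptor. A self-contained version of the same decode, with the constants copied from this file (the 0x3f error mask collapses the six BIT() flags into one value):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Constants copied from bcmfs_rm_common.c */
#define DME_STATUS_SHIFT	32
#define DME_STATUS_MASK		0xffffULL
#define DME_STATUS_ERROR_MASK	0x3f	/* bits 0-5: COR/UCOR err, FIFO
					 * under/overflow, RRESP/BRESP err */
#define RM_STATUS_SHIFT		48
#define RM_STATUS_MASK		0xffffULL
#define RM_STATUS_CODE_MASK	0x3ff
#define RM_STATUS_AE_TIMEOUT	0x3ff

static int cmpl_to_error(uint64_t cmpl)
{
	uint32_t st;

	st = (uint32_t)((cmpl >> DME_STATUS_SHIFT) & DME_STATUS_MASK);
	if (st & DME_STATUS_ERROR_MASK)
		return -EIO;		/* data-mover engine fault */

	st = (uint32_t)((cmpl >> RM_STATUS_SHIFT) & RM_STATUS_MASK);
	if ((st & RM_STATUS_CODE_MASK) == RM_STATUS_AE_TIMEOUT)
		return -ETIMEDOUT;	/* accelerator engine timed out */

	return 0;			/* good completion */
}
```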
diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
new file mode 100644
index 0000000000..e5d30d75c0
--- /dev/null
+++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_RM_COMMON_H_
+#define _BCMFS_RM_COMMON_H_
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_io.h>
+
+/* 32-bit MMIO register write */
+#define FS_MMIO_WRITE32(value, addr) rte_write32_relaxed((value), (addr))
+/* 32-bit MMIO register read */
+#define FS_MMIO_READ32(addr) rte_read32_relaxed((addr))
+
+/* Descriptor helper macros */
+#define FS_DESC_DEC(d, s, m) (((d) >> (s)) & (m))
+
+#define FS_RING_BD_ALIGN_CHECK(addr) \
+ (!((addr) & ((0x1 << FS_RING_BD_ALIGN_ORDER) - 1)))
+
+#define cpu_to_le64 rte_cpu_to_le_64
+#define cpu_to_le32 rte_cpu_to_le_32
+#define cpu_to_le16 rte_cpu_to_le_16
+
+#define le64_to_cpu rte_le_to_cpu_64
+#define le32_to_cpu rte_le_to_cpu_32
+#define le16_to_cpu rte_le_to_cpu_16
+
+#define lower_32_bits(x) ((uint32_t)(x))
+#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16))
+
+uint64_t
+rm_build_desc(uint64_t val, uint32_t shift,
+ uint64_t mask);
+uint64_t
+rm_read_desc(void *desc_ptr);
+
+void
+rm_write_desc(void *desc_ptr, uint64_t desc);
+
+uint32_t
+rm_cmpl_desc_to_reqid(uint64_t cmpl_desc);
+
+int
+rm_cmpl_desc_to_error(uint64_t cmpl_desc);
+
+#endif /* _BCMFS_RM_COMMON_H_ */
+
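The upper_32_bits() helper in this header shifts by 16 twice rather than by 32 in one step: if the macro is ever handed a 32-bit operand, a single `>> 32` would be undefined behaviour in C, while two 16-bit shifts are always well-defined. A quick check of both helpers (definitions copied verbatim from the header):

```c
#include <assert.h>
#include <stdint.h>

/* Same definitions as bcmfs_rm_common.h */
#define lower_32_bits(x)	((uint32_t)(x))
#define upper_32_bits(x)	((uint32_t)(((x) >> 16) >> 16))
```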
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 7e2bcbf14b..cd58bd5e25 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -8,5 +8,8 @@ sources = files(
'bcmfs_logs.c',
'bcmfs_device.c',
'bcmfs_vfio.c',
- 'bcmfs_qp.c'
+ 'bcmfs_qp.c',
+ 'hw/bcmfs4_rm.c',
+ 'hw/bcmfs5_rm.c',
+ 'hw/bcmfs_rm_common.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v5 5/8] crypto/bcmfs: create a symmetric cryptodev
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (3 preceding siblings ...)
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 4/8] crypto/bcmfs: add HW queue pair operations Vikas Gupta
@ 2020-10-07 17:18 ` Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
` (3 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Create a symmetric crypto device and add supported cryptodev ops.
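For context, the enqueue path introduced in this patch takes a per-op request object from a mempool, submits the burst to the hardware ring, and returns the requests for any ops the ring refuses. A minimal standalone sketch of that bookkeeping follows; every type and helper here is a hypothetical stand-in, not the DPDK or driver API:

```c
#include <stddef.h>

/* Hypothetical fixed-size request pool standing in for rte_mempool. */
#define MAX_REQS 8

struct request { void *op; int in_use; };

static struct request pool[MAX_REQS];

static struct request *req_get(void)
{
	for (int i = 0; i < MAX_REQS; i++)
		if (!pool[i].in_use) {
			pool[i].in_use = 1;
			return &pool[i];
		}
	return NULL;
}

static void req_put(struct request *r) { r->in_use = 0; }

/* Pretend the h/w ring only has room for 'room' descriptors this time. */
static unsigned int hw_enqueue(struct request **reqs, unsigned int n,
			       unsigned int room)
{
	(void)reqs;
	return n < room ? n : room;
}

/*
 * Mirrors the PMD flow: allocate a request per op, submit the burst,
 * then release the requests the ring did not accept.
 */
unsigned int enqueue_burst(void **ops, unsigned int nb_ops, unsigned int room)
{
	struct request *infl[MAX_REQS];
	unsigned int i, j, enq;

	for (i = 0; i < nb_ops && i < MAX_REQS; i++) {
		infl[i] = req_get();
		if (infl[i] == NULL)
			break;		/* pool exhausted: stop early */
		infl[i]->op = ops[i];
	}

	enq = hw_enqueue(infl, i, room);

	/* ops not taken by the ring give their requests back */
	for (j = enq; j < i; j++)
		req_put(infl[j]);

	return enq;
}
```

The same release-on-partial-enqueue pattern appears in bcmfs_sym_pmd_enqueue_op_burst() below, with rte_mempool_get()/rte_mempool_put() in place of the toy pool.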
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 15 ++
drivers/crypto/bcmfs/bcmfs_device.h | 6 +
drivers/crypto/bcmfs/bcmfs_qp.c | 37 +++
drivers/crypto/bcmfs/bcmfs_qp.h | 16 ++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 387 +++++++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_pmd.h | 38 +++
drivers/crypto/bcmfs/bcmfs_sym_req.h | 22 ++
drivers/crypto/bcmfs/meson.build | 3 +-
8 files changed, 523 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 07423d3cc1..27720e4eb8 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -14,6 +14,7 @@
#include "bcmfs_logs.h"
#include "bcmfs_qp.h"
#include "bcmfs_vfio.h"
+#include "bcmfs_sym_pmd.h"
struct bcmfs_device_attr {
const char name[BCMFS_MAX_PATH_LEN];
@@ -240,6 +241,7 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
char out_dirname[BCMFS_MAX_PATH_LEN];
uint32_t fsdev_dev[BCMFS_MAX_NODES];
enum bcmfs_device_type dtype;
+ int err;
int i = 0;
int dev_idx;
int count = 0;
@@ -291,7 +293,20 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev)
return -ENODEV;
}
+ err = bcmfs_sym_dev_create(fsdev);
+ if (err) {
+ BCMFS_LOG(WARNING,
+ "Failed to create BCMFS SYM PMD for device %s",
+ fsdev->name);
+ goto pmd_create_fail;
+ }
+
return 0;
+
+pmd_create_fail:
+ fsdev_release(fsdev);
+
+ return err;
}
static int
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index 2fb8eed143..e5ca866977 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -65,6 +65,12 @@ struct bcmfs_device {
struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES];
/* queue pair ops exported by symmetric crypto hw */
struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops;
+ /* the cryptodev attached to this bcmfs device */
+ struct rte_cryptodev *cdev;
+ /* an rte_device to register with cryptodev */
+ struct rte_device sym_rte_dev;
+ /* private info to keep with cryptodev */
+ struct bcmfs_sym_dev_private *sym_dev;
};
#endif /* _BCMFS_DEVICE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index ec1327b780..cb5ff6c61b 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -344,3 +344,40 @@ bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops)
return deq;
}
+
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats)
+{
+ int i;
+
+ if (stats == NULL) {
+ BCMFS_LOG(ERR, "invalid param: stats %p",
+ stats);
+ return;
+ }
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+
+ stats->enqueued_count += qp[i]->stats.enqueued_count;
+ stats->dequeued_count += qp[i]->stats.dequeued_count;
+ stats->enqueue_err_count += qp[i]->stats.enqueue_err_count;
+ stats->dequeue_err_count += qp[i]->stats.dequeue_err_count;
+ }
+}
+
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp)
+{
+ int i;
+
+ for (i = 0; i < num_qp; i++) {
+ if (qp[i] == NULL) {
+ BCMFS_LOG(DEBUG, "Uninitialised qp %d", i);
+ continue;
+ }
+ memset(&qp[i]->stats, 0, sizeof(qp[i]->stats));
+ }
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h
index 59785865b0..57fe0a93a3 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.h
+++ b/drivers/crypto/bcmfs/bcmfs_qp.h
@@ -24,6 +24,13 @@ enum bcmfs_queue_type {
BCMFS_RM_CPLQ
};
+#define BCMFS_QP_IOBASE_XLATE(base, idx) \
+ ((base) + ((idx) * BCMFS_HW_QUEUE_IO_ADDR_LEN))
+
+/* Max pkts for preprocessing before submitting to h/w qp */
+#define BCMFS_MAX_REQS_BUFF 64
+
+/* qp stats */
struct bcmfs_qp_stats {
/* Count of all operations enqueued */
uint64_t enqueued_count;
@@ -92,6 +99,10 @@ struct bcmfs_qp {
struct bcmfs_qp_stats stats;
/* h/w ops associated with qp */
struct bcmfs_hw_queue_pair_ops *ops;
+ /* bcmfs requests pool */
+ struct rte_mempool *sr_mp;
+ /* a temporary buffer to keep message pointers */
+ struct bcmfs_qp_message *infl_msgs[BCMFS_MAX_REQS_BUFF];
} __rte_cache_aligned;
@@ -123,4 +134,9 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr,
uint16_t queue_pair_id,
struct bcmfs_qp_config *bcmfs_conf);
+/* stats functions */
+void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp,
+ struct bcmfs_qp_stats *stats);
+void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp);
+
#endif /* _BCMFS_QP_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
new file mode 100644
index 0000000000..0f96915f70
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_qp.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_req.h"
+
+uint8_t cryptodev_bcmfs_driver_id;
+
+static int bcmfs_sym_qp_release(struct rte_cryptodev *dev,
+ uint16_t queue_pair_id);
+
+static int
+bcmfs_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
+ __rte_unused struct rte_cryptodev_config *config)
+{
+ return 0;
+}
+
+static int
+bcmfs_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
+{
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+static int
+bcmfs_sym_dev_close(struct rte_cryptodev *dev)
+{
+ int i, ret;
+
+ for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+ ret = bcmfs_sym_qp_release(dev, i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void
+bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_info *dev_info)
+{
+ struct bcmfs_sym_dev_private *internals = dev->data->dev_private;
+ struct bcmfs_device *fsdev = internals->fsdev;
+
+ if (dev_info != NULL) {
+ dev_info->driver_id = cryptodev_bcmfs_driver_id;
+ dev_info->feature_flags = dev->feature_flags;
+ dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
+ /* No limit of number of sessions */
+ dev_info->sym.max_nb_sessions = 0;
+ }
+}
+
+static void
+bcmfs_sym_stats_get(struct rte_cryptodev *dev,
+ struct rte_cryptodev_stats *stats)
+{
+ struct bcmfs_qp_stats bcmfs_stats = {0};
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (stats == NULL || dev == NULL) {
+ BCMFS_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_get(fsdev->qps_in_use, fsdev->max_hw_qps, &bcmfs_stats);
+
+ stats->enqueued_count = bcmfs_stats.enqueued_count;
+ stats->dequeued_count = bcmfs_stats.dequeued_count;
+ stats->enqueue_err_count = bcmfs_stats.enqueue_err_count;
+ stats->dequeue_err_count = bcmfs_stats.dequeue_err_count;
+}
+
+static void
+bcmfs_sym_stats_reset(struct rte_cryptodev *dev)
+{
+ struct bcmfs_sym_dev_private *bcmfs_priv;
+ struct bcmfs_device *fsdev;
+
+ if (dev == NULL) {
+ BCMFS_LOG(ERR, "invalid cryptodev ptr %p", dev);
+ return;
+ }
+ bcmfs_priv = dev->data->dev_private;
+ fsdev = bcmfs_priv->fsdev;
+
+ bcmfs_qp_stats_reset(fsdev->qps_in_use, fsdev->max_hw_qps);
+}
+
+static int
+bcmfs_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+ struct bcmfs_sym_dev_private *bcmfs_private = dev->data->dev_private;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)
+ (dev->data->queue_pairs[queue_pair_id]);
+
+ BCMFS_LOG(DEBUG, "Release sym qp %u on device %d",
+ queue_pair_id, dev->data->dev_id);
+
+ rte_mempool_free(qp->sr_mp);
+
+ bcmfs_private->fsdev->qps_in_use[queue_pair_id] = NULL;
+
+ return bcmfs_qp_release((struct bcmfs_qp **)
+ &dev->data->queue_pairs[queue_pair_id]);
+}
+
+static void
+spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
+{
+ memset(sr, 0, sizeof(*sr));
+}
+
+static void
+req_pool_obj_init(__rte_unused struct rte_mempool *mp,
+ __rte_unused void *opaque, void *obj,
+ __rte_unused unsigned int obj_idx)
+{
+ spu_req_init(obj, rte_mempool_virt2iova(obj));
+}
+
+static struct rte_mempool *
+bcmfs_sym_req_pool_create(struct rte_cryptodev *cdev __rte_unused,
+ uint32_t nobjs, uint16_t qp_id,
+ int socket_id)
+{
+ char softreq_pool_name[RTE_RING_NAMESIZE];
+ struct rte_mempool *mp;
+
+ snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_%d",
+ "bcm_sym", qp_id);
+
+ mp = rte_mempool_create(softreq_pool_name,
+ RTE_ALIGN_MUL_CEIL(nobjs, 64),
+ sizeof(struct bcmfs_sym_request),
+ 64, 0, NULL, NULL, req_pool_obj_init, NULL,
+ socket_id, 0);
+ if (mp == NULL)
+ BCMFS_LOG(ERR, "Failed to create req pool, qid %d, err %d",
+ qp_id, rte_errno);
+
+ return mp;
+}
+
+static int
+bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
+ const struct rte_cryptodev_qp_conf *qp_conf,
+ int socket_id)
+{
+ int ret = 0;
+ struct bcmfs_qp *qp = NULL;
+ struct bcmfs_qp_config bcmfs_qp_conf;
+
+ struct bcmfs_qp **qp_addr =
+ (struct bcmfs_qp **)&cdev->data->queue_pairs[qp_id];
+ struct bcmfs_sym_dev_private *bcmfs_private = cdev->data->dev_private;
+ struct bcmfs_device *fsdev = bcmfs_private->fsdev;
+
+
+ /* If qp is already in use free ring memory and qp metadata. */
+ if (*qp_addr != NULL) {
+ ret = bcmfs_sym_qp_release(cdev, qp_id);
+ if (ret < 0)
+ return ret;
+ }
+
+ if (qp_id >= fsdev->max_hw_qps) {
+ BCMFS_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+ return -EINVAL;
+ }
+
+ bcmfs_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+ bcmfs_qp_conf.socket_id = socket_id;
+ bcmfs_qp_conf.max_descs_req = BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ;
+ bcmfs_qp_conf.iobase = BCMFS_QP_IOBASE_XLATE(fsdev->mmap_addr, qp_id);
+ bcmfs_qp_conf.ops = fsdev->sym_hw_qp_ops;
+
+ ret = bcmfs_qp_setup(qp_addr, qp_id, &bcmfs_qp_conf);
+ if (ret != 0)
+ return ret;
+
+ qp = (struct bcmfs_qp *)*qp_addr;
+
+ qp->sr_mp = bcmfs_sym_req_pool_create(cdev, qp_conf->nb_descriptors,
+ qp_id, socket_id);
+ if (qp->sr_mp == NULL)
+ return -ENOMEM;
+
+ /* store a link to the qp in the bcmfs_device */
+ bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;
+
+ cdev->data->queue_pairs[qp_id] = qp;
+ BCMFS_LOG(NOTICE, "queue %d setup done\n", qp_id);
+
+ return 0;
+}
+
+static struct rte_cryptodev_ops crypto_bcmfs_ops = {
+ /* Device related operations */
+ .dev_configure = bcmfs_sym_dev_config,
+ .dev_start = bcmfs_sym_dev_start,
+ .dev_stop = bcmfs_sym_dev_stop,
+ .dev_close = bcmfs_sym_dev_close,
+ .dev_infos_get = bcmfs_sym_dev_info_get,
+ /* Stats Collection */
+ .stats_get = bcmfs_sym_stats_get,
+ .stats_reset = bcmfs_sym_stats_reset,
+ /* Queue-Pair management */
+ .queue_pair_setup = bcmfs_sym_qp_setup,
+ .queue_pair_release = bcmfs_sym_qp_release,
+};
+
+/** Enqueue burst */
+static uint16_t
+bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i, j;
+ uint16_t enq = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
+
+ if (nb_ops == 0)
+ return 0;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ /* We do not process more than available space */
+ if (nb_ops > (qp->nb_descriptors - qp->nb_pending_requests))
+ nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
+
+ for (i = 0; i < nb_ops; i++) {
+ if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
+ goto enqueue_err;
+
+ /* save rte_crypto_op */
+ sreq->op = ops[i];
+
+ /* save context */
+ qp->infl_msgs[i] = &sreq->msgs;
+ qp->infl_msgs[i]->ctx = (void *)sreq;
+ }
+ /* Send burst request to hw QP */
+ enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
+
+ for (j = enq; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+
+enqueue_err:
+ for (j = 0; j < i; j++)
+ rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx);
+
+ return enq;
+}
+
+static uint16_t
+bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
+ struct rte_crypto_op **ops,
+ uint16_t nb_ops)
+{
+ int i;
+ uint16_t deq = 0;
+ unsigned int pkts = 0;
+ struct bcmfs_sym_request *sreq;
+ struct bcmfs_qp *qp = queue_pair;
+
+ if (nb_ops > BCMFS_MAX_REQS_BUFF)
+ nb_ops = BCMFS_MAX_REQS_BUFF;
+
+ deq = bcmfs_dequeue_op_burst(qp, (void **)qp->infl_msgs, nb_ops);
+ /* get rte_crypto_ops */
+ for (i = 0; i < deq; i++) {
+ sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+
+ ops[pkts++] = sreq->op;
+
+ rte_mempool_put(qp->sr_mp, sreq);
+ }
+
+ return pkts;
+}
+
+/*
+ * An rte_driver is needed in the registration of both the
+ * device and the driver with cryptodev.
+ */
+static const char bcmfs_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_BCMFS_SYM_PMD);
+static const struct rte_driver cryptodev_bcmfs_sym_driver = {
+ .name = bcmfs_sym_drv_name,
+ .alias = bcmfs_sym_drv_name
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .socket_id = rte_socket_id(),
+ .private_data_size = sizeof(struct bcmfs_sym_dev_private)
+ };
+ char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+ struct rte_cryptodev *cryptodev;
+ struct bcmfs_sym_dev_private *internals;
+
+ snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
+ fsdev->name, "sym");
+
+ /* Populate subset device to use in cryptodev device creation */
+ fsdev->sym_rte_dev.driver = &cryptodev_bcmfs_sym_driver;
+ fsdev->sym_rte_dev.numa_node = 0;
+ fsdev->sym_rte_dev.devargs = NULL;
+
+ cryptodev = rte_cryptodev_pmd_create(name,
+ &fsdev->sym_rte_dev,
+ &init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ fsdev->sym_rte_dev.name = cryptodev->data->name;
+ cryptodev->driver_id = cryptodev_bcmfs_driver_id;
+ cryptodev->dev_ops = &crypto_bcmfs_ops;
+
+ cryptodev->enqueue_burst = bcmfs_sym_pmd_enqueue_op_burst;
+ cryptodev->dequeue_burst = bcmfs_sym_pmd_dequeue_op_burst;
+
+ cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+ RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
+
+ internals = cryptodev->data->dev_private;
+ internals->fsdev = fsdev;
+ fsdev->sym_dev = internals;
+
+ internals->sym_dev_id = cryptodev->data->dev_id;
+
+ BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
+ cryptodev->data->name, internals->sym_dev_id);
+ return 0;
+}
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev)
+{
+ struct rte_cryptodev *cryptodev;
+
+ if (fsdev == NULL)
+ return -ENODEV;
+ if (fsdev->sym_dev == NULL)
+ return 0;
+
+ /* free crypto device */
+ cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id);
+ rte_cryptodev_pmd_destroy(cryptodev);
+ fsdev->sym_rte_dev.name = NULL;
+ fsdev->sym_dev = NULL;
+
+ return 0;
+}
+
+static struct cryptodev_driver bcmfs_crypto_drv;
+RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv,
+ cryptodev_bcmfs_sym_driver,
+ cryptodev_bcmfs_driver_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
new file mode 100644
index 0000000000..65d7046090
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_PMD_H_
+#define _BCMFS_SYM_PMD_H_
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_device.h"
+
+#define CRYPTODEV_NAME_BCMFS_SYM_PMD crypto_bcmfs
+
+#define BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ 16
+
+extern uint8_t cryptodev_bcmfs_driver_id;
+
+/** Private data structure for a BCMFS device.
+ * This BCMFS device offers only the symmetric crypto service;
+ * there can be one of these on each bcmfs_pci_device (VF).
+ */
+struct bcmfs_sym_dev_private {
+ /* The bcmfs device hosting the service */
+ struct bcmfs_device *fsdev;
+ /* Device instance for this rte_cryptodev */
+ uint8_t sym_dev_id;
+ /* BCMFS device symmetric crypto capabilities */
+ const struct rte_cryptodev_capabilities *fsdev_capabilities;
+};
+
+int
+bcmfs_sym_dev_create(struct bcmfs_device *fdev);
+
+int
+bcmfs_sym_dev_destroy(struct bcmfs_device *fdev);
+
+#endif /* _BCMFS_SYM_PMD_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
new file mode 100644
index 0000000000..0f0b051f1e
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_REQ_H_
+#define _BCMFS_SYM_REQ_H_
+
+#include "bcmfs_dev_msg.h"
+
+/*
+ * This structure holds the supporting data required to process an
+ * rte_crypto_op.
+ */
+struct bcmfs_sym_request {
+ /* bcmfs qp message for h/w queues to process */
+ struct bcmfs_qp_message msgs;
+ /* crypto op */
+ struct rte_crypto_op *op;
+};
+
+#endif /* _BCMFS_SYM_REQ_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index cd58bd5e25..d9a3d73e99 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -11,5 +11,6 @@ sources = files(
'bcmfs_qp.c',
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
- 'hw/bcmfs_rm_common.c'
+ 'hw/bcmfs_rm_common.c',
+ 'bcmfs_sym_pmd.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v5 6/8] crypto/bcmfs: add session handling and capabilities
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (4 preceding siblings ...)
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
@ 2020-10-07 17:18 ` Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 7/8] crypto/bcmfs: add crypto HW module Vikas Gupta
` (2 subsequent siblings)
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add session handling and the capabilities supported by the crypto HW
accelerator.
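The capability table added below is a static array that the session-setup path scans to validate a requested transform. A minimal standalone sketch of that lookup, using a hypothetical flattened entry type rather than the rte_cryptodev_capabilities layout:

```c
#include <stddef.h>

/* Hypothetical flattened auth capability entry (not the DPDK struct). */
enum auth_algo { AUTH_SHA1, AUTH_SHA256, AUTH_SHA512 };

struct auth_capability {
	enum auth_algo algo;
	unsigned int block_size;	/* bytes */
	unsigned int digest_size;	/* bytes */
};

static const struct auth_capability caps[] = {
	{ AUTH_SHA1,   64,  20 },
	{ AUTH_SHA256, 64,  32 },
	{ AUTH_SHA512, 128, 64 },
};

/* Linear scan, as session setup would do to validate a requested xform. */
const struct auth_capability *
find_auth_capability(enum auth_algo algo)
{
	for (size_t i = 0; i < sizeof(caps) / sizeof(caps[0]); i++)
		if (caps[i].algo == algo)
			return &caps[i];
	return NULL;		/* algorithm not offloadable */
}
```

A NULL return maps naturally onto rejecting the session with -ENOTSUP; the real table additionally carries key, IV, and AAD size ranges that must be checked against the xform.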
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
doc/guides/cryptodevs/bcmfs.rst | 47 ++
doc/guides/cryptodevs/features/bcmfs.ini | 56 ++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 +
drivers/crypto/bcmfs/bcmfs_sym_defs.h | 34 +
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 13 +
drivers/crypto/bcmfs/bcmfs_sym_session.c | 282 +++++++
drivers/crypto/bcmfs/bcmfs_sym_session.h | 109 +++
drivers/crypto/bcmfs/meson.build | 4 +-
9 files changed, 1324 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index 6b68673df0..f7e15f4cfb 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -15,6 +15,47 @@ Supported Broadcom SoCs
* Stingray
* Stingray2
+Features
+--------
+
+The BCMFS SYM PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_3DES_CBC``
+* ``RTE_CRYPTO_CIPHER_3DES_CTR``
+* ``RTE_CRYPTO_CIPHER_AES128_CBC``
+* ``RTE_CRYPTO_CIPHER_AES192_CBC``
+* ``RTE_CRYPTO_CIPHER_AES256_CBC``
+* ``RTE_CRYPTO_CIPHER_AES128_CTR``
+* ``RTE_CRYPTO_CIPHER_AES192_CTR``
+* ``RTE_CRYPTO_CIPHER_AES256_CTR``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_XCBC_MAC``
+* ``RTE_CRYPTO_AUTH_AES_CBC_MAC``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_AES_GMAC``
+* ``RTE_CRYPTO_AUTH_AES_CMAC``
+
+AEAD algorithms:
+
+* ``RTE_CRYPTO_AEAD_AES_GCM``
+* ``RTE_CRYPTO_AEAD_AES_CCM``
+
Installation
------------
Information about kernel, rootfs and toolchain can be found at
@@ -49,3 +90,9 @@ For example, below commands can be run to get hold of a device node by VFIO.
io_device_name="vfio-platform"
echo $io_device_name > /sys/bus/platform/devices/${SETUP_SYSFS_DEV_NAME}/driver_override
echo ${SETUP_SYSFS_DEV_NAME} > /sys/bus/platform/drivers_probe
+
+Limitations
+-----------
+
+* Only the session-oriented API is supported (session-less APIs are not).
+* CCM is not supported on Broadcom's SoCs having the FlexSparc4 unit.
diff --git a/doc/guides/cryptodevs/features/bcmfs.ini b/doc/guides/cryptodevs/features/bcmfs.ini
new file mode 100644
index 0000000000..6a718856b9
--- /dev/null
+++ b/doc/guides/cryptodevs/features/bcmfs.ini
@@ -0,0 +1,56 @@
+;
+; Supported features of the 'bcmfs' crypto driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Symmetric crypto = Y
+Sym operation chaining = Y
+HW Accelerated = Y
+Protocol offload = Y
+OOP LB In LB Out = Y
+
+;
+; Supported crypto algorithms of the 'bcmfs' crypto driver.
+;
+[Cipher]
+AES CBC (128) = Y
+AES CBC (192) = Y
+AES CBC (256) = Y
+AES CTR (128) = Y
+AES CTR (192) = Y
+AES CTR (256) = Y
+AES XTS (128) = Y
+AES XTS (256) = Y
+3DES CBC = Y
+DES CBC = Y
+;
+; Supported authentication algorithms of the 'bcmfs' crypto driver.
+;
+[Auth]
+MD5 HMAC = Y
+SHA1 = Y
+SHA1 HMAC = Y
+SHA224 = Y
+SHA224 HMAC = Y
+SHA256 = Y
+SHA256 HMAC = Y
+SHA384 = Y
+SHA384 HMAC = Y
+SHA512 = Y
+SHA512 HMAC = Y
+AES GMAC = Y
+AES CMAC (128) = Y
+AES CBC MAC = Y
+AES XCBC MAC = Y
+
+;
+; Supported AEAD algorithms of the 'bcmfs' crypto driver.
+;
+[AEAD]
+AES GCM (128) = Y
+AES GCM (192) = Y
+AES GCM (256) = Y
+AES CCM (128) = Y
+AES CCM (192) = Y
+AES CCM (256) = Y
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
new file mode 100644
index 0000000000..afed7696a6
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c
@@ -0,0 +1,764 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_cryptodev.h>
+
+#include "bcmfs_sym_capabilities.h"
+
+static const struct rte_cryptodev_capabilities bcmfs_sym_capabilities[] = {
+ {
+ /* SHA1 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224,
+ .block_size = 144,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256,
+ .block_size = 136,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384,
+ .block_size = 104,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512,
+ .block_size = 72,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA1 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 20,
+ .max = 20,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* MD5 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+ .block_size = 64,
+ .key_size = {
+ .min = 1,
+ .max = 64,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+ .block_size = 128,
+ .key_size = {
+ .min = 1,
+ .max = 128,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_224 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_224_HMAC,
+ .block_size = 144,
+ .key_size = {
+ .min = 1,
+ .max = 144,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 28,
+ .max = 28,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_256 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_256_HMAC,
+ .block_size = 136,
+ .key_size = {
+ .min = 1,
+ .max = 136,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_384 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_384_HMAC,
+ .block_size = 104,
+ .key_size = {
+ .min = 1,
+ .max = 104,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 48,
+ .max = 48,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* SHA3_512 HMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA3_512_HMAC,
+ .block_size = 72,
+ .key_size = {
+ .min = 1,
+ .max = 72,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 64,
+ .max = 64,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES XCBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES GMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_GMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CMAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CMAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC MAC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_AES_CBC_MAC,
+ .block_size = 16,
+ .key_size = {
+ .min = 1,
+ .max = 16,
+ .increment = 1
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = { 0 }
+ }, }
+ }, }
+ },
+ {
+ /* AES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_ECB,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES CTR */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_CTR,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES XTS */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_AES_XTS,
+ .block_size = 16,
+ .key_size = {
+ .min = 32,
+ .max = 64,
+ .increment = 32
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 8,
+ .max = 8,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES CBC */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* 3DES ECB */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ {.cipher = {
+ .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
+ .block_size = 8,
+ .key_size = {
+ .min = 24,
+ .max = 24,
+ .increment = 0
+ },
+ .iv_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ }
+ }, }
+ }, }
+ },
+ {
+ /* AES GCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 16,
+ .max = 16,
+ .increment = 0
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 12,
+ .max = 16,
+ .increment = 4
+ },
+ }, }
+ }, }
+ },
+ {
+ /* AES CCM */
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ {.aead = {
+ .algo = RTE_CRYPTO_AEAD_AES_CCM,
+ .block_size = 16,
+ .key_size = {
+ .min = 16,
+ .max = 32,
+ .increment = 8
+ },
+ .digest_size = {
+ .min = 4,
+ .max = 16,
+ .increment = 2
+ },
+ .aad_size = {
+ .min = 0,
+ .max = 65535,
+ .increment = 1
+ },
+ .iv_size = {
+ .min = 7,
+ .max = 13,
+ .increment = 1
+ },
+ }, }
+ }, }
+ },
+
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+const struct rte_cryptodev_capabilities *
+bcmfs_sym_get_capabilities(void)
+{
+ return bcmfs_sym_capabilities;
+}
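For reference, each entry in the capability table above expresses its limits as {min, max, increment} triples. A minimal sketch of how such a range is checked follows; the `param_range` type and `param_range_check` helper are illustrative, not the actual `rte_cryptodev` helpers.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative range type mirroring the {min, max, increment}
 * triples in the capability table above. */
struct param_range {
	uint16_t min;
	uint16_t max;
	uint16_t increment;
};

/* A length is valid if it lies in [min, max] and, when increment is
 * non-zero, is reachable from min in whole increments. An increment
 * of 0 means only the single value min (== max) is allowed. */
static int param_range_check(const struct param_range *r, uint16_t len)
{
	if (len < r->min || len > r->max)
		return 0;
	if (r->increment == 0)
		return len == r->min;
	return ((len - r->min) % r->increment) == 0;
}
```

With the AES key-size triple {16, 32, 8} used above, lengths 16, 24 and 32 pass and everything else is rejected.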
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
new file mode 100644
index 0000000000..3ff61b7d29
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_CAPABILITIES_H_
+#define _BCMFS_SYM_CAPABILITIES_H_
+
+/*
+ * Get the symmetric crypto capabilities list for the device
+ *
+ */
+const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void);
+
+#endif /* _BCMFS_SYM_CAPABILITIES_H_ */
+
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
new file mode 100644
index 0000000000..aea1f281e4
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_DEFS_H_
+#define _BCMFS_SYM_DEFS_H_
+
+/*
+ * Maximum block size among supported hash
+ * algorithms; currently SHA3 has the largest
+ * block size, 144 bytes.
+ */
+#define BCMFS_MAX_KEY_SIZE 144
+#define BCMFS_MAX_IV_SIZE 16
+#define BCMFS_MAX_DIGEST_SIZE 64
+
+struct bcmfs_sym_session;
+struct bcmfs_sym_request;
+
+/** Crypto Request processing successful. */
+#define BCMFS_SYM_RESPONSE_SUCCESS (0)
+/** Crypto Request processing protocol failure. */
+#define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1)
+/** Crypto Request processing completion failure. */
+#define BCMFS_SYM_RESPONSE_COMPL_ERROR (2)
+/** Crypto Request processing hash tag check error. */
+#define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3)
+
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+#endif /* _BCMFS_SYM_DEFS_H_ */
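As context for the response codes above, a hypothetical completion-path mapping to cryptodev op statuses might look like the sketch below. The `resp_to_status` helper and the `op_status` enum are illustrative stand-ins (not the driver's code, and `STATUS_*` are not the real `RTE_CRYPTO_OP_STATUS_*` values); the driver's actual completion handling may differ.

```c
#include <assert.h>

/* Illustrative copies of the BCMFS_SYM_RESPONSE_* codes above. */
#define RESP_SUCCESS        0
#define RESP_PROTO_FAILURE  1
#define RESP_COMPL_ERROR    2
#define RESP_HASH_TAG_ERROR 3

/* Hypothetical op statuses standing in for RTE_CRYPTO_OP_STATUS_*. */
enum op_status { STATUS_SUCCESS, STATUS_AUTH_FAILED, STATUS_ERROR };

static enum op_status resp_to_status(int resp)
{
	switch (resp) {
	case RESP_SUCCESS:
		return STATUS_SUCCESS;
	case RESP_HASH_TAG_ERROR:
		/* a digest mismatch maps naturally to an auth failure */
		return STATUS_AUTH_FAILED;
	default:
		/* protocol and completion failures are generic errors */
		return STATUS_ERROR;
	}
}
```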
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 0f96915f70..381ca8ea48 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -14,6 +14,8 @@
#include "bcmfs_qp.h"
#include "bcmfs_sym_pmd.h"
#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+#include "bcmfs_sym_capabilities.h"
uint8_t cryptodev_bcmfs_driver_id;
@@ -65,6 +67,7 @@ bcmfs_sym_dev_info_get(struct rte_cryptodev *dev,
dev_info->max_nb_queue_pairs = fsdev->max_hw_qps;
/* No limit of number of sessions */
dev_info->sym.max_nb_sessions = 0;
+ dev_info->capabilities = bcmfs_sym_get_capabilities();
}
}
@@ -228,6 +231,10 @@ static struct rte_cryptodev_ops crypto_bcmfs_ops = {
/* Queue-Pair management */
.queue_pair_setup = bcmfs_sym_qp_setup,
.queue_pair_release = bcmfs_sym_qp_release,
+ /* Crypto session related operations */
+ .sym_session_get_size = bcmfs_sym_session_get_private_size,
+ .sym_session_configure = bcmfs_sym_session_configure,
+ .sym_session_clear = bcmfs_sym_session_clear
};
/** Enqueue burst */
@@ -239,6 +246,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
int i, j;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
+ struct bcmfs_sym_session *sess;
struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair;
if (nb_ops == 0)
@@ -252,6 +260,10 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
nb_ops = qp->nb_descriptors - qp->nb_pending_requests;
for (i = 0; i < nb_ops; i++) {
+ sess = bcmfs_sym_get_session(ops[i]);
+ if (unlikely(sess == NULL))
+ goto enqueue_err;
+
if (rte_mempool_get(qp->sr_mp, (void **)&sreq))
goto enqueue_err;
@@ -356,6 +368,7 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev)
fsdev->sym_dev = internals;
internals->sym_dev_id = cryptodev->data->dev_id;
+ internals->fsdev_capabilities = bcmfs_sym_get_capabilities();
BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d",
cryptodev->data->name, internals->sym_dev_id);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
new file mode 100644
index 0000000000..675ed0ad55
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_crypto.h>
+#include <rte_crypto_sym.h>
+#include <rte_log.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_pmd.h"
+#include "bcmfs_sym_session.h"
+
+/** Determine the chain order from a crypto xform chain */
+static enum bcmfs_sym_chain_order
+crypto_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+ enum bcmfs_sym_chain_order res = BCMFS_SYM_CHAIN_NOT_SUPPORTED;
+
+ if (xform != NULL) {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+ res = BCMFS_SYM_CHAIN_AEAD;
+
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_AUTH;
+ else if (xform->next->type ==
+ RTE_CRYPTO_SYM_XFORM_CIPHER)
+ res = BCMFS_SYM_CHAIN_AUTH_CIPHER;
+ }
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+ if (xform->next == NULL)
+ res = BCMFS_SYM_CHAIN_ONLY_CIPHER;
+ else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ res = BCMFS_SYM_CHAIN_CIPHER_AUTH;
+ }
+ }
+
+ return res;
+}
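The decision above can be exercised standalone. The sketch below reproduces the same logic with hypothetical types (`xform`, `chain_order`); it is not the driver's code, only a self-contained restatement of the rule: AEAD stands alone, and auth/cipher chain only with each other.

```c
#include <assert.h>
#include <stddef.h>

enum xform_type { XF_AUTH, XF_CIPHER, XF_AEAD };
enum chain_order {
	ONLY_CIPHER, ONLY_AUTH, CIPHER_AUTH,
	AUTH_CIPHER, AEAD, NOT_SUPPORTED
};

struct xform {
	enum xform_type type;
	const struct xform *next;
};

/* Same decision tree as crypto_get_chain_order() above. */
static enum chain_order get_chain_order(const struct xform *xf)
{
	if (xf == NULL)
		return NOT_SUPPORTED;
	if (xf->type == XF_AEAD)
		return AEAD;
	if (xf->type == XF_AUTH)
		return xf->next == NULL ? ONLY_AUTH :
		       (xf->next->type == XF_CIPHER ? AUTH_CIPHER :
							NOT_SUPPORTED);
	if (xf->type == XF_CIPHER)
		return xf->next == NULL ? ONLY_CIPHER :
		       (xf->next->type == XF_AUTH ? CIPHER_AUTH :
							NOT_SUPPORTED);
	return NOT_SUPPORTED;
}
```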
+
+/* Get session cipher key from input cipher key */
+static void
+get_key(const uint8_t *input_key, int keylen, uint8_t *session_key)
+{
+ memcpy(session_key, input_key, keylen);
+}
+
+/* Set session cipher parameters */
+static int
+crypto_set_session_cipher_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_cipher_xform *cipher_xform)
+{
+ if (cipher_xform->key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->cipher.key.length = cipher_xform->key.length;
+ sess->cipher.iv.offset = cipher_xform->iv.offset;
+ sess->cipher.iv.length = cipher_xform->iv.length;
+ sess->cipher.op = cipher_xform->op;
+ sess->cipher.algo = cipher_xform->algo;
+
+ get_key(cipher_xform->key.data,
+ sess->cipher.key.length,
+ sess->cipher.key.data);
+
+ return 0;
+}
+
+/* Set session auth parameters */
+static int
+crypto_set_session_auth_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_auth_xform *auth_xform)
+{
+ if (auth_xform->key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->auth.op = auth_xform->op;
+ sess->auth.key.length = auth_xform->key.length;
+ sess->auth.digest_length = auth_xform->digest_length;
+ sess->auth.iv.length = auth_xform->iv.length;
+ sess->auth.iv.offset = auth_xform->iv.offset;
+ sess->auth.algo = auth_xform->algo;
+
+ get_key(auth_xform->key.data,
+ auth_xform->key.length,
+ sess->auth.key.data);
+
+ return 0;
+}
+
+/* Set session aead parameters */
+static int
+crypto_set_session_aead_parameters(struct bcmfs_sym_session *sess,
+ const struct rte_crypto_sym_xform *aead_xform)
+{
+ if (aead_xform->aead.key.length > BCMFS_MAX_KEY_SIZE) {
+ BCMFS_DP_LOG(ERR, "key length not supported");
+ return -EINVAL;
+ }
+
+ sess->aead.iv.offset = aead_xform->aead.iv.offset;
+ sess->aead.iv.length = aead_xform->aead.iv.length;
+ sess->aead.aad_length = aead_xform->aead.aad_length;
+ sess->aead.key.length = aead_xform->aead.key.length;
+ sess->aead.digest_length = aead_xform->aead.digest_length;
+ sess->aead.op = aead_xform->aead.op;
+ sess->aead.algo = aead_xform->aead.algo;
+
+ get_key(aead_xform->aead.key.data,
+ aead_xform->aead.key.length,
+ sess->aead.key.data);
+
+ return 0;
+}
+
+static struct rte_crypto_auth_xform *
+crypto_get_auth_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+ return &xform->auth;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+static struct rte_crypto_cipher_xform *
+crypto_get_cipher_xform(struct rte_crypto_sym_xform *xform)
+{
+ do {
+ if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return &xform->cipher;
+
+ xform = xform->next;
+ } while (xform);
+
+ return NULL;
+}
+
+/** Parse crypto xform chain and set private session parameters */
+static int
+crypto_set_session_parameters(struct bcmfs_sym_session *sess,
+ struct rte_crypto_sym_xform *xform)
+{
+ int rc = 0;
+ struct rte_crypto_cipher_xform *cipher_xform =
+ crypto_get_cipher_xform(xform);
+ struct rte_crypto_auth_xform *auth_xform =
+ crypto_get_auth_xform(xform);
+
+ sess->chain_order = crypto_get_chain_order(xform);
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ if (crypto_set_session_auth_parameters(sess, auth_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ sess->cipher_first = false;
+ if (crypto_set_session_auth_parameters(sess, auth_xform)) {
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ sess->cipher_first = true;
+ if (crypto_set_session_auth_parameters(sess, auth_xform)) {
+ rc = -EINVAL;
+ goto error;
+ }
+
+ if (crypto_set_session_cipher_parameters(sess, cipher_xform))
+ rc = -EINVAL;
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ if (crypto_set_session_aead_parameters(sess, xform))
+ rc = -EINVAL;
+ break;
+ default:
+ BCMFS_DP_LOG(ERR, "Invalid chain order");
+ rc = -EINVAL;
+ break;
+ }
+
+error:
+ return rc;
+}
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op)
+{
+ struct bcmfs_sym_session *sess = NULL;
+
+ if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
+ BCMFS_DP_LOG(ERR, "operation op(%p) is sessionless", op);
+ } else if (likely(op->sym->session != NULL)) {
+ /* get existing session */
+ sess = (struct bcmfs_sym_session *)
+ get_sym_session_private_data(op->sym->session,
+ cryptodev_bcmfs_driver_id);
+ }
+
+ if (sess == NULL)
+ op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+
+ return sess;
+}
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+ int ret;
+
+ if (unlikely(sess == NULL)) {
+ BCMFS_DP_LOG(ERR, "Invalid session struct");
+ return -EINVAL;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ BCMFS_DP_LOG(ERR,
+ "Couldn't get object from session mempool");
+ return -ENOMEM;
+ }
+
+ ret = crypto_set_session_parameters(sess_private_data, xform);
+
+ if (ret != 0) {
+ BCMFS_DP_LOG(ERR, "Failed to configure session parameters");
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return ret;
+ }
+
+ set_sym_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
+}
+
+/* Clear the memory of session so it doesn't leave key material behind */
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t index = dev->driver_id;
+ void *sess_priv = get_sym_session_private_data(sess, index);
+
+ if (sess_priv) {
+ struct rte_mempool *sess_mp;
+
+ memset(sess_priv, 0, sizeof(struct bcmfs_sym_session));
+ sess_mp = rte_mempool_from_obj(sess_priv);
+
+ set_sym_session_private_data(sess, index, NULL);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+}
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused)
+{
+ return sizeof(struct bcmfs_sym_session);
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h
new file mode 100644
index 0000000000..8240c6fc25
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_SESSION_H_
+#define _BCMFS_SYM_SESSION_H_
+
+#include <stdbool.h>
+#include <rte_crypto.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* BCMFS_SYM operation order mode enumerator */
+enum bcmfs_sym_chain_order {
+ BCMFS_SYM_CHAIN_ONLY_CIPHER,
+ BCMFS_SYM_CHAIN_ONLY_AUTH,
+ BCMFS_SYM_CHAIN_CIPHER_AUTH,
+ BCMFS_SYM_CHAIN_AUTH_CIPHER,
+ BCMFS_SYM_CHAIN_AEAD,
+ BCMFS_SYM_CHAIN_NOT_SUPPORTED
+};
+
+/* BCMFS_SYM crypto private session structure */
+struct bcmfs_sym_session {
+ enum bcmfs_sym_chain_order chain_order;
+
+ /* Cipher Parameters */
+ struct {
+ enum rte_crypto_cipher_operation op;
+ /* Cipher operation */
+ enum rte_crypto_cipher_algorithm algo;
+ /* Cipher algorithm */
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+ } cipher;
+
+ /* Authentication Parameters */
+ struct {
+ enum rte_crypto_auth_operation op;
+ /* Auth operation */
+ enum rte_crypto_auth_algorithm algo;
+ /* Auth algorithm */
+
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+ } auth;
+
+ /* Aead Parameters */
+ struct {
+ enum rte_crypto_aead_operation op;
+ /* AEAD operation */
+ enum rte_crypto_aead_algorithm algo;
+ /* AEAD algorithm */
+ struct {
+ uint8_t data[BCMFS_MAX_KEY_SIZE];
+ size_t length;
+ } key;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
+
+ uint16_t digest_length;
+
+ uint16_t aad_length;
+ } aead;
+
+ bool cipher_first;
+} __rte_cache_aligned;
+
+int
+bcmfs_process_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req);
+
+int
+bcmfs_sym_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool);
+
+void
+bcmfs_sym_session_clear(struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess);
+
+unsigned int
+bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused);
+
+struct bcmfs_sym_session *
+bcmfs_sym_get_session(struct rte_crypto_op *op);
+
+#endif /* _BCMFS_SYM_SESSION_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index d9a3d73e99..2e86c733e1 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -12,5 +12,7 @@ sources = files(
'hw/bcmfs4_rm.c',
'hw/bcmfs5_rm.c',
'hw/bcmfs_rm_common.c',
- 'bcmfs_sym_pmd.c'
+ 'bcmfs_sym_pmd.c',
+ 'bcmfs_sym_capabilities.c',
+ 'bcmfs_sym_session.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v5 7/8] crypto/bcmfs: add crypto HW module
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (5 preceding siblings ...)
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
@ 2020-10-07 17:18 ` Vikas Gupta
2020-10-07 17:19 ` [dpdk-dev] [PATCH v5 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-10-09 15:00 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
8 siblings, 0 replies; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:18 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add the crypto h/w module to process crypto ops. Each crypto op is
translated by the sym_engine module before the resulting request is
submitted to the HW queues.
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/crypto/bcmfs/bcmfs_sym.c | 289 ++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.c | 1155 +++++++++++++++++++++++
drivers/crypto/bcmfs/bcmfs_sym_engine.h | 115 +++
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 26 +
drivers/crypto/bcmfs/bcmfs_sym_req.h | 40 +
drivers/crypto/bcmfs/meson.build | 4 +-
6 files changed, 1628 insertions(+), 1 deletion(-)
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c
create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h
diff --git a/drivers/crypto/bcmfs/bcmfs_sym.c b/drivers/crypto/bcmfs/bcmfs_sym.c
new file mode 100644
index 0000000000..2d164a1ec8
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym.c
@@ -0,0 +1,289 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+
+#include <rte_byteorder.h>
+#include <rte_crypto_sym.h>
+#include <rte_cryptodev.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_engine.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_session.h"
+
+/** Process cipher operation */
+static int
+process_crypto_cipher_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv, key;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ op->sym->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+ fsattr_pa(&dst) = rte_pktmbuf_iova(mbuf_dst);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&key) = sess->cipher.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->cipher.key.length;
+
+ rc = bcmfs_crypto_build_cipher_req(req, sess->cipher.algo,
+ sess->cipher.op, &src,
+ &dst, &key, &iv);
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process auth operation */
+static int
+process_crypto_auth_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, mac, key, iv;
+
+ fsattr_sz(&src) = op->sym->auth.data.length;
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ op->sym->auth.data.offset);
+ fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src);
+
+ if (!sess->auth.op) {
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+ } else {
+ fsattr_va(&dst) = op->sym->auth.digest.data;
+ fsattr_pa(&dst) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&dst) = sess->auth.digest_length;
+ }
+
+ fsattr_va(&key) = sess->auth.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->auth.key.length;
+
+ /* AES-GMAC uses AES-GCM-128 authenticator */
+ if (sess->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->auth.iv.offset);
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->auth.iv.length;
+ } else {
+ fsattr_va(&iv) = NULL;
+ fsattr_sz(&iv) = 0;
+ }
+
+ rc = bcmfs_crypto_build_auth_req(req, sess->auth.algo,
+ sess->auth.op,
+ &src,
+ (sess->auth.op) ? (&dst) : NULL,
+ (sess->auth.op) ? NULL : (&mac),
+ &key, &iv);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process combined/chained mode operation */
+static int
+process_crypto_combined_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0, aad_size = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr cipher_key, aad, mac, auth_key;
+
+ fsattr_sz(&src) = sym_op->cipher.data.length;
+ fsattr_sz(&dst) = sym_op->cipher.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset
+ (mbuf_dst,
+ uint8_t *,
+ sym_op->cipher.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->cipher.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->cipher.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->cipher.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->cipher.iv.length;
+
+ fsattr_va(&cipher_key) = sess->cipher.key.data;
+ fsattr_pa(&cipher_key) = 0;
+ fsattr_sz(&cipher_key) = sess->cipher.key.length;
+
+ fsattr_va(&auth_key) = sess->auth.key.data;
+ fsattr_pa(&auth_key) = 0;
+ fsattr_sz(&auth_key) = sess->auth.key.length;
+
+ fsattr_va(&mac) = op->sym->auth.digest.data;
+ fsattr_pa(&mac) = op->sym->auth.digest.phys_addr;
+ fsattr_sz(&mac) = sess->auth.digest_length;
+
+ aad_size = sym_op->auth.data.length - sym_op->cipher.data.length;
+
+ if (aad_size > 0) {
+ fsattr_sz(&aad) = aad_size;
+ fsattr_va(&aad) = rte_pktmbuf_mtod_offset
+ (mbuf_src,
+ uint8_t *,
+ sym_op->auth.data.offset);
+ fsattr_pa(&aad) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->auth.data.offset);
+ }
+
+ rc = bcmfs_crypto_build_chain_request(req, sess->cipher.algo,
+ sess->cipher.op,
+ sess->auth.algo,
+ sess->auth.op,
+ &src, &dst, &cipher_key,
+ &auth_key, &iv,
+ (aad_size > 0) ? (&aad) : NULL,
+ &mac, sess->cipher_first);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
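In the chained path above, the auth region is assumed to cover the AAD followed by the cipher region, so the AAD length falls out as the difference of the two lengths. A tiny illustrative helper (not driver code; `chained_aad_size` is a hypothetical name) makes the relationship explicit:

```c
#include <assert.h>
#include <stdint.h>

/* auth.data covers [aad][cipher.data]; whatever the auth transform
 * covers beyond the cipher region is treated as AAD. A non-positive
 * result means there is no AAD to hand to the engine. */
static int chained_aad_size(uint32_t auth_len, uint32_t cipher_len)
{
	return (int)auth_len - (int)cipher_len;
}
```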
+
+/** Process AEAD operation */
+static int
+process_crypto_aead_op(struct rte_crypto_op *op,
+ struct rte_mbuf *mbuf_src,
+ struct rte_mbuf *mbuf_dst,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ int rc = 0;
+ struct fsattr src, dst, iv;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct fsattr key, aad, mac;
+
+ fsattr_sz(&src) = sym_op->aead.data.length;
+ fsattr_sz(&dst) = sym_op->aead.data.length;
+
+ fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&dst) = rte_pktmbuf_mtod_offset(mbuf_dst,
+ uint8_t *,
+ sym_op->aead.data.offset);
+
+ fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src,
+ sym_op->aead.data.offset);
+ fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst,
+ sym_op->aead.data.offset);
+
+ fsattr_va(&iv) = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ sess->aead.iv.offset);
+
+ fsattr_pa(&iv) = 0;
+ fsattr_sz(&iv) = sess->aead.iv.length;
+
+ fsattr_va(&key) = sess->aead.key.data;
+ fsattr_pa(&key) = 0;
+ fsattr_sz(&key) = sess->aead.key.length;
+
+ fsattr_va(&mac) = op->sym->aead.digest.data;
+ fsattr_pa(&mac) = op->sym->aead.digest.phys_addr;
+ fsattr_sz(&mac) = sess->aead.digest_length;
+
+ fsattr_va(&aad) = op->sym->aead.aad.data;
+ fsattr_pa(&aad) = op->sym->aead.aad.phys_addr;
+ fsattr_sz(&aad) = sess->aead.aad_length;
+
+ rc = bcmfs_crypto_build_aead_request(req, sess->aead.algo,
+ sess->aead.op, &src, &dst,
+ &key, &iv, &aad, &mac);
+
+ if (rc)
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+
+ return rc;
+}
+
+/** Process crypto operation for mbuf */
+int
+bcmfs_process_sym_crypto_op(struct rte_crypto_op *op,
+ struct bcmfs_sym_session *sess,
+ struct bcmfs_sym_request *req)
+{
+ struct rte_mbuf *msrc, *mdst;
+ int rc = 0;
+
+ msrc = op->sym->m_src;
+ mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+ op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+
+ switch (sess->chain_order) {
+ case BCMFS_SYM_CHAIN_ONLY_CIPHER:
+ rc = process_crypto_cipher_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_ONLY_AUTH:
+ rc = process_crypto_auth_op(op, msrc, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_CIPHER_AUTH:
+ case BCMFS_SYM_CHAIN_AUTH_CIPHER:
+ rc = process_crypto_combined_op(op, msrc, mdst, sess, req);
+ break;
+ case BCMFS_SYM_CHAIN_AEAD:
+ rc = process_crypto_aead_op(op, msrc, mdst, sess, req);
+ break;
+ default:
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ break;
+ }
+
+ return rc;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
new file mode 100644
index 0000000000..537bfbec8b
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c
@@ -0,0 +1,1155 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include <stdbool.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_crypto_sym.h>
+
+#include "bcmfs_logs.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_req.h"
+#include "bcmfs_sym_engine.h"
+
+enum spu2_cipher_type {
+ SPU2_CIPHER_TYPE_NONE = 0x0,
+ SPU2_CIPHER_TYPE_AES128 = 0x1,
+ SPU2_CIPHER_TYPE_AES192 = 0x2,
+ SPU2_CIPHER_TYPE_AES256 = 0x3,
+ SPU2_CIPHER_TYPE_DES = 0x4,
+ SPU2_CIPHER_TYPE_3DES = 0x5,
+ SPU2_CIPHER_TYPE_LAST
+};
+
+enum spu2_cipher_mode {
+ SPU2_CIPHER_MODE_ECB = 0x0,
+ SPU2_CIPHER_MODE_CBC = 0x1,
+ SPU2_CIPHER_MODE_CTR = 0x2,
+ SPU2_CIPHER_MODE_CFB = 0x3,
+ SPU2_CIPHER_MODE_OFB = 0x4,
+ SPU2_CIPHER_MODE_XTS = 0x5,
+ SPU2_CIPHER_MODE_CCM = 0x6,
+ SPU2_CIPHER_MODE_GCM = 0x7,
+ SPU2_CIPHER_MODE_LAST
+};
+
+enum spu2_hash_type {
+ SPU2_HASH_TYPE_NONE = 0x0,
+ SPU2_HASH_TYPE_AES128 = 0x1,
+ SPU2_HASH_TYPE_AES192 = 0x2,
+ SPU2_HASH_TYPE_AES256 = 0x3,
+ SPU2_HASH_TYPE_MD5 = 0x6,
+ SPU2_HASH_TYPE_SHA1 = 0x7,
+ SPU2_HASH_TYPE_SHA224 = 0x8,
+ SPU2_HASH_TYPE_SHA256 = 0x9,
+ SPU2_HASH_TYPE_SHA384 = 0xa,
+ SPU2_HASH_TYPE_SHA512 = 0xb,
+ SPU2_HASH_TYPE_SHA512_224 = 0xc,
+ SPU2_HASH_TYPE_SHA512_256 = 0xd,
+ SPU2_HASH_TYPE_SHA3_224 = 0xe,
+ SPU2_HASH_TYPE_SHA3_256 = 0xf,
+ SPU2_HASH_TYPE_SHA3_384 = 0x10,
+ SPU2_HASH_TYPE_SHA3_512 = 0x11,
+ SPU2_HASH_TYPE_LAST
+};
+
+enum spu2_hash_mode {
+ SPU2_HASH_MODE_CMAC = 0x0,
+ SPU2_HASH_MODE_CBC_MAC = 0x1,
+ SPU2_HASH_MODE_XCBC_MAC = 0x2,
+ SPU2_HASH_MODE_HMAC = 0x3,
+ SPU2_HASH_MODE_RABIN = 0x4,
+ SPU2_HASH_MODE_CCM = 0x5,
+ SPU2_HASH_MODE_GCM = 0x6,
+ SPU2_HASH_MODE_RESERVED = 0x7,
+ SPU2_HASH_MODE_LAST
+};
+
+enum spu2_proto_sel {
+ SPU2_PROTO_RESV = 0,
+ SPU2_MACSEC_SECTAG8_ECB = 1,
+ SPU2_MACSEC_SECTAG8_SCB = 2,
+ SPU2_MACSEC_SECTAG16 = 3,
+ SPU2_MACSEC_SECTAG16_8_XPN = 4,
+ SPU2_IPSEC = 5,
+ SPU2_IPSEC_ESN = 6,
+ SPU2_TLS_CIPHER = 7,
+ SPU2_TLS_AEAD = 8,
+ SPU2_DTLS_CIPHER = 9,
+ SPU2_DTLS_AEAD = 10
+};
+
+/* SPU2 response size */
+#define SPU2_STATUS_LEN 2
+
+/* Metadata settings in response */
+enum spu2_ret_md_opts {
+ SPU2_RET_NO_MD = 0, /* return no metadata */
+ SPU2_RET_FMD_OMD = 1, /* return both FMD and OMD */
+ SPU2_RET_FMD_ONLY = 2, /* return only FMD */
+ SPU2_RET_FMD_OMD_IV = 3, /* return FMD and OMD with just IVs */
+};
+
+/* FMD ctrl0 field masks */
+#define SPU2_CIPH_ENCRYPT_EN 0x1 /* 0: decrypt, 1: encrypt */
+#define SPU2_CIPH_TYPE_SHIFT 4
+#define SPU2_CIPH_MODE 0xF00 /* one of spu2_cipher_mode */
+#define SPU2_CIPH_MODE_SHIFT 8
+#define SPU2_CFB_MASK 0x7000 /* cipher feedback mask */
+#define SPU2_CFB_MASK_SHIFT 12
+#define SPU2_PROTO_SEL 0xF00000 /* MACsec, IPsec, TLS... */
+#define SPU2_PROTO_SEL_SHIFT 20
+#define SPU2_HASH_FIRST 0x1000000 /* 1: hash input is input pkt
+ * data
+ */
+#define SPU2_CHK_TAG 0x2000000 /* 1: check digest provided */
+#define SPU2_HASH_TYPE 0x1F0000000 /* one of spu2_hash_type */
+#define SPU2_HASH_TYPE_SHIFT 28
+#define SPU2_HASH_MODE 0xF000000000 /* one of spu2_hash_mode */
+#define SPU2_HASH_MODE_SHIFT 36
+#define SPU2_CIPH_PAD_EN 0x100000000000 /* 1: Add pad to end of payload for
+ * enc
+ */
+#define SPU2_CIPH_PAD 0xFF000000000000 /* cipher pad value */
+#define SPU2_CIPH_PAD_SHIFT 48
+
+/* FMD ctrl1 field masks */
+#define SPU2_TAG_LOC 0x1 /* 1: end of payload, 0: undef */
+#define SPU2_HAS_FR_DATA 0x2 /* 1: msg has frame data */
+#define SPU2_HAS_AAD1 0x4 /* 1: msg has AAD1 field */
+#define SPU2_HAS_NAAD 0x8 /* 1: msg has NAAD field */
+#define SPU2_HAS_AAD2 0x10 /* 1: msg has AAD2 field */
+#define SPU2_HAS_ESN 0x20 /* 1: msg has ESN field */
+#define SPU2_HASH_KEY_LEN 0xFF00 /* len of hash key in bytes.
+ * HMAC only.
+ */
+#define SPU2_HASH_KEY_LEN_SHIFT 8
+#define SPU2_CIPH_KEY_LEN 0xFF00000 /* len of cipher key in bytes */
+#define SPU2_CIPH_KEY_LEN_SHIFT 20
+#define SPU2_GENIV 0x10000000 /* 1: hw generates IV */
+#define SPU2_HASH_IV 0x20000000 /* 1: IV incl in hash */
+#define SPU2_RET_IV 0x40000000 /* 1: return IV in output msg
+ * b4 payload
+ */
+#define SPU2_RET_IV_LEN 0xF00000000 /* length in bytes of IV returned.
+ * 0 = 16 bytes
+ */
+#define SPU2_RET_IV_LEN_SHIFT 32
+#define SPU2_IV_OFFSET 0xF000000000 /* gen IV offset */
+#define SPU2_IV_OFFSET_SHIFT 36
+#define SPU2_IV_LEN 0x1F0000000000 /* length of input IV in bytes */
+#define SPU2_IV_LEN_SHIFT 40
+#define SPU2_HASH_TAG_LEN 0x7F000000000000 /* hash tag length in bytes */
+#define SPU2_HASH_TAG_LEN_SHIFT 48
+#define SPU2_RETURN_MD 0x300000000000000 /* return metadata */
+#define SPU2_RETURN_MD_SHIFT 56
+#define SPU2_RETURN_FD 0x400000000000000
+#define SPU2_RETURN_AAD1 0x800000000000000
+#define SPU2_RETURN_NAAD 0x1000000000000000
+#define SPU2_RETURN_AAD2 0x2000000000000000
+#define SPU2_RETURN_PAY 0x4000000000000000 /* return payload */
+
+/* FMD ctrl2 field masks */
+#define SPU2_AAD1_OFFSET 0xFFF /* byte offset of AAD1 field */
+#define SPU2_AAD1_LEN 0xFF000 /* length of AAD1 in bytes */
+#define SPU2_AAD1_LEN_SHIFT 12
+#define SPU2_AAD2_OFFSET 0xFFF00000 /* byte offset of AAD2 field */
+#define SPU2_AAD2_OFFSET_SHIFT 20
+#define SPU2_PL_OFFSET 0xFFFFFFFF00000000 /* payload offset from AAD2 */
+#define SPU2_PL_OFFSET_SHIFT 32
+
+/* FMD ctrl3 field masks */
+#define SPU2_PL_LEN 0xFFFFFFFF /* payload length in bytes */
+#define SPU2_TLS_LEN 0xFFFF00000000 /* TLS encrypt: cipher len
+ * TLS decrypt: compressed len
+ */
+#define SPU2_TLS_LEN_SHIFT 32
+
+/*
+ * Max value that can be represented in the Payload Length field of the
+ * ctrl3 word of FMD.
+ */
+#define SPU2_MAX_PAYLOAD SPU2_PL_LEN
+
+#define SPU2_VAL_NONE 0
+
+/* CCM B_0 field definitions, common for SPU-M and SPU2 */
+#define CCM_B0_ADATA 0x40
+#define CCM_B0_ADATA_SHIFT 6
+#define CCM_B0_M_PRIME 0x38
+#define CCM_B0_M_PRIME_SHIFT 3
+#define CCM_B0_L_PRIME 0x07
+#define CCM_B0_L_PRIME_SHIFT 0
+#define CCM_ESP_L_VALUE 4
+
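For reference, the CCM B_0 flags byte is composed from the masks above per RFC 3610. The helper below is an illustrative sketch, not driver code; the masks are copied from the definitions above and `ccm_b0_flags` is a hypothetical name.

```c
#include <stdint.h>

/* Masks copied from the driver definitions above (CCM B_0 field) */
#define CCM_B0_ADATA          0x40
#define CCM_B0_M_PRIME        0x38
#define CCM_B0_M_PRIME_SHIFT  3
#define CCM_B0_L_PRIME        0x07

/*
 * Illustrative helper (not driver code): compose the CCM B_0 flags byte
 * per RFC 3610, where M' = (tag_len - 2) / 2 and L' = L - 1.
 */
uint8_t ccm_b0_flags(int has_adata, unsigned int tag_len, unsigned int l)
{
	uint8_t flags = 0;

	if (has_adata)
		flags |= CCM_B0_ADATA;
	flags |= (((tag_len - 2) / 2) << CCM_B0_M_PRIME_SHIFT) & CCM_B0_M_PRIME;
	flags |= (l - 1) & CCM_B0_L_PRIME;
	return flags;
}
```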
+static int
+spu2_cipher_type_xlate(enum rte_crypto_cipher_algorithm cipher_alg,
+ enum spu2_cipher_type *spu2_type,
+ struct fsattr *key)
+{
+ int ret = 0;
+ int key_size = fsattr_sz(key);
+
+ if (cipher_alg == RTE_CRYPTO_CIPHER_AES_XTS)
+ key_size = key_size / 2;
+
+ switch (key_size) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static int
+spu2_hash_xlate(enum rte_crypto_auth_algorithm auth_alg,
+ struct fsattr *key,
+ enum spu2_hash_type *spu2_type,
+ enum spu2_hash_mode *spu2_mode)
+{
+ *spu2_mode = 0;
+
+ switch (auth_alg) {
+ case RTE_CRYPTO_AUTH_NULL:
+ *spu2_type = SPU2_HASH_TYPE_NONE;
+ break;
+ case RTE_CRYPTO_AUTH_MD5:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ break;
+ case RTE_CRYPTO_AUTH_MD5_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_MD5;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA1_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA1;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_224_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_224;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_256_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_256;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_384_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_384;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ break;
+ case RTE_CRYPTO_AUTH_SHA3_512_HMAC:
+ *spu2_type = SPU2_HASH_TYPE_SHA3_512;
+ *spu2_mode = SPU2_HASH_MODE_HMAC;
+ break;
+ case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_XCBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_CMAC:
+ *spu2_mode = SPU2_HASH_MODE_CMAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_GMAC:
+ *spu2_mode = SPU2_HASH_MODE_GCM;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+ *spu2_mode = SPU2_HASH_MODE_CBC_MAC;
+ switch (fsattr_sz(key)) {
+ case BCMFS_CRYPTO_AES128:
+ *spu2_type = SPU2_HASH_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ *spu2_type = SPU2_HASH_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ *spu2_type = SPU2_HASH_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+spu2_cipher_xlate(enum rte_crypto_cipher_algorithm cipher_alg,
+ struct fsattr *key,
+ enum spu2_cipher_type *spu2_type,
+ enum spu2_cipher_mode *spu2_mode)
+{
+ int ret = 0;
+
+ switch (cipher_alg) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ *spu2_type = SPU2_CIPHER_TYPE_NONE;
+ break;
+ case RTE_CRYPTO_CIPHER_DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_DES;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case RTE_CRYPTO_CIPHER_3DES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ *spu2_type = SPU2_CIPHER_TYPE_3DES;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ *spu2_mode = SPU2_CIPHER_MODE_CBC;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_ECB:
+ *spu2_mode = SPU2_CIPHER_MODE_ECB;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ *spu2_mode = SPU2_CIPHER_MODE_CTR;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ case RTE_CRYPTO_CIPHER_AES_XTS:
+ *spu2_mode = SPU2_CIPHER_MODE_XTS;
+ ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
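The key-size translation above has one subtlety: an AES-XTS key carries two keys back to back, so the AES variant is selected from half the key length. A standalone sketch of that rule (hypothetical names, sizes matching enum bcmfs_crypto_aes_cipher_key):

```c
#include <stddef.h>

/* Key sizes in bytes, matching enum bcmfs_crypto_aes_cipher_key */
enum aes_variant { AES_NONE = 0, AES128 = 16, AES192 = 24, AES256 = 32 };

/*
 * Sketch of the translation in spu2_cipher_type_xlate(): for XTS the
 * variant is chosen from half the supplied key size.
 */
enum aes_variant aes_variant_from_key(size_t key_size, int is_xts)
{
	if (is_xts)
		key_size /= 2;

	switch (key_size) {
	case 16:
		return AES128;
	case 24:
		return AES192;
	case 32:
		return AES256;
	default:
		return AES_NONE;
	}
}
```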
+static void
+spu2_fmd_ctrl0_write(struct spu2_fmd *fmd,
+ bool is_inbound, bool auth_first,
+ enum spu2_proto_sel protocol,
+ enum spu2_cipher_type cipher_type,
+ enum spu2_cipher_mode cipher_mode,
+ enum spu2_hash_type auth_type,
+ enum spu2_hash_mode auth_mode)
+{
+ uint64_t ctrl0 = 0;
+
+ if (cipher_type != SPU2_CIPHER_TYPE_NONE && !is_inbound)
+ ctrl0 |= SPU2_CIPH_ENCRYPT_EN;
+
+ ctrl0 |= ((uint64_t)cipher_type << SPU2_CIPH_TYPE_SHIFT) |
+ ((uint64_t)cipher_mode << SPU2_CIPH_MODE_SHIFT);
+
+ if (protocol != SPU2_PROTO_RESV)
+ ctrl0 |= (uint64_t)protocol << SPU2_PROTO_SEL_SHIFT;
+
+ if (auth_first)
+ ctrl0 |= SPU2_HASH_FIRST;
+
+ if (is_inbound && auth_type != SPU2_HASH_TYPE_NONE)
+ ctrl0 |= SPU2_CHK_TAG;
+
+ ctrl0 |= (((uint64_t)auth_type << SPU2_HASH_TYPE_SHIFT) |
+ ((uint64_t)auth_mode << SPU2_HASH_MODE_SHIFT));
+
+ fmd->ctrl0 = ctrl0;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl0:", &fmd->ctrl0, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl1_write(struct spu2_fmd *fmd, bool is_inbound,
+ uint64_t assoc_size, uint64_t auth_key_len,
+ uint64_t cipher_key_len, bool gen_iv, bool hash_iv,
+ bool return_iv, uint64_t ret_iv_len,
+ uint64_t ret_iv_offset, uint64_t cipher_iv_len,
+ uint64_t digest_size, bool return_payload, bool return_md)
+{
+ uint64_t ctrl1 = 0;
+
+ if (is_inbound && digest_size != 0)
+ ctrl1 |= SPU2_TAG_LOC;
+
+ if (assoc_size != 0)
+ ctrl1 |= SPU2_HAS_AAD2;
+
+ if (auth_key_len != 0)
+ ctrl1 |= ((auth_key_len << SPU2_HASH_KEY_LEN_SHIFT) &
+ SPU2_HASH_KEY_LEN);
+
+ if (cipher_key_len != 0)
+ ctrl1 |= ((cipher_key_len << SPU2_CIPH_KEY_LEN_SHIFT) &
+ SPU2_CIPH_KEY_LEN);
+
+ if (gen_iv)
+ ctrl1 |= SPU2_GENIV;
+
+ if (hash_iv)
+ ctrl1 |= SPU2_HASH_IV;
+
+ if (return_iv) {
+ ctrl1 |= SPU2_RET_IV;
+ ctrl1 |= ret_iv_len << SPU2_RET_IV_LEN_SHIFT;
+ ctrl1 |= ret_iv_offset << SPU2_IV_OFFSET_SHIFT;
+ }
+
+ ctrl1 |= ((cipher_iv_len << SPU2_IV_LEN_SHIFT) & SPU2_IV_LEN);
+
+ if (digest_size != 0) {
+ ctrl1 |= ((digest_size << SPU2_HASH_TAG_LEN_SHIFT) &
+ SPU2_HASH_TAG_LEN);
+ }
+
+	/*
+	 * Ask for the output packet to include FMD; keys and IVs do not
+	 * need to come back in OMD.
+	 */
+ if (return_md)
+ ctrl1 |= ((uint64_t)SPU2_RET_FMD_ONLY << SPU2_RETURN_MD_SHIFT);
+ else
+ ctrl1 |= ((uint64_t)SPU2_RET_NO_MD << SPU2_RETURN_MD_SHIFT);
+
+	/* The crypto API does not return assoc data, so AAD2 is not needed. */
+
+ if (return_payload)
+ ctrl1 |= SPU2_RETURN_PAY;
+
+ fmd->ctrl1 = ctrl1;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl1:", &fmd->ctrl1, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl2_write(struct spu2_fmd *fmd, uint64_t cipher_offset,
+ uint64_t auth_key_len __rte_unused,
+ uint64_t auth_iv_len __rte_unused,
+ uint64_t cipher_key_len __rte_unused,
+ uint64_t cipher_iv_len __rte_unused)
+{
+ uint64_t aad1_offset;
+ uint64_t aad2_offset;
+ uint16_t aad1_len = 0;
+ uint64_t payload_offset;
+
+	/* AAD1 offset is from the start of FD; FD length is always 0. */
+ aad1_offset = 0;
+
+ aad2_offset = aad1_offset;
+ payload_offset = cipher_offset;
+ fmd->ctrl2 = aad1_offset |
+ (aad1_len << SPU2_AAD1_LEN_SHIFT) |
+ (aad2_offset << SPU2_AAD2_OFFSET_SHIFT) |
+ (payload_offset << SPU2_PL_OFFSET_SHIFT);
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl2:", &fmd->ctrl2, sizeof(uint64_t));
+#endif
+}
+
+static void
+spu2_fmd_ctrl3_write(struct spu2_fmd *fmd, uint64_t payload_len)
+{
+ fmd->ctrl3 = payload_len & SPU2_PL_LEN;
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl3:", &fmd->ctrl3, sizeof(uint64_t));
+#endif
+}
+
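The four ctrl-word writers above rely on the mask/shift pairs defined at the top of the file staying consistent with each other. A minimal round-trip sketch (not driver code; masks copied from the ctrl1 definitions, `pack_ctrl1` is a hypothetical name) shows the packing scheme:

```c
#include <stdint.h>

/* Mask/shift pairs copied from the ctrl1 definitions above */
#define SPU2_HASH_KEY_LEN        0xFF00ULL
#define SPU2_HASH_KEY_LEN_SHIFT  8
#define SPU2_HASH_TAG_LEN        0x7F000000000000ULL
#define SPU2_HASH_TAG_LEN_SHIFT  48

/*
 * Illustrative round-trip (not driver code): pack a hash key length and
 * a digest (tag) length into a ctrl1-style word the way
 * spu2_fmd_ctrl1_write() does; the masks recover them unchanged.
 */
uint64_t pack_ctrl1(uint64_t key_len, uint64_t tag_len)
{
	uint64_t ctrl1 = 0;

	ctrl1 |= (key_len << SPU2_HASH_KEY_LEN_SHIFT) & SPU2_HASH_KEY_LEN;
	ctrl1 |= (tag_len << SPU2_HASH_TAG_LEN_SHIFT) & SPU2_HASH_TAG_LEN;
	return ctrl1;
}
```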
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_auth_algorithm a_alg,
+ enum rte_crypto_auth_operation auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *auth_key,
+ struct fsattr *iv)
+{
+ int ret;
+ uint64_t dst_size;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ enum spu2_hash_mode spu2_auth_mode;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ uint64_t auth_ksize = (auth_key != NULL) ? fsattr_sz(auth_key) : 0;
+ bool is_inbound = (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* one of dst or mac should not be NULL */
+ if (dst == NULL && mac == NULL)
+ return -EINVAL;
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_GENERATE && dst != NULL)
+ dst_size = fsattr_sz(dst);
+ else if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && mac != NULL)
+ dst_size = fsattr_sz(mac);
+ else
+ return -EINVAL;
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(a_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, SPU2_VAL_NONE,
+ auth_ksize, SPU2_VAL_NONE, false,
+ false, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, iv_size,
+ dst_size, SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+	/* Nothing for FMD2 */
+	memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key, fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+	/*
+	 * For an auth-verify operation, feed the input MAC data to the
+	 * SPU2 engine.
+	 */
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && mac != NULL) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(mac);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(mac);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+	/*
+	 * The output message carries the actual SPU2 output plus the
+	 * status packet, so dsts_count below is always 2.
+	 */
+ if (auth_op == RTE_CRYPTO_AUTH_OP_GENERATE) {
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+ } else {
+		/*
+		 * For an auth-verify operation, give the SPU2 engine a
+		 * dummy location for the hash it generates; SPU2 produces
+		 * the hash even when only verifying.
+		 */
+ sreq->msgs.dsts_addr[0] = sreq->dptr;
+ sreq->msgs.dsts_len[0] = fsattr_sz(mac);
+ }
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_cipher_algorithm calgo,
+ enum rte_crypto_cipher_operation cipher_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key, struct fsattr *iv)
+{
+ int ret = 0;
+ int src_index = 0;
+ struct spu2_fmd *fmd;
+ unsigned int xts_keylen;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ bool is_inbound = (cipher_op == RTE_CRYPTO_CIPHER_OP_DECRYPT);
+
+ if (src == NULL || dst == NULL || iv == NULL)
+ return -EINVAL;
+
+ fmd = &sreq->fmd;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(calgo, cipher_key,
+ &spu2_ciph_type, &spu2_ciph_mode);
+ if (ret)
+ return -EINVAL;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE,
+ SPU2_PROTO_RESV, spu2_ciph_type, spu2_ciph_mode,
+ SPU2_VAL_NONE, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl1_write(fmd, SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(cipher_key), false, false,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE,
+ fsattr_sz(iv), SPU2_VAL_NONE, SPU2_VAL_NONE,
+ SPU2_VAL_NONE);
+
+ /* Nothing for FMD2 */
+ memset(&fmd->ctrl2, 0, sizeof(uint64_t));
+
+ spu2_fmd_ctrl3_write(fmd, fsattr_sz(src));
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ if (calgo == RTE_CRYPTO_CIPHER_AES_XTS) {
+ xts_keylen = fsattr_sz(cipher_key) / 2;
+ memcpy(sreq->cipher_key,
+ (uint8_t *)fsattr_va(cipher_key) + xts_keylen,
+ xts_keylen);
+ memcpy(sreq->cipher_key + xts_keylen,
+ fsattr_va(cipher_key), xts_keylen);
+ } else {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+ }
+
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+ memcpy(sreq->iv,
+ fsattr_va(iv), fsattr_sz(iv));
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(iv);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+ sreq->msgs.srcs_count = src_index;
+
+	/*
+	 * The output message carries the actual SPU2 output plus the
+	 * status packet, so dsts_count below is always 2.
+	 */
+ sreq->msgs.dsts_addr[0] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[0] = fsattr_sz(dst);
+
+ sreq->msgs.dsts_addr[1] = sreq->rptr;
+ sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN;
+ sreq->msgs.dsts_count = 2;
+
+ return 0;
+}
+
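The AES-XTS branch above copies the key halves in swapped order because SPU2 expects key2 followed by key1. A standalone sketch of that reordering (hypothetical name, not driver code):

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/*
 * Sketch of the AES-XTS key handling in bcmfs_crypto_build_cipher_req():
 * SPU2 expects the key halves swapped, i.e. key2 followed by key1.
 */
void xts_reorder_key(uint8_t *dst, const uint8_t *src, size_t len)
{
	size_t half = len / 2;

	memcpy(dst, src + half, half);	/* key2 first */
	memcpy(dst + half, src, half);	/* then key1 */
}
```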
+int
+bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_cipher_algorithm cipher_alg,
+ enum rte_crypto_cipher_operation cipher_op __rte_unused,
+ enum rte_crypto_auth_algorithm auth_alg,
+ enum rte_crypto_auth_operation auth_op,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first)
+{
+ int ret = 0;
+ int src_index = 0;
+ int dst_index = 0;
+ bool auth_first = 0;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ uint64_t iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ uint64_t auth_ksize = (auth_key != NULL) ?
+ fsattr_sz(auth_key) : 0;
+ uint64_t cipher_ksize = (cipher_key != NULL) ?
+ fsattr_sz(cipher_key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ bool is_inbound = (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ /* spu2 hash algorithm and hash algorithm mode */
+ ret = spu2_hash_xlate(auth_alg, auth_key, &spu2_auth_type,
+ &spu2_auth_mode);
+ if (ret)
+ return -EINVAL;
+
+ /* spu2 cipher algorithm and cipher algorithm mode */
+ ret = spu2_cipher_xlate(cipher_alg, cipher_key, &spu2_ciph_type,
+ &spu2_ciph_mode);
+ if (ret) {
+ BCMFS_DP_LOG(ERR, "cipher xlate error");
+ return -EINVAL;
+ }
+
+ auth_first = cipher_first ? 0 : 1;
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, auth_ksize,
+ cipher_ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, auth_ksize, 0,
+ cipher_ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (auth_key != NULL && fsattr_sz(auth_key) != 0) {
+ memcpy(sreq->auth_key,
+ fsattr_va(auth_key), fsattr_sz(auth_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key),
+ fsattr_sz(auth_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->aptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key);
+ src_index++;
+ }
+
+ if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(cipher_key), fsattr_sz(cipher_key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key),
+ fsattr_sz(cipher_key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
+		/*
+		 * On verify, the SPU2 engine still generates digest data,
+		 * but the application does not consume it, so point SPU2
+		 * at a dummy location to capture it.
+		 */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+			sreq->msgs.dsts_addr[dst_index] = sreq->dptr;
+			sreq->msgs.dsts_len[dst_index] = fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+			sreq->msgs.dsts_addr[dst_index] = fsattr_pa(digest);
+			sreq->msgs.dsts_len[dst_index] = fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
+
+static void
+bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf,
+ unsigned int *ivlen, bool is_esp)
+{
+ int L; /* size of length field, in bytes */
+
+	/*
+	 * In RFC 4309 (ESP) mode, L is fixed at 4 bytes; otherwise the
+	 * supplied IV carries (L - 1) in the bottom 3 bits of its first
+	 * byte, per RFC 3610.
+	 */
+ if (is_esp)
+ L = CCM_ESP_L_VALUE;
+ else
+ L = ((ivbuf[0] & CCM_B0_L_PRIME) >>
+ CCM_B0_L_PRIME_SHIFT) + 1;
+
+	/* SPU2 wants neither the first (flags) byte nor the trailing length bytes */
+ *ivlen -= (1 + L);
+ memmove(ivbuf, &ivbuf[1], *ivlen);
+}
+
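The trimming above can be exercised in isolation. The sketch below (hypothetical name, not driver code) replicates the non-ESP path: read L from the bottom 3 bits of the flags byte, then drop the flags byte and the L trailing counter bytes.

```c
#include <string.h>

/*
 * Standalone sketch of bcmfs_crypto_ccm_update_iv() for the non-ESP
 * case: the first (flags) byte encodes L - 1 in its bottom 3 bits
 * (RFC 3610); SPU2 wants it and the L length bytes removed.
 */
unsigned int ccm_trim_iv(unsigned char *iv, unsigned int ivlen)
{
	unsigned int l = (iv[0] & 0x07) + 1;	/* L from flags byte */

	ivlen -= (1 + l);
	memmove(iv, &iv[1], ivlen);
	return ivlen;
}
```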
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq,
+ enum rte_crypto_aead_algorithm ae_algo,
+ enum rte_crypto_aead_operation aeop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv,
+ struct fsattr *aad, struct fsattr *digest)
+{
+ int src_index = 0;
+ int dst_index = 0;
+ bool auth_first = 0;
+ struct spu2_fmd *fmd;
+ uint64_t payload_len;
+ uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0;
+ unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0;
+ enum spu2_cipher_mode spu2_ciph_mode = 0;
+ enum spu2_hash_mode spu2_auth_mode = 0;
+ enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE;
+ enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE;
+ uint64_t ksize = (key != NULL) ? fsattr_sz(key) : 0;
+ uint64_t digest_size = (digest != NULL) ?
+ fsattr_sz(digest) : 0;
+ bool is_inbound = (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT);
+
+ if (src == NULL)
+ return -EINVAL;
+
+ payload_len = fsattr_sz(src);
+ if (!payload_len) {
+ BCMFS_DP_LOG(ERR, "null payload not supported");
+ return -EINVAL;
+ }
+
+ switch (ksize) {
+ case BCMFS_CRYPTO_AES128:
+ spu2_auth_type = SPU2_HASH_TYPE_AES128;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES128;
+ break;
+ case BCMFS_CRYPTO_AES192:
+ spu2_auth_type = SPU2_HASH_TYPE_AES192;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES192;
+ break;
+ case BCMFS_CRYPTO_AES256:
+ spu2_auth_type = SPU2_HASH_TYPE_AES256;
+ spu2_ciph_type = SPU2_CIPHER_TYPE_AES256;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (ae_algo == RTE_CRYPTO_AEAD_AES_GCM) {
+ spu2_auth_mode = SPU2_HASH_MODE_GCM;
+ spu2_ciph_mode = SPU2_CIPHER_MODE_GCM;
+		/*
+		 * SPU2 needs 12 bytes of IV in total: an 8-byte IV (random
+		 * number) plus 4 bytes of salt.
+		 */
+ if (fsattr_sz(iv) > 12)
+ iv_size = 12;
+
+		/*
+		 * On SPU2, AES-GCM runs cipher first on encrypt and auth
+		 * first on decrypt.
+		 */
+		auth_first = (aeop == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+				0 : 1;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0)
+ memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv));
+
+ if (ae_algo == RTE_CRYPTO_AEAD_AES_CCM) {
+ spu2_auth_mode = SPU2_HASH_MODE_CCM;
+ spu2_ciph_mode = SPU2_CIPHER_MODE_CCM;
+ if (iv != NULL) {
+ memcpy(sreq->iv, fsattr_va(iv),
+ fsattr_sz(iv));
+ iv_size = fsattr_sz(iv);
+ bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false);
+ }
+
+		/* CCM is the opposite: auth first on encrypt */
+ auth_first = (aeop == RTE_CRYPTO_AEAD_OP_ENCRYPT) ?
+ 0 : 1;
+ }
+
+ fmd = &sreq->fmd;
+
+ spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV,
+ spu2_ciph_type, spu2_ciph_mode,
+ spu2_auth_type, spu2_auth_mode);
+
+ spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, 0,
+ ksize, false, false, SPU2_VAL_NONE,
+ SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size,
+ digest_size, false, SPU2_VAL_NONE);
+
+ spu2_fmd_ctrl2_write(fmd, aad_size, 0, 0,
+ ksize, iv_size);
+
+ spu2_fmd_ctrl3_write(fmd, payload_len);
+
+ /* Source metadata and data pointers */
+ sreq->msgs.srcs_addr[src_index] = sreq->fptr;
+ sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd);
+ src_index++;
+
+ if (key != NULL && fsattr_sz(key) != 0) {
+ memcpy(sreq->cipher_key,
+ fsattr_va(key), fsattr_sz(key));
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(key),
+ fsattr_sz(key));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->cptr;
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(key);
+ src_index++;
+ }
+
+ if (iv != NULL && fsattr_sz(iv) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv),
+ fsattr_sz(iv));
+#endif
+ sreq->msgs.srcs_addr[src_index] = sreq->iptr;
+ sreq->msgs.srcs_len[src_index] = iv_size;
+ src_index++;
+ }
+
+ if (aad != NULL && fsattr_sz(aad) != 0) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+ BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad),
+ fsattr_sz(aad));
+#endif
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(aad);
+ src_index++;
+ }
+
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(src);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(src);
+ src_index++;
+
+ if (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT && digest != NULL &&
+ fsattr_sz(digest) != 0) {
+ sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest);
+ sreq->msgs.srcs_len[src_index] = fsattr_sz(digest);
+ src_index++;
+ }
+ sreq->msgs.srcs_count = src_index;
+
+ if (dst != NULL) {
+ sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst);
+ sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst);
+ dst_index++;
+ }
+
+ if (aeop == RTE_CRYPTO_AEAD_OP_DECRYPT) {
+		/*
+		 * On decrypt, the SPU2 engine still generates digest data,
+		 * but the application does not consume it, so point SPU2
+		 * at a dummy location to capture it.
+		 */
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+			sreq->msgs.dsts_addr[dst_index] = sreq->dptr;
+			sreq->msgs.dsts_len[dst_index] = fsattr_sz(digest);
+ dst_index++;
+ }
+ } else {
+ if (digest != NULL && fsattr_sz(digest) != 0) {
+			sreq->msgs.dsts_addr[dst_index] = fsattr_pa(digest);
+			sreq->msgs.dsts_len[dst_index] = fsattr_sz(digest);
+ dst_index++;
+ }
+ }
+
+ sreq->msgs.dsts_addr[dst_index] = sreq->rptr;
+ sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN;
+ dst_index++;
+ sreq->msgs.dsts_count = dst_index;
+
+ return 0;
+}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
new file mode 100644
index 0000000000..d9594246b5
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_SYM_ENGINE_H_
+#define _BCMFS_SYM_ENGINE_H_
+
+#include <rte_crypto_sym.h>
+
+#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+#include "bcmfs_sym_req.h"
+
+/* structure to hold an element's attributes */
+struct fsattr {
+ void *va;
+ uint64_t pa;
+ uint64_t sz;
+};
+
+#define fsattr_va(__ptr) ((__ptr)->va)
+#define fsattr_pa(__ptr) ((__ptr)->pa)
+#define fsattr_sz(__ptr) ((__ptr)->sz)
+
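The fsattr accessors above give a uniform way to describe any buffer handed to the engine. A minimal usage sketch (the struct and macros are mirrored from this header; `make_attr` and the IO address are hypothetical):

```c
#include <stdint.h>

/* Mirrors struct fsattr and its accessor macros from the header above */
struct fsattr {
	void *va;
	uint64_t pa;
	uint64_t sz;
};

#define fsattr_va(__ptr) ((__ptr)->va)
#define fsattr_pa(__ptr) ((__ptr)->pa)
#define fsattr_sz(__ptr) ((__ptr)->sz)

/* Describe a flat buffer; io_addr stands in for a real IOVA here */
struct fsattr make_attr(void *buf, uint64_t io_addr, uint64_t size)
{
	struct fsattr at = { .va = buf, .pa = io_addr, .sz = size };

	return at;
}
```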
+/*
+ * Macros for Crypto h/w constraints
+ */
+
+#define BCMFS_CRYPTO_AES_BLOCK_SIZE 16
+#define BCMFS_CRYPTO_AES_MIN_KEY_SIZE 16
+#define BCMFS_CRYPTO_AES_MAX_KEY_SIZE 32
+
+#define BCMFS_CRYPTO_DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_DES_KEY_SIZE 8
+
+#define BCMFS_CRYPTO_3DES_BLOCK_SIZE 8
+#define BCMFS_CRYPTO_3DES_KEY_SIZE (3 * 8)
+
+#define BCMFS_CRYPTO_MD5_DIGEST_SIZE 16
+#define BCMFS_CRYPTO_MD5_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA1_DIGEST_SIZE 20
+#define BCMFS_CRYPTO_SHA1_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA224_DIGEST_SIZE 28
+#define BCMFS_CRYPTO_SHA224_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA256_DIGEST_SIZE 32
+#define BCMFS_CRYPTO_SHA256_BLOCK_SIZE 64
+
+#define BCMFS_CRYPTO_SHA384_DIGEST_SIZE 48
+#define BCMFS_CRYPTO_SHA384_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA512_DIGEST_SIZE 64
+#define BCMFS_CRYPTO_SHA512_BLOCK_SIZE 128
+
+#define BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE (224 / 8)
+#define BCMFS_CRYPTO_SHA3_224_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE (256 / 8)
+#define BCMFS_CRYPTO_SHA3_256_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE (384 / 8)
+#define BCMFS_CRYPTO_SHA3_384_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE)
+
+#define BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE (512 / 8)
+#define BCMFS_CRYPTO_SHA3_512_BLOCK_SIZE (200 - 2 * \
+ BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE)
+
+enum bcmfs_crypto_aes_cipher_key {
+ BCMFS_CRYPTO_AES128 = 16,
+ BCMFS_CRYPTO_AES192 = 24,
+ BCMFS_CRYPTO_AES256 = 32,
+};
+
+int
+bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *req,
+ enum rte_crypto_cipher_algorithm c_algo,
+ enum rte_crypto_cipher_operation cop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv);
+
+int
+bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *req,
+ enum rte_crypto_auth_algorithm a_algo,
+ enum rte_crypto_auth_operation aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *mac, struct fsattr *key,
+ struct fsattr *iv);
+
+int
+bcmfs_crypto_build_chain_request(struct bcmfs_sym_request *req,
+ enum rte_crypto_cipher_algorithm c_algo,
+ enum rte_crypto_cipher_operation cop,
+ enum rte_crypto_auth_algorithm a_algo,
+ enum rte_crypto_auth_operation aop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *cipher_key,
+ struct fsattr *auth_key,
+ struct fsattr *iv, struct fsattr *aad,
+ struct fsattr *digest, bool cipher_first);
+
+int
+bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *req,
+ enum rte_crypto_aead_algorithm ae_algo,
+ enum rte_crypto_aead_operation aeop,
+ struct fsattr *src, struct fsattr *dst,
+ struct fsattr *key, struct fsattr *iv,
+ struct fsattr *aad, struct fsattr *digest);
+
+#endif /* _BCMFS_SYM_ENGINE_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 381ca8ea48..568797b4fd 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -132,6 +132,12 @@ static void
-spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused)
+spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova)
{
memset(sr, 0, sizeof(*sr));
+ sr->fptr = iova;
+ sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key);
+ sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key);
+ sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv);
+ sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest);
+ sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp);
}
static void
@@ -244,6 +250,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
uint16_t nb_ops)
{
int i, j;
+ int retval;
uint16_t enq = 0;
struct bcmfs_sym_request *sreq;
struct bcmfs_sym_session *sess;
@@ -273,6 +280,11 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
/* save context */
qp->infl_msgs[i] = &sreq->msgs;
qp->infl_msgs[i]->ctx = (void *)sreq;
+
+ /* pre-process the request for crypto h/w acceleration */
+ retval = bcmfs_process_sym_crypto_op(ops[i], sess, sreq);
+ if (unlikely(retval < 0))
+ goto enqueue_err;
}
/* Send burst request to hw QP */
enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i);
@@ -289,6 +301,17 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair,
return enq;
}
+static void bcmfs_sym_set_request_status(struct rte_crypto_op *op,
+ struct bcmfs_sym_request *out)
+{
+ if (*out->resp == BCMFS_SYM_RESPONSE_SUCCESS)
+ op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ else if (*out->resp == BCMFS_SYM_RESPONSE_HASH_TAG_ERROR)
+ op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+ else
+ op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
static uint16_t
bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
struct rte_crypto_op **ops,
@@ -308,6 +331,9 @@ bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair,
for (i = 0; i < deq; i++) {
sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx;
+ /* set the status based on the response from the crypto h/w */
+ bcmfs_sym_set_request_status(sreq->op, sreq);
+
ops[pkts++] = sreq->op;
rte_mempool_put(qp->sr_mp, sreq);
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h
index 0f0b051f1e..e53c50adc1 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_req.h
+++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h
@@ -6,13 +6,53 @@
#ifndef _BCMFS_SYM_REQ_H_
#define _BCMFS_SYM_REQ_H_
+#include <rte_cryptodev.h>
+
#include "bcmfs_dev_msg.h"
+#include "bcmfs_sym_defs.h"
+
+/* Fixed SPU2 Metadata */
+struct spu2_fmd {
+ uint64_t ctrl0;
+ uint64_t ctrl1;
+ uint64_t ctrl2;
+ uint64_t ctrl3;
+};
/*
* This structure holds the supporting data required to process a
* rte_crypto_op
*/
struct bcmfs_sym_request {
+ /* spu2 engine related data */
+ struct spu2_fmd fmd;
+ /* cipher key */
+ uint8_t cipher_key[BCMFS_MAX_KEY_SIZE];
+ /* auth key */
+ uint8_t auth_key[BCMFS_MAX_KEY_SIZE];
+ /* iv key */
+ uint8_t iv[BCMFS_MAX_IV_SIZE];
+ /* digest data output from crypto h/w */
+ uint8_t digest[BCMFS_MAX_DIGEST_SIZE];
+ /* 2-Bytes response from crypto h/w */
+ uint8_t resp[2];
+ /* IOVAs for the members above, in the same order */
+ /* iova for fmd */
+ rte_iova_t fptr;
+ /* iova for cipher key */
+ rte_iova_t cptr;
+ /* iova for auth key */
+ rte_iova_t aptr;
+ /* iova for iv key */
+ rte_iova_t iptr;
+ /* iova for digest */
+ rte_iova_t dptr;
+ /* iova for response */
+ rte_iova_t rptr;
+
/* bcmfs qp message for h/w queues to process */
struct bcmfs_qp_message msgs;
/* crypto op */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index 2e86c733e1..7aa0f05dbd 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -14,5 +14,7 @@ sources = files(
'hw/bcmfs_rm_common.c',
'bcmfs_sym_pmd.c',
'bcmfs_sym_capabilities.c',
- 'bcmfs_sym_session.c'
+ 'bcmfs_sym_session.c',
+ 'bcmfs_sym.c',
+ 'bcmfs_sym_engine.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 75+ messages in thread
* [dpdk-dev] [PATCH v5 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (6 preceding siblings ...)
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 7/8] crypto/bcmfs: add crypto HW module Vikas Gupta
@ 2020-10-07 17:19 ` Vikas Gupta
2020-10-09 15:00 ` Akhil Goyal
2020-10-09 15:00 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
8 siblings, 1 reply; 75+ messages in thread
From: Vikas Gupta @ 2020-10-07 17:19 UTC (permalink / raw)
To: dev, akhil.goyal; +Cc: vikram.prakash, Vikas Gupta, Raveendra Padasalagi
Add global test suite for bcmfs crypto pmd
Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
app/test/test_cryptodev.c | 17 +++++++++++++++++
app/test/test_cryptodev.h | 1 +
doc/guides/cryptodevs/bcmfs.rst | 11 +++++++++++
3 files changed, 29 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c1..9157115ab3 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -13041,6 +13041,22 @@ test_cryptodev_nitrox(void)
return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
}
+static int
+test_cryptodev_bcmfs(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_BCMFS_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "BCMFS PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_BCMFS is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
+ return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
@@ -13063,3 +13079,4 @@ REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
+REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e0552..c58126368c 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -70,6 +70,7 @@
#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
+#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
/**
* Write (spread) data from buffer to mbuf data
diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
index f7e15f4cfb..5a7eb23c0f 100644
--- a/doc/guides/cryptodevs/bcmfs.rst
+++ b/doc/guides/cryptodevs/bcmfs.rst
@@ -96,3 +96,14 @@ Limitations
* Only supports the session-oriented API implementation (session-less APIs are not supported).
* CCM is not supported on Broadcom`s SoCs having FlexSparc4 unit.
+
+Testing
+-------
+
+The symmetric crypto operations on BCMFS crypto PMD may be verified by running the test
+application:
+
+.. code-block:: console
+
+ ./test
+ RTE>>cryptodev_bcmfs_autotest
--
2.17.1
* Re: [dpdk-dev] [PATCH v5 8/8] crypto/bcmfs: add crypto pmd into cryptodev test
2020-10-07 17:19 ` [dpdk-dev] [PATCH v5 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-10-09 15:00 ` Akhil Goyal
0 siblings, 0 replies; 75+ messages in thread
From: Akhil Goyal @ 2020-10-09 15:00 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash, Raveendra Padasalagi
Hi Vikas,
>
> Add global test suite for bcmfs crypto pmd
>
> Signed-off-by: Vikas Gupta <vikas.gupta@broadcom.com>
> Signed-off-by: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
> app/test/test_cryptodev.c | 17 +++++++++++++++++
> app/test/test_cryptodev.h | 1 +
> doc/guides/cryptodevs/bcmfs.rst | 11 +++++++++++
> 3 files changed, 29 insertions(+)
>
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index 70bf6fe2c1..9157115ab3 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -13041,6 +13041,22 @@ test_cryptodev_nitrox(void)
> return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
> }
>
> +static int
> +test_cryptodev_bcmfs(void)
> +{
> + gbl_driver_id = rte_cryptodev_driver_id_get(
> + RTE_STR(CRYPTODEV_NAME_BCMFS_PMD));
> +
> + if (gbl_driver_id == -1) {
> + RTE_LOG(ERR, USER1, "BCMFS PMD must be loaded. Check if "
> + "CONFIG_RTE_LIBRTE_PMD_BCMFS is enabled "
> + "in config file to run this testsuite.\n");
Modified this LOG print message to remove the config. Configs are no longer used.
> + return TEST_FAILED;
> + }
> +
> + return unit_test_suite_runner(&cryptodev_testsuite);
> +}
> +
> REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
> REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
> REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
> @@ -13063,3 +13079,4 @@ REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
> REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
> REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
> REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
> +REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
> diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
> index 41542e0552..c58126368c 100644
> --- a/app/test/test_cryptodev.h
> +++ b/app/test/test_cryptodev.h
> @@ -70,6 +70,7 @@
> #define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
> #define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
> #define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
> +#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
>
> /**
> * Write (spread) data from buffer to mbuf data
> diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst
> index f7e15f4cfb..5a7eb23c0f 100644
> --- a/doc/guides/cryptodevs/bcmfs.rst
> +++ b/doc/guides/cryptodevs/bcmfs.rst
> @@ -96,3 +96,14 @@ Limitations
>
> * Only supports the session-oriented API implementation (session-less APIs are not supported).
> * CCM is not supported on Broadcom`s SoCs having FlexSparc4 unit.
> +
> +Testing
> +-------
> +
> +The symmetric crypto operations on BCMFS crypto PMD may be verified by running the test
> +application:
> +
> +.. code-block:: console
> +
> + ./test
This has been changed to ./dpdk-test
> + RTE>>cryptodev_bcmfs_autotest
> --
> 2.17.1
* Re: [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
` (7 preceding siblings ...)
2020-10-07 17:19 ` [dpdk-dev] [PATCH v5 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
@ 2020-10-09 15:00 ` Akhil Goyal
2020-10-16 2:15 ` Thomas Monjalon
8 siblings, 1 reply; 75+ messages in thread
From: Akhil Goyal @ 2020-10-09 15:00 UTC (permalink / raw)
To: Vikas Gupta, dev; +Cc: vikram.prakash
> Hi,
> This patchset contains support for Crypto offload on Broadcom’s
> Stingray/Stingray2 SoCs having FlexSparc unit.
> BCMFS is an acronym for Broadcom FlexSparc device used in the patchset.
>
> The patchset progressively adds major modules as below.
> a) Detection of platform-device based on the known registered platforms and
> attaching with VFIO.
> b) Creation of Cryptodevice.
> c) Addition of session handling.
> d) Add Cryptodevice into test Cryptodev framework.
>
> The patchset has been tested on the above mentioned SoCs.
>
> Regards,
> Vikas
>
> Changes from v0->v1:
> Updated the ABI version in
> file .../crypto/bcmfs/rte_pmd_bcmfs_version.map
>
> Changes from v1->v2:
> - Fix compilation errors and coding style warnings.
> - Use global test crypto suite suggested by Adam Dybkowski
>
> Changes from v2->v3:
> - Release notes updated.
> - bcmfs.rst updated with missing information about installation.
> - Review comments from patch1 from v2 addressed.
> - Updated description about dependency of PMD driver on
> VFIO_PRESENT.
> - Fixed typo in bcmfs_hw_defs.h (comments on patch3 from v2
> addressed)
> - Comments on patch6 from v2 addressed and capability list is fixed.
> Removed redundant enums and macros from the file
> bcmfs_sym_defs.h and updated other impacted APIs
> accordingly.
> patch7 too is updated due to removal of redundancy.
> Thanks! to Akhil for pointing out the redundancy.
> - Fix minor code style issues in few files as part of review.
>
> Changes from v3->v4:
> - Code style issues fixed.
> - Change of barrier API in bcmfs4_rm.c and bcmfs5_rm.c
>
> Changes from v4->v5:
> - Change of barrier API in bcmfs4_rm.c. Missed one in v4
>
Series applied to dpdk-next-crypto with 2 fixes as mentioned in the last patch.
Thanks.
* Re: [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
@ 2020-10-15 0:50 ` Thomas Monjalon
2020-10-15 0:55 ` Thomas Monjalon
1 sibling, 0 replies; 75+ messages in thread
From: Thomas Monjalon @ 2020-10-15 0:50 UTC (permalink / raw)
To: akhil.goyal, Vikas Gupta, Raveendra Padasalagi, ajit.khaparde
Cc: dev, vikram.prakash
07/10/2020 19:18, Vikas Gupta:
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1081,6 +1081,13 @@ F: drivers/crypto/zuc/
> F: doc/guides/cryptodevs/zuc.rst
> F: doc/guides/cryptodevs/features/zuc.ini
>
Drivers are alphabetically sorted.
> +Broadcom FlexSparc
> +M: Ajit Khaparde <ajit.khaparde@broadcom.com>
> +M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
> +M: Vikas Gupta <vikas.gupta@@broadcom.com>
Nice trick for not being disturbed with emails: @@
> +F: drivers/crypto/bcmfs/
> +F: doc/guides/cryptodevs/bcmfs.rst
> +F: doc/guides/cryptodevs/features/bcmfs.ini
Will fix while pulling next-crypto.
* Re: [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-15 0:50 ` Thomas Monjalon
@ 2020-10-15 0:55 ` Thomas Monjalon
1 sibling, 0 replies; 75+ messages in thread
From: Thomas Monjalon @ 2020-10-15 0:55 UTC (permalink / raw)
To: akhil.goyal, Raveendra Padasalagi, Vikas Gupta, ajit.khaparde
Cc: dev, vikram.prakash, mdr
07/10/2020 19:18, Vikas Gupta:
> --- /dev/null
> +++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map
> @@ -0,0 +1,3 @@
> +DPDK_21.0 {
> + local: *;
> +};
No!
Please be careful, all other libs use ABI DPDK_21.
Will fix
* Re: [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices
2020-10-09 15:00 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
@ 2020-10-16 2:15 ` Thomas Monjalon
0 siblings, 0 replies; 75+ messages in thread
From: Thomas Monjalon @ 2020-10-16 2:15 UTC (permalink / raw)
To: Vikas Gupta, raveendra.padasalagi, ajit.khaparde; +Cc: dev, Akhil Goyal
09/10/2020 17:00, Akhil Goyal:
> > This patchset contains support for Crypto offload on Broadcom’s
> > Stingray/Stingray2 SoCs having FlexSparc unit.
[...]
> Series applied to dpdk-next-crypto with 2 fixes as mentioned in the last patch.
There are 2 errors in the doc:
Warning generate_overview_table(): Unknown feature 'AES CBC MAC' in 'bcmfs.ini'
Warning generate_overview_table(): File 'bcmfs.ini' has no [Asymmetric] secton
Please send a patch to fix.
end of thread, other threads:[~2020-10-16 2:15 UTC | newest]
Thread overview: 75+ messages
2020-08-11 14:58 [dpdk-dev] [PATCH 0 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 2/8] crypto/bcmfs: add vfio support Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
2020-08-11 14:58 ` [dpdk-dev] [PATCH 0 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 2/8] crypto/bcmfs: add vfio support Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
2020-08-12 6:31 ` [dpdk-dev] [PATCH v1 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-08-12 13:44 ` Dybkowski, AdamX
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-09-28 18:49 ` Akhil Goyal
2020-09-29 10:52 ` Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support Vikas Gupta
2020-09-28 19:00 ` Akhil Goyal
2020-09-29 11:01 ` Vikas Gupta
2020-09-29 12:39 ` Akhil Goyal
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
2020-09-28 19:29 ` Akhil Goyal
2020-09-29 11:04 ` Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
2020-09-28 19:46 ` Akhil Goyal
2020-09-29 11:12 ` Vikas Gupta
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
2020-09-28 20:00 ` Akhil Goyal
2020-08-13 17:23 ` [dpdk-dev] [PATCH v2 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-09-28 20:01 ` Akhil Goyal
2020-09-28 20:06 ` [dpdk-dev] [PATCH v2 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
2020-10-05 15:39 ` Akhil Goyal
2020-10-05 16:46 ` Ajit Khaparde
2020-10-05 17:01 ` Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 " Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 2/8] crypto/bcmfs: add vfio support Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 3/8] crypto/bcmfs: add apis for queue pair management Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 4/8] crypto/bcmfs: add hw queue pair operations Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 7/8] crypto/bcmfs: add crypto h/w module Vikas Gupta
2020-10-05 16:26 ` [dpdk-dev] [PATCH v3 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 2/8] crypto/bcmfs: add vfio support Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 3/8] crypto/bcmfs: add queue pair management API Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 4/8] crypto/bcmfs: add HW queue pair operations Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 7/8] crypto/bcmfs: add crypto HW module Vikas Gupta
2020-10-07 16:45 ` [dpdk-dev] [PATCH v4 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 1/8] crypto/bcmfs: add BCMFS driver Vikas Gupta
2020-10-15 0:50 ` Thomas Monjalon
2020-10-15 0:55 ` Thomas Monjalon
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 2/8] crypto/bcmfs: add vfio support Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 3/8] crypto/bcmfs: add queue pair management API Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 4/8] crypto/bcmfs: add HW queue pair operations Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 5/8] crypto/bcmfs: create a symmetric cryptodev Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 6/8] crypto/bcmfs: add session handling and capabilities Vikas Gupta
2020-10-07 17:18 ` [dpdk-dev] [PATCH v5 7/8] crypto/bcmfs: add crypto HW module Vikas Gupta
2020-10-07 17:19 ` [dpdk-dev] [PATCH v5 8/8] crypto/bcmfs: add crypto pmd into cryptodev test Vikas Gupta
2020-10-09 15:00 ` Akhil Goyal
2020-10-09 15:00 ` [dpdk-dev] [PATCH v5 0/8] Add Crypto PMD for Broadcom`s FlexSparc devices Akhil Goyal
2020-10-16 2:15 ` Thomas Monjalon