* [v1 00/16] crypto/virtio: vDPA and asymmetric support
@ 2024-12-24 7:36 Gowrishankar Muthukrishnan
2024-12-24 7:36 ` [v1 01/16] vhost: include AKCIPHER algorithms in crypto_config Gowrishankar Muthukrishnan
` (19 more replies)
0 siblings, 20 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:36 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
This series introduces vDPA backend support in the virtio crypto PMD.
It also adds asymmetric RSA support.
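Once applied, the asymmetric test suite registered in this series
(patch 06) can be exercised from dpdk-test; a minimal usage sketch,
assuming the usual interactive dpdk-test workflow:

    $ ./build/app/dpdk-test
    RTE>> cryptodev_virtio_asym_autotest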
Gowrishankar Muthukrishnan (14):
vhost: include AKCIPHER algorithms in crypto_config
crypto/virtio: add asymmetric RSA support
test/crypto: check for RSA capability
test/crypto: add asymmetric tests for virtio PMD
vhost: add asymmetric RSA support
examples/vhost_crypto: add asymmetric support
crypto/virtio: fix dataqueues iteration
crypto/virtio: refactor queue operations
crypto/virtio: add packed ring support
common/virtio: common virtio log
common/virtio: move vDPA to common directory
common/virtio: support cryptodev in vdev setup
crypto/virtio: add vhost backend to virtio_user
test/crypto: test virtio_crypto_user PMD
Rajesh Mudimadugula (2):
crypto/virtio: remove redundant crypto queue free
test/crypto: return proper codes in create session
.mailmap | 1 +
app/test/test_cryptodev.c | 45 +-
app/test/test_cryptodev.h | 1 +
app/test/test_cryptodev_asym.c | 68 +
app/test/test_cryptodev_rsa_test_vectors.h | 4 +
drivers/common/virtio/meson.build | 13 +
drivers/common/virtio/version.map | 9 +
drivers/{net => common}/virtio/virtio_logs.h | 16 +-
.../virtio/virtio_user/vhost.h | 2 -
.../virtio/virtio_user/vhost_vdpa.c | 31 +-
drivers/crypto/virtio/meson.build | 11 +-
drivers/crypto/virtio/virtio_crypto_algs.h | 2 +-
.../virtio/virtio_crypto_capabilities.h | 19 +
.../{virtio_logs.h => virtio_crypto_logs.h} | 30 +-
drivers/crypto/virtio/virtio_cryptodev.c | 1105 +++++++++++------
drivers/crypto/virtio/virtio_cryptodev.h | 16 +-
drivers/crypto/virtio/virtio_cvq.c | 229 ++++
drivers/crypto/virtio/virtio_cvq.h | 33 +
drivers/crypto/virtio/virtio_pci.h | 38 +-
drivers/crypto/virtio/virtio_ring.h | 65 +-
drivers/crypto/virtio/virtio_rxtx.c | 707 ++++++++++-
drivers/crypto/virtio/virtio_rxtx.h | 13 +
.../crypto/virtio/virtio_user/vhost_vdpa.c | 310 +++++
.../virtio/virtio_user/virtio_user_dev.c | 774 ++++++++++++
.../virtio/virtio_user/virtio_user_dev.h | 88 ++
drivers/crypto/virtio/virtio_user_cryptodev.c | 586 +++++++++
drivers/crypto/virtio/virtqueue.c | 229 +++-
drivers/crypto/virtio/virtqueue.h | 223 +++-
drivers/meson.build | 1 +
drivers/net/virtio/meson.build | 4 +-
drivers/net/virtio/virtio.c | 3 +-
drivers/net/virtio/virtio_ethdev.c | 5 +-
drivers/net/virtio/virtio_net_logs.h | 30 +
drivers/net/virtio/virtio_pci.c | 3 +-
drivers/net/virtio/virtio_pci_ethdev.c | 3 +-
drivers/net/virtio/virtio_rxtx.c | 3 +-
drivers/net/virtio/virtio_rxtx_packed.c | 3 +-
drivers/net/virtio/virtio_rxtx_packed.h | 3 +-
drivers/net/virtio/virtio_rxtx_packed_avx.h | 3 +-
drivers/net/virtio/virtio_rxtx_simple.h | 3 +-
drivers/net/virtio/virtio_user/vhost_kernel.c | 4 +-
.../net/virtio/virtio_user/vhost_kernel_tap.c | 3 +-
drivers/net/virtio/virtio_user/vhost_user.c | 2 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 6 +-
.../net/virtio/virtio_user/virtio_user_dev.h | 24 +-
drivers/net/virtio/virtio_user_ethdev.c | 5 +-
drivers/net/virtio/virtqueue.c | 3 +-
drivers/net/virtio/virtqueue.h | 3 +-
examples/vhost_crypto/main.c | 54 +-
lib/cryptodev/cryptodev_pmd.h | 6 +
lib/vhost/vhost_crypto.c | 504 +++++++-
lib/vhost/vhost_user.h | 33 +-
lib/vhost/virtio_crypto.h | 82 +-
53 files changed, 4846 insertions(+), 615 deletions(-)
create mode 100644 drivers/common/virtio/meson.build
create mode 100644 drivers/common/virtio/version.map
rename drivers/{net => common}/virtio/virtio_logs.h (61%)
rename drivers/{net => common}/virtio/virtio_user/vhost.h (98%)
rename drivers/{net => common}/virtio/virtio_user/vhost_vdpa.c (96%)
rename drivers/crypto/virtio/{virtio_logs.h => virtio_crypto_logs.h} (74%)
create mode 100644 drivers/crypto/virtio/virtio_cvq.c
create mode 100644 drivers/crypto/virtio/virtio_cvq.h
create mode 100644 drivers/crypto/virtio/virtio_rxtx.h
create mode 100644 drivers/crypto/virtio/virtio_user/vhost_vdpa.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.h
create mode 100644 drivers/crypto/virtio/virtio_user_cryptodev.c
create mode 100644 drivers/net/virtio/virtio_net_logs.h
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 01/16] vhost: include AKCIPHER algorithms in crypto_config
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
@ 2024-12-24 7:36 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 02/16] crypto/virtio: remove redundant crypto queue free Gowrishankar Muthukrishnan
` (18 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:36 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Update the virtio_crypto_config structure to include AKCIPHER algorithms,
as per the VirtIO standard.
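With this change a device advertises RSA by setting the matching bit in
the new field; a minimal probe sketch (the bit position following the
VIRTIO_CRYPTO_AKCIPHER_* values is an assumption based on the VirtIO
spec and the QEMU backend, which sets 1u << VIRTIO_CRYPTO_AKCIPHER_RSA):

    /* Sketch: returns nonzero if the device offers RSA akcipher. */
    static inline int
    virtio_crypto_has_rsa(const struct virtio_crypto_config *cfg)
    {
            return !!(cfg->akcipher_algo & (1u << VIRTIO_CRYPTO_AKCIPHER_RSA));
    }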
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
lib/vhost/virtio_crypto.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/vhost/virtio_crypto.h b/lib/vhost/virtio_crypto.h
index e3b93573c8..28877a5da3 100644
--- a/lib/vhost/virtio_crypto.h
+++ b/lib/vhost/virtio_crypto.h
@@ -410,7 +410,7 @@ struct virtio_crypto_config {
uint32_t max_cipher_key_len;
/* Maximum length of authenticated key */
uint32_t max_auth_key_len;
- uint32_t reserve;
+ uint32_t akcipher_algo;
/* Maximum size of each crypto request's content */
uint64_t max_size;
};
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 02/16] crypto/virtio: remove redundant crypto queue free
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
2024-12-24 7:36 ` [v1 01/16] vhost: include AKCIPHER algorithms in crypto_config Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 03/16] crypto/virtio: add asymmetric RSA support Gowrishankar Muthukrishnan
` (17 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang,
Jay Zhou, Thomas Monjalon
Cc: jerinj, anoobj, Rajesh Mudimadugula
From: Rajesh Mudimadugula <rmudimadugul@marvell.com>
Remove duplicate invocations of virtio_crypto_queue_release(),
and set the virtio crypto queue pointer to NULL upon free to
avoid segfaults.
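The idiom, as a sketch: hw->cvq used to be released both in
virtio_crypto_free_queues() and again in virtio_crypto_dev_uninit();
clearing the pointer right after the free makes any later release
attempt a no-op (assuming virtio_crypto_queue_release() tolerates NULL):

    virtio_crypto_queue_release(hw->cvq);
    hw->cvq = NULL;        /* a second release now sees NULL */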
Signed-off-by: Rajesh Mudimadugula <rmudimadugul@marvell.com>
---
.mailmap | 1 +
drivers/crypto/virtio/virtio_cryptodev.c | 11 +++++------
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/.mailmap b/.mailmap
index 818798273f..92d77bbb45 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1247,6 +1247,7 @@ Rahul Gupta <rahul.gupta@broadcom.com>
Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Rahul Shah <rahul.r.shah@intel.com>
Raja Zidane <rzidane@nvidia.com>
+Rajesh Mudimadugula <rmudimadugul@marvell.com>
Rajesh Ravi <rajesh.ravi@broadcom.com>
Rakesh Kudurumalla <rkudurumalla@marvell.com> <rkudurumalla@caviumnetworks.com>
Ralf Hoffmann <ralf.hoffmann@allegro-packets.com>
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 643921dc02..98415af123 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -478,10 +478,13 @@ virtio_crypto_free_queues(struct rte_cryptodev *dev)
/* control queue release */
virtio_crypto_queue_release(hw->cvq);
+ hw->cvq = NULL;
/* data queue release */
- for (i = 0; i < hw->max_dataqueues; i++)
+ for (i = 0; i < hw->max_dataqueues; i++) {
virtio_crypto_queue_release(dev->data->queue_pairs[i]);
+ dev->data->queue_pairs[i] = NULL;
+ }
}
static int
@@ -613,6 +616,7 @@ virtio_crypto_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
}
virtio_crypto_queue_release(vq);
+ dev->data->queue_pairs[queue_pair_id] = NULL;
return 0;
}
@@ -760,8 +764,6 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
static int
virtio_crypto_dev_uninit(struct rte_cryptodev *cryptodev)
{
- struct virtio_crypto_hw *hw = cryptodev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
@@ -776,9 +778,6 @@ virtio_crypto_dev_uninit(struct rte_cryptodev *cryptodev)
cryptodev->enqueue_burst = NULL;
cryptodev->dequeue_burst = NULL;
- /* release control queue */
- virtio_crypto_queue_release(hw->cvq);
-
rte_free(cryptodev->data);
cryptodev->data = NULL;
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 03/16] crypto/virtio: add asymmetric RSA support
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
2024-12-24 7:36 ` [v1 01/16] vhost: include AKCIPHER algorithms in crypto_config Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 02/16] crypto/virtio: remove redundant crypto queue free Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 04/16] test/crypto: check for RSA capability Gowrishankar Muthukrishnan
` (16 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Add support for asymmetric RSA operations (SIGN, VERIFY, ENCRYPT
and DECRYPT) in the virtio PMD.
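The RSA key is passed to the device as a PKCS#1 RSAPrivateKey DER blob
built by the new tlv_encode() helper; the length octets follow the
standard DER definite-length rules, e.g. for an INTEGER (tag 0x02):

    /*
     * len <= 127:    02 <len> <data>          (short form)
     * len <= 255:    02 81 <len> <data>       (one length octet)
     * len <= 65535:  02 82 <hi> <lo> <data>   (two length octets)
     * A 300-byte INTEGER is thus encoded as 02 82 01 2c <300 bytes>.
     */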
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
.../virtio/virtio_crypto_capabilities.h | 19 +
drivers/crypto/virtio/virtio_cryptodev.c | 384 +++++++++++++++---
drivers/crypto/virtio/virtio_rxtx.c | 226 ++++++++++-
lib/cryptodev/cryptodev_pmd.h | 6 +
lib/vhost/virtio_crypto.h | 80 ++++
5 files changed, 647 insertions(+), 68 deletions(-)
diff --git a/drivers/crypto/virtio/virtio_crypto_capabilities.h b/drivers/crypto/virtio/virtio_crypto_capabilities.h
index 03c30deefd..1b26ff6720 100644
--- a/drivers/crypto/virtio/virtio_crypto_capabilities.h
+++ b/drivers/crypto/virtio/virtio_crypto_capabilities.h
@@ -48,4 +48,23 @@
}, } \
}
+#define VIRTIO_ASYM_CAPABILITIES \
+ { /* RSA */ \
+ .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \
+ {.asym = { \
+ .xform_capa = { \
+ .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, \
+ .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \
+ (1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \
+ (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \
+ (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), \
+ {.modlen = { \
+ .min = 1, \
+ .max = 1024, \
+ .increment = 1 \
+ }, } \
+ } \
+ }, } \
+ }
+
#endif /* _VIRTIO_CRYPTO_CAPABILITIES_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 98415af123..f9a3f1e13a 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -41,6 +41,11 @@ static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *session);
+static void virtio_crypto_asym_clear_session(struct rte_cryptodev *dev,
+ struct rte_cryptodev_asym_session *sess);
+static int virtio_crypto_asym_configure_session(struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *session);
/*
* The set of PCI devices this driver supports
@@ -53,6 +58,7 @@ static const struct rte_pci_id pci_id_virtio_crypto_map[] = {
static const struct rte_cryptodev_capabilities virtio_capabilities[] = {
VIRTIO_SYM_CAPABILITIES,
+ VIRTIO_ASYM_CAPABILITIES,
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
@@ -88,7 +94,7 @@ virtio_crypto_send_command(struct virtqueue *vq,
return -EINVAL;
}
/* cipher only is supported, it is available if auth_key is NULL */
- if (!cipher_key) {
+ if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER && !cipher_key) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("cipher key is NULL.");
return -EINVAL;
}
@@ -104,19 +110,23 @@ virtio_crypto_send_command(struct virtqueue *vq,
/* calculate the length of cipher key */
if (cipher_key) {
- switch (ctrl->u.sym_create_session.op_type) {
- case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- len_cipher_key
- = ctrl->u.sym_create_session.u.cipher
- .para.keylen;
- break;
- case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- len_cipher_key
- = ctrl->u.sym_create_session.u.chain
- .para.cipher_param.keylen;
- break;
- default:
- VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
+ if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER) {
+ switch (ctrl->u.sym_create_session.op_type) {
+ case VIRTIO_CRYPTO_SYM_OP_CIPHER:
+ len_cipher_key = ctrl->u.sym_create_session.u.cipher.para.keylen;
+ break;
+ case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
+ len_cipher_key =
+ ctrl->u.sym_create_session.u.chain.para.cipher_param.keylen;
+ break;
+ default:
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
+ return -EINVAL;
+ }
+ } else if (session->ctrl.header.algo == VIRTIO_CRYPTO_AKCIPHER_RSA) {
+ len_cipher_key = ctrl->u.akcipher_create_session.para.keylen;
+ } else {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid crypto service for cipher key");
return -EINVAL;
}
}
@@ -513,7 +523,10 @@ static struct rte_cryptodev_ops virtio_crypto_dev_ops = {
/* Crypto related operations */
.sym_session_get_size = virtio_crypto_sym_get_session_private_size,
.sym_session_configure = virtio_crypto_sym_configure_session,
- .sym_session_clear = virtio_crypto_sym_clear_session
+ .sym_session_clear = virtio_crypto_sym_clear_session,
+ .asym_session_get_size = virtio_crypto_sym_get_session_private_size,
+ .asym_session_configure = virtio_crypto_asym_configure_session,
+ .asym_session_clear = virtio_crypto_asym_clear_session
};
static void
@@ -737,6 +750,8 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
cryptodev->dequeue_burst = virtio_crypto_pkt_rx_burst;
cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
@@ -924,32 +939,24 @@ virtio_crypto_check_sym_clear_session_paras(
#define NUM_ENTRY_SYM_CLEAR_SESSION 2
static void
-virtio_crypto_sym_clear_session(
+virtio_crypto_clear_session(
struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
+ struct virtio_crypto_op_ctrl_req *ctrl)
{
struct virtio_crypto_hw *hw;
struct virtqueue *vq;
- struct virtio_crypto_session *session;
- struct virtio_crypto_op_ctrl_req *ctrl;
struct vring_desc *desc;
uint8_t *status;
uint8_t needed = 1;
uint32_t head;
- uint8_t *malloc_virt_addr;
uint64_t malloc_phys_addr;
uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
uint32_t desc_offset = len_op_ctrl_req + len_inhdr;
-
- PMD_INIT_FUNC_TRACE();
-
- if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
- return;
+ uint64_t session_id = ctrl->u.destroy_session.session_id;
hw = dev->data->dev_private;
vq = hw->cvq;
- session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
"vq = %p", vq->vq_desc_head_idx, vq);
@@ -961,34 +968,15 @@ virtio_crypto_sym_clear_session(
return;
}
- /*
- * malloc memory to store information of ctrl request op,
- * returned status and desc vring
- */
- malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
- + NUM_ENTRY_SYM_CLEAR_SESSION
- * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
- if (malloc_virt_addr == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
- return;
- }
- malloc_phys_addr = rte_malloc_virt2iova(malloc_virt_addr);
-
- /* assign ctrl request op part */
- ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
- ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
- /* default data virtqueue is 0 */
- ctrl->header.queue_id = 0;
- ctrl->u.destroy_session.session_id = session->session_id;
+ malloc_phys_addr = rte_malloc_virt2iova(ctrl);
/* status part */
status = &(((struct virtio_crypto_inhdr *)
- ((uint8_t *)malloc_virt_addr + len_op_ctrl_req))->status);
+ ((uint8_t *)ctrl + len_op_ctrl_req))->status);
*status = VIRTIO_CRYPTO_ERR;
/* indirect desc vring part */
- desc = (struct vring_desc *)((uint8_t *)malloc_virt_addr
- + desc_offset);
+ desc = (struct vring_desc *)((uint8_t *)ctrl + desc_offset);
/* ctrl request part */
desc[0].addr = malloc_phys_addr;
@@ -1050,8 +1038,8 @@ virtio_crypto_sym_clear_session(
if (*status != VIRTIO_CRYPTO_OK) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("Close session failed "
"status=%"PRIu32", session_id=%"PRIu64"",
- *status, session->session_id);
- rte_free(malloc_virt_addr);
+ *status, session_id);
+ rte_free(ctrl);
return;
}
@@ -1059,9 +1047,86 @@ virtio_crypto_sym_clear_session(
VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx=%d", vq->vq_desc_head_idx);
VIRTIO_CRYPTO_SESSION_LOG_INFO("Close session %"PRIu64" successfully ",
- session->session_id);
+ session_id);
+
+ rte_free(ctrl);
+}
+
+static void
+virtio_crypto_sym_clear_session(
+ struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ struct virtio_crypto_op_ctrl_req *ctrl;
+ struct virtio_crypto_session *session;
+ uint8_t *malloc_virt_addr;
- rte_free(malloc_virt_addr);
+ PMD_INIT_FUNC_TRACE();
+
+ if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
+ return;
+
+ session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
+
+ /*
+ * malloc memory to store information of ctrl request op,
+ * returned status and desc vring
+ */
+ malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
+ + NUM_ENTRY_SYM_CLEAR_SESSION
+ * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+ if (malloc_virt_addr == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
+ return;
+ }
+
+ /* assign ctrl request op part */
+ ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
+ ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
+ /* default data virtqueue is 0 */
+ ctrl->header.queue_id = 0;
+ ctrl->u.destroy_session.session_id = session->session_id;
+
+ return virtio_crypto_clear_session(dev, ctrl);
+}
+
+static void
+virtio_crypto_asym_clear_session(
+ struct rte_cryptodev *dev,
+ struct rte_cryptodev_asym_session *sess)
+{
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ struct virtio_crypto_op_ctrl_req *ctrl;
+ struct virtio_crypto_session *session;
+ uint8_t *malloc_virt_addr;
+
+ PMD_INIT_FUNC_TRACE();
+
+ session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
+
+ /*
+ * malloc memory to store information of ctrl request op,
+ * returned status and desc vring
+ */
+ malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
+ + NUM_ENTRY_SYM_CLEAR_SESSION
+ * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+ if (malloc_virt_addr == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
+ return;
+ }
+
+ /* assign ctrl request op part */
+ ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
+ ctrl->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION;
+ /* default data virtqueue is 0 */
+ ctrl->header.queue_id = 0;
+ ctrl->u.destroy_session.session_id = session->session_id;
+
+ return virtio_crypto_clear_session(dev, ctrl);
}
static struct rte_crypto_cipher_xform *
@@ -1292,6 +1357,23 @@ virtio_crypto_check_sym_configure_session_paras(
return 0;
}
+static int
+virtio_crypto_check_asym_configure_session_paras(
+ struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *asym_sess)
+{
+ if (unlikely(xform == NULL) || unlikely(asym_sess == NULL)) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
+ return -1;
+ }
+
+ if (virtio_crypto_check_sym_session_paras(dev) < 0)
+ return -1;
+
+ return 0;
+}
+
static int
virtio_crypto_sym_configure_session(
struct rte_cryptodev *dev,
@@ -1383,6 +1465,204 @@ virtio_crypto_sym_configure_session(
return -1;
}
+static size_t
+tlv_encode(uint8_t **tlv, uint8_t type, uint8_t *data, size_t len)
+{
+ uint8_t *lenval = NULL;
+ size_t lenval_n = 0;
+
+ if (len > 65535) {
+ goto _exit;
+ } else if (len > 255) {
+ lenval_n = 4 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = 0x82;
+ lenval[2] = (len & 0xFF00) >> 8;
+ lenval[3] = (len & 0xFF);
+ rte_memcpy(&lenval[4], data, len);
+ } else if (len > 127) {
+ lenval_n = 3 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = 0x81;
+ lenval[2] = len;
+ rte_memcpy(&lenval[3], data, len);
+ } else {
+ lenval_n = 2 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = len;
+ rte_memcpy(&lenval[2], data, len);
+ }
+
+_exit:
+ *tlv = lenval;
+ return lenval_n;
+}
+
+static int
+virtio_crypto_asym_rsa_xform_to_der(
+ struct rte_crypto_asym_xform *xform,
+ unsigned char **der)
+{
+ size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, tlen;
+ uint8_t *n, *e, *d, *p, *q, *dp, *dq, *qinv, *t;
+ uint8_t ver[3] = {0x02, 0x01, 0x00};
+
+ if (xform->xform_type != RTE_CRYPTO_ASYM_XFORM_RSA)
+ return -EINVAL;
+
+ /* Length of sequence in bytes */
+ tlen = RTE_DIM(ver);
+ nlen = tlv_encode(&n, 0x02, xform->rsa.n.data, xform->rsa.n.length);
+ elen = tlv_encode(&e, 0x02, xform->rsa.e.data, xform->rsa.e.length);
+ tlen += (nlen + elen);
+
+ dlen = tlv_encode(&d, 0x02, xform->rsa.d.data, xform->rsa.d.length);
+ tlen += dlen;
+
+ plen = tlv_encode(&p, 0x02, xform->rsa.qt.p.data, xform->rsa.qt.p.length);
+ qlen = tlv_encode(&q, 0x02, xform->rsa.qt.q.data, xform->rsa.qt.q.length);
+ dplen = tlv_encode(&dp, 0x02, xform->rsa.qt.dP.data, xform->rsa.qt.dP.length);
+ dqlen = tlv_encode(&dq, 0x02, xform->rsa.qt.dQ.data, xform->rsa.qt.dQ.length);
+ qinvlen = tlv_encode(&qinv, 0x02, xform->rsa.qt.qInv.data, xform->rsa.qt.qInv.length);
+ tlen += (plen + qlen + dplen + dqlen + qinvlen);
+
+ t = rte_malloc(NULL, tlen, 0);
+ *der = t;
+ rte_memcpy(t, ver, RTE_DIM(ver));
+ t += RTE_DIM(ver);
+ rte_memcpy(t, n, nlen);
+ t += nlen;
+ rte_memcpy(t, e, elen);
+ t += elen;
+ rte_free(n);
+ rte_free(e);
+
+ rte_memcpy(t, d, dlen);
+ t += dlen;
+ rte_free(d);
+
+ rte_memcpy(t, p, plen);
+ t += plen;
+ rte_memcpy(t, q, qlen);
+ t += qlen;
+ rte_memcpy(t, dp, dplen);
+ t += dplen;
+ rte_memcpy(t, dq, dqlen);
+ t += dqlen;
+ rte_memcpy(t, qinv, qinvlen);
+ t += qinvlen;
+ rte_free(p);
+ rte_free(q);
+ rte_free(dp);
+ rte_free(dq);
+ rte_free(qinv);
+
+ t = *der;
+ tlen = tlv_encode(der, 0x30, t, tlen);
+ return tlen;
+}
+
+static int
+virtio_crypto_asym_configure_session(
+ struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *sess)
+{
+ struct virtio_crypto_akcipher_session_para *para;
+ struct virtio_crypto_op_ctrl_req *ctrl_req;
+ struct virtio_crypto_session *session;
+ struct virtio_crypto_hw *hw;
+ struct virtqueue *control_vq;
+ uint8_t *key = NULL;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = virtio_crypto_check_asym_configure_session_paras(dev, xform,
+ sess);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
+ return ret;
+ }
+
+ session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
+ memset(session, 0, sizeof(struct virtio_crypto_session));
+ ctrl_req = &session->ctrl;
+ ctrl_req->header.opcode = VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION;
+ /* FIXME: support multiqueue */
+ ctrl_req->header.queue_id = 0;
+ para = &ctrl_req->u.akcipher_create_session.para;
+
+ switch (xform->xform_type) {
+ case RTE_CRYPTO_ASYM_XFORM_RSA:
+ ctrl_req->header.algo = VIRTIO_CRYPTO_AKCIPHER_RSA;
+ para->algo = VIRTIO_CRYPTO_AKCIPHER_RSA;
+
+ if (xform->rsa.key_type == RTE_RSA_KEY_TYPE_EXP)
+ para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC;
+ else
+ para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE;
+
+ if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_RAW_PADDING;
+ } else if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) {
+ para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_PKCS1_PADDING;
+ switch (xform->rsa.padding.hash) {
+ case RTE_CRYPTO_AUTH_SHA1:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_MD5:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_MD5;
+ break;
+ default:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_NO_HASH;
+ }
+ } else {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid padding type");
+ return -EINVAL;
+ }
+
+ ret = virtio_crypto_asym_rsa_xform_to_der(xform, &key);
+ if (ret <= 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid RSA primitives");
+ return ret;
+ }
+
+ ctrl_req->u.akcipher_create_session.para.keylen = ret;
+ break;
+ default:
+ para->algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ }
+
+ hw = dev->data->dev_private;
+ control_vq = hw->cvq;
+ ret = virtio_crypto_send_command(control_vq, ctrl_req,
+ key, NULL, session);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
+ goto error_out;
+ }
+
+ return 0;
+error_out:
+ return -1;
+}
+
static void
virtio_crypto_dev_info_get(struct rte_cryptodev *dev,
struct rte_cryptodev_info *info)
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index 48b5f4ebbb..d00af8b7ce 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -343,6 +343,196 @@ virtqueue_crypto_sym_enqueue_xmit(
return 0;
}
+static int
+virtqueue_crypto_asym_pkt_header_arrange(
+ struct rte_crypto_op *cop,
+ struct virtio_crypto_op_data_req *data,
+ struct virtio_crypto_session *session)
+{
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_op_data_req *req_data = data;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+
+ req_data->header.session_id = session->session_id;
+
+ switch (ctrl->header.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ req_data->header.algo = ctrl->header.algo;
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_SIGN;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.message.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.sign.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_VERIFY;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.sign.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.message.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_ENCRYPT;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.message.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.cipher.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DECRYPT;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.cipher.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.message.length;
+ } else {
+ return -EINVAL;
+ }
+
+ break;
+ default:
+ req_data->header.algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ }
+
+ return 0;
+}
+
+static int
+virtqueue_crypto_asym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t num_entry;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_desc *start_dp;
+ struct vring_desc *desc;
+ uint64_t indirect_op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ uint32_t indirect_vring_addr_offset = req_data_len +
+ sizeof(struct virtio_crypto_inhdr);
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_ASYM_SESS_PRIV(cop->asym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ indirect_op_data_req_phys_addr =
+ rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+ if (virtqueue_crypto_asym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ /* point to indirect vring entry */
+ desc = (struct vring_desc *)
+ ((uint8_t *)op_data_req + indirect_vring_addr_offset);
+ for (idx = 0; idx < (NUM_ENTRY_VIRTIO_CRYPTO_OP - 1); idx++)
+ desc[idx].next = idx + 1;
+ desc[NUM_ENTRY_VIRTIO_CRYPTO_OP - 1].next = VQ_RING_DESC_CHAIN_END;
+
+ idx = 0;
+
+ /* indirect vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = indirect_op_data_req_phys_addr;
+ desc[idx].len = req_data_len;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else {
+ VIRTIO_CRYPTO_TX_LOG_ERR("Invalid asym op");
+ return -EINVAL;
+ }
+
+ /* indirect vring: last part, status returned */
+ desc[idx].addr = indirect_op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = VRING_DESC_F_WRITE;
+
+ num_entry = idx;
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+ /* use a single buffer */
+ start_dp = txvq->vq_ring.desc;
+ start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
+ indirect_vring_addr_offset;
+ start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
+ start_dp[head_idx].flags = VRING_DESC_F_INDIRECT;
+
+ idx = start_dp[head_idx].next;
+ txvq->vq_desc_head_idx = idx;
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ vq_update_avail_ring(txvq, head_idx);
+
+ return 0;
+}
+
static int
virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
struct rte_crypto_op *cop)
@@ -353,6 +543,9 @@ virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
ret = virtqueue_crypto_sym_enqueue_xmit(txvq, cop);
break;
+ case RTE_CRYPTO_OP_TYPE_ASYMMETRIC:
+ ret = virtqueue_crypto_asym_enqueue_xmit(txvq, cop);
+ break;
default:
VIRTIO_CRYPTO_TX_LOG_ERR("invalid crypto op type %u",
cop->type);
@@ -476,27 +669,28 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
VIRTIO_CRYPTO_TX_LOG_DBG("%d packets to xmit", nb_pkts);
for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
- struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
- /* nb_segs is always 1 at virtio crypto situation */
- int need = txm->nb_segs - txvq->vq_free_cnt;
-
- /*
- * Positive value indicates it hasn't enough space in vring
- * descriptors
- */
- if (unlikely(need > 0)) {
+ if (tx_pkts[nb_tx]->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
+ /* nb_segs is always 1 at virtio crypto situation */
+ int need = txm->nb_segs - txvq->vq_free_cnt;
+
/*
- * try it again because the receive process may be
- * free some space
+ * Positive value indicates it hasn't enough space in vring
+ * descriptors
*/
- need = txm->nb_segs - txvq->vq_free_cnt;
if (unlikely(need > 0)) {
- VIRTIO_CRYPTO_TX_LOG_DBG("No free tx "
- "descriptors to transmit");
- break;
+ /*
+ * try it again because the receive process may be
+ * free some space
+ */
+ need = txm->nb_segs - txvq->vq_free_cnt;
+ if (unlikely(need > 0)) {
+ VIRTIO_CRYPTO_TX_LOG_DBG("No free tx "
+ "descriptors to transmit");
+ break;
+ }
}
}
-
txvq->packets_sent_total++;
/* Enqueue Packet buffers */
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 5c84a3b847..929c6defe9 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -715,6 +715,12 @@ struct rte_cryptodev_asym_session {
uint8_t sess_private_data[];
};
+/**
+ * Helper macro to get session private data
+ */
+#define CRYPTODEV_GET_ASYM_SESS_PRIV(s) \
+ ((void *)(((struct rte_cryptodev_asym_session *)s)->sess_private_data))
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/vhost/virtio_crypto.h b/lib/vhost/virtio_crypto.h
index 28877a5da3..d42af62f2f 100644
--- a/lib/vhost/virtio_crypto.h
+++ b/lib/vhost/virtio_crypto.h
@@ -9,6 +9,7 @@
#define VIRTIO_CRYPTO_SERVICE_HASH 1
#define VIRTIO_CRYPTO_SERVICE_MAC 2
#define VIRTIO_CRYPTO_SERVICE_AEAD 3
+#define VIRTIO_CRYPTO_SERVICE_AKCIPHER 4
#define VIRTIO_CRYPTO_OPCODE(service, op) (((service) << 8) | (op))
@@ -29,6 +30,10 @@ struct virtio_crypto_ctrl_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x02)
#define VIRTIO_CRYPTO_AEAD_DESTROY_SESSION \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x03)
+#define VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x04)
+#define VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x05)
uint32_t opcode;
uint32_t algo;
uint32_t flag;
@@ -152,6 +157,58 @@ struct virtio_crypto_aead_create_session_req {
uint8_t padding[32];
};
+struct virtio_crypto_rsa_session_para {
+#define VIRTIO_CRYPTO_RSA_RAW_PADDING 0
+#define VIRTIO_CRYPTO_RSA_PKCS1_PADDING 1
+ uint32_t padding_algo;
+
+#define VIRTIO_CRYPTO_RSA_NO_HASH 0
+#define VIRTIO_CRYPTO_RSA_MD2 1
+#define VIRTIO_CRYPTO_RSA_MD3 2
+#define VIRTIO_CRYPTO_RSA_MD4 3
+#define VIRTIO_CRYPTO_RSA_MD5 4
+#define VIRTIO_CRYPTO_RSA_SHA1 5
+#define VIRTIO_CRYPTO_RSA_SHA256 6
+#define VIRTIO_CRYPTO_RSA_SHA384 7
+#define VIRTIO_CRYPTO_RSA_SHA512 8
+#define VIRTIO_CRYPTO_RSA_SHA224 9
+ uint32_t hash_algo;
+};
+
+struct virtio_crypto_ecdsa_session_para {
+#define VIRTIO_CRYPTO_CURVE_UNKNOWN 0
+#define VIRTIO_CRYPTO_CURVE_NIST_P192 1
+#define VIRTIO_CRYPTO_CURVE_NIST_P224 2
+#define VIRTIO_CRYPTO_CURVE_NIST_P256 3
+#define VIRTIO_CRYPTO_CURVE_NIST_P384 4
+#define VIRTIO_CRYPTO_CURVE_NIST_P521 5
+ uint32_t curve_id;
+ uint32_t padding;
+};
+
+struct virtio_crypto_akcipher_session_para {
+#define VIRTIO_CRYPTO_NO_AKCIPHER 0
+#define VIRTIO_CRYPTO_AKCIPHER_RSA 1
+#define VIRTIO_CRYPTO_AKCIPHER_DSA 2
+#define VIRTIO_CRYPTO_AKCIPHER_ECDSA 3
+ uint32_t algo;
+
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC 1
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE 2
+ uint32_t keytype;
+ uint32_t keylen;
+
+ union {
+ struct virtio_crypto_rsa_session_para rsa;
+ struct virtio_crypto_ecdsa_session_para ecdsa;
+ } u;
+};
+
+struct virtio_crypto_akcipher_create_session_req {
+ struct virtio_crypto_akcipher_session_para para;
+ uint8_t padding[36];
+};
+
struct virtio_crypto_alg_chain_session_para {
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER 1
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH 2
@@ -219,6 +276,8 @@ struct virtio_crypto_op_ctrl_req {
mac_create_session;
struct virtio_crypto_aead_create_session_req
aead_create_session;
+ struct virtio_crypto_akcipher_create_session_req
+ akcipher_create_session;
struct virtio_crypto_destroy_session_req
destroy_session;
uint8_t padding[56];
@@ -238,6 +297,14 @@ struct virtio_crypto_op_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x00)
#define VIRTIO_CRYPTO_AEAD_DECRYPT \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_ENCRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x00)
+#define VIRTIO_CRYPTO_AKCIPHER_DECRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_SIGN \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x02)
+#define VIRTIO_CRYPTO_AKCIPHER_VERIFY \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x03)
uint32_t opcode;
/* algo should be service-specific algorithms */
uint32_t algo;
@@ -362,6 +429,16 @@ struct virtio_crypto_aead_data_req {
uint8_t padding[32];
};
+struct virtio_crypto_akcipher_para {
+ uint32_t src_data_len;
+ uint32_t dst_data_len;
+};
+
+struct virtio_crypto_akcipher_data_req {
+ struct virtio_crypto_akcipher_para para;
+ uint8_t padding[40];
+};
+
/* The request of the data virtqueue's packet */
struct virtio_crypto_op_data_req {
struct virtio_crypto_op_header header;
@@ -371,6 +448,7 @@ struct virtio_crypto_op_data_req {
struct virtio_crypto_hash_data_req hash_req;
struct virtio_crypto_mac_data_req mac_req;
struct virtio_crypto_aead_data_req aead_req;
+ struct virtio_crypto_akcipher_data_req akcipher_req;
uint8_t padding[48];
} u;
};
@@ -380,6 +458,8 @@ struct virtio_crypto_op_data_req {
#define VIRTIO_CRYPTO_BADMSG 2
#define VIRTIO_CRYPTO_NOTSUPP 3
#define VIRTIO_CRYPTO_INVSESS 4 /* Invalid session id */
+#define VIRTIO_CRYPTO_NOSPC 5 /* no free session ID */
+#define VIRTIO_CRYPTO_KEY_REJECTED 6 /* Signature verification failed */
/* The accelerator hardware is ready */
#define VIRTIO_CRYPTO_S_HW_READY (1 << 0)
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 04/16] test/crypto: check for RSA capability
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (2 preceding siblings ...)
2024-12-24 7:37 ` [v1 03/16] crypto/virtio: add asymmetric RSA support Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 05/16] test/crypto: return proper codes in create session Gowrishankar Muthukrishnan
` (15 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
In RSA crypto tests, check if RSA is supported by the PMD before
executing them.
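A sketch of the check, optionally extended with an op-type test via the
standard asym capability helper (the SIGN check is illustrative, not
part of this patch):

    struct rte_cryptodev_asym_capability_idx idx = {
            .type = RTE_CRYPTO_ASYM_XFORM_RSA,
    };
    const struct rte_cryptodev_asymmetric_xform_capability *capa;

    capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
    if (capa == NULL)
            return -ENOTSUP;        /* PMD does not expose RSA */
    if (!rte_cryptodev_asym_xform_capability_check_optype(capa,
                    RTE_CRYPTO_ASYM_OP_SIGN))
            return -ENOTSUP;        /* RSA present, SIGN unsupported */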
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
app/test/test_cryptodev_asym.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index e2f74702ad..364e81ecd9 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -234,11 +234,17 @@ test_rsa_sign_verify(void)
{
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
+ struct rte_cryptodev_asym_capability_idx idx;
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
int ret, status = TEST_SUCCESS;
+ /* Check RSA capability */
+ idx.type = RTE_CRYPTO_ASYM_XFORM_RSA;
+ if (rte_cryptodev_asym_capability_get(dev_id, &idx) == NULL)
+ return -ENOTSUP;
+
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
*/
@@ -274,11 +280,17 @@ test_rsa_enc_dec(void)
{
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
+ struct rte_cryptodev_asym_capability_idx idx;
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
int ret, status = TEST_SUCCESS;
+ /* Check RSA capability */
+ idx.type = RTE_CRYPTO_ASYM_XFORM_RSA;
+ if (rte_cryptodev_asym_capability_get(dev_id, &idx) == NULL)
+ return -ENOTSUP;
+
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
*/
@@ -314,11 +326,17 @@ test_rsa_sign_verify_crt(void)
{
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
+ struct rte_cryptodev_asym_capability_idx idx;
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
int ret, status = TEST_SUCCESS;
+ /* Check RSA capability */
+ idx.type = RTE_CRYPTO_ASYM_XFORM_RSA;
+ if (rte_cryptodev_asym_capability_get(dev_id, &idx) == NULL)
+ return -ENOTSUP;
+
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
*/
@@ -354,11 +372,17 @@ test_rsa_enc_dec_crt(void)
{
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
+ struct rte_cryptodev_asym_capability_idx idx;
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
int ret, status = TEST_SUCCESS;
+ /* Check RSA capability */
+ idx.type = RTE_CRYPTO_ASYM_XFORM_RSA;
+ if (rte_cryptodev_asym_capability_get(dev_id, &idx) == NULL)
+ return -ENOTSUP;
+
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
*/
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 05/16] test/crypto: return proper codes in create session
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (3 preceding siblings ...)
2024-12-24 7:37 ` [v1 04/16] test/crypto: check for RSA capability Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 06/16] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
` (14 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula
From: Rajesh Mudimadugula <rmudimadugul@marvell.com>
Return proper error codes in create_auth_session() to avoid
segfaults resulting from a NULL session later on.
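The resulting call-site pattern, as a sketch:

    sess = rte_cryptodev_sym_session_create(dev_id, &xform, sess_mpool);
    if (sess == NULL)
            return (rte_errno == ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;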
Signed-off-by: Rajesh Mudimadugula <rmudimadugul@marvell.com>
---
app/test/test_cryptodev.c | 38 ++++++++++++++++++++++++++++----------
1 file changed, 28 insertions(+), 10 deletions(-)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index a33ef574cc..7cddb1517c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -13006,6 +13006,8 @@ test_cryptodev_error_recover_helper(uint8_t dev_id, const void *test_data, bool
ut_params->sess = rte_cryptodev_sym_session_create(dev_id, &ut_params->cipher_xform,
ts_params->session_mpool);
+ if (ut_params->sess == NULL && rte_errno == ENOTSUP)
+ return TEST_SKIPPED;
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool, RTE_CRYPTO_OP_TYPE_SYMMETRIC);
@@ -14707,15 +14709,19 @@ test_multi_session(void)
sessions[i] = rte_cryptodev_sym_session_create(
ts_params->valid_devs[0], &ut_params->auth_xform,
ts_params->session_mpool);
- if (sessions[i] == NULL && rte_errno == ENOTSUP) {
+ if (sessions[i] == NULL) {
nb_sess = i;
- ret = TEST_SKIPPED;
+ if (rte_errno == ENOTSUP)
+ ret = TEST_SKIPPED;
+ else {
+ ret = TEST_FAILED;
+ printf("TestCase %s() line %d failed : "
+ "Session creation failed at session number %u",
+ __func__, __LINE__, i);
+ }
break;
}
- TEST_ASSERT_NOT_NULL(sessions[i],
- "Session creation failed at session number %u",
- i);
/* Attempt to send a request on each session */
ret = test_AES_CBC_HMAC_SHA512_decrypt_perform(
@@ -14843,15 +14849,19 @@ test_multi_session_random_usage(void)
ts_params->valid_devs[0],
&ut_paramz[i].ut_params.auth_xform,
ts_params->session_mpool);
- if (sessions[i] == NULL && rte_errno == ENOTSUP) {
+ if (sessions[i] == NULL) {
nb_sess = i;
- ret = TEST_SKIPPED;
+ if (rte_errno == ENOTSUP)
+ ret = TEST_SKIPPED;
+ else {
+ ret = TEST_FAILED;
+ printf("TestCase %s() line %d failed : "
+ "Session creation failed at session number %u",
+ __func__, __LINE__, i);
+ }
goto session_clear;
}
- TEST_ASSERT_NOT_NULL(sessions[i],
- "Session creation failed at session number %u",
- i);
}
nb_sess = i;
@@ -14934,6 +14944,8 @@ test_null_invalid_operation(void)
ut_params->sess = rte_cryptodev_sym_session_create(
ts_params->valid_devs[0], &ut_params->cipher_xform,
ts_params->session_mpool);
+ if (ut_params->sess == NULL && rte_errno == ENOTSUP)
+ return TEST_SKIPPED;
TEST_ASSERT(ut_params->sess == NULL,
"Session creation succeeded unexpectedly");
@@ -14948,6 +14960,8 @@ test_null_invalid_operation(void)
ut_params->sess = rte_cryptodev_sym_session_create(
ts_params->valid_devs[0], &ut_params->auth_xform,
ts_params->session_mpool);
+ if (ut_params->sess == NULL && rte_errno == ENOTSUP)
+ return TEST_SKIPPED;
TEST_ASSERT(ut_params->sess == NULL,
"Session creation succeeded unexpectedly");
@@ -15095,6 +15109,8 @@ test_enqdeq_callback_null_cipher(void)
/* Create Crypto session */
ut_params->sess = rte_cryptodev_sym_session_create(ts_params->valid_devs[0],
&ut_params->auth_xform, ts_params->session_mpool);
+ if (ut_params->sess == NULL && rte_errno == ENOTSUP)
+ return TEST_SKIPPED;
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool, RTE_CRYPTO_OP_TYPE_SYMMETRIC);
@@ -16155,6 +16171,7 @@ create_auth_session(struct crypto_unittest_params *ut_params,
ts_params->session_mpool);
if (ut_params->sess == NULL && rte_errno == ENOTSUP)
return TEST_SKIPPED;
+ TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
return 0;
}
@@ -16205,6 +16222,7 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
ts_params->session_mpool);
if (ut_params->sess == NULL && rte_errno == ENOTSUP)
return TEST_SKIPPED;
+ TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
return 0;
}
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 06/16] test/crypto: add asymmetric tests for virtio PMD
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (4 preceding siblings ...)
2024-12-24 7:37 ` [v1 05/16] test/crypto: return proper codes in create session Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 07/16] vhost: add asymmetric RSA support Gowrishankar Muthukrishnan
` (13 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Add asymmetric tests for Virtio PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
app/test/test_cryptodev_asym.c | 29 ++++++++++++++++++++++
app/test/test_cryptodev_rsa_test_vectors.h | 4 +++
2 files changed, 33 insertions(+)
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 364e81ecd9..ec7ab05a2d 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -3997,6 +3997,19 @@ static struct unit_test_suite cryptodev_octeontx_asym_testsuite = {
}
};
+static struct unit_test_suite cryptodev_virtio_asym_testsuite = {
+ .suite_name = "Crypto Device VIRTIO ASYM Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_capability),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym,
+ test_rsa_sign_verify_crt),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_rsa_enc_dec_crt),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
static int
test_cryptodev_openssl_asym(void)
{
@@ -4065,6 +4078,22 @@ test_cryptodev_cn10k_asym(void)
return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
}
+static int
+test_cryptodev_virtio_asym(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_VIRTIO_PMD));
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "virtio PMD must be loaded.\n");
+ return TEST_FAILED;
+ }
+
+ /* Use test suite registered for crypto_virtio PMD */
+ return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
+}
+
+REGISTER_DRIVER_TEST(cryptodev_virtio_asym_autotest, test_cryptodev_virtio_asym);
+
REGISTER_DRIVER_TEST(cryptodev_openssl_asym_autotest, test_cryptodev_openssl_asym);
REGISTER_DRIVER_TEST(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_DRIVER_TEST(cryptodev_octeontx_asym_autotest, test_cryptodev_octeontx_asym);
diff --git a/app/test/test_cryptodev_rsa_test_vectors.h b/app/test/test_cryptodev_rsa_test_vectors.h
index 1b7b451387..52d054c7d9 100644
--- a/app/test/test_cryptodev_rsa_test_vectors.h
+++ b/app/test/test_cryptodev_rsa_test_vectors.h
@@ -377,6 +377,10 @@ struct rte_crypto_asym_xform rsa_xform_crt = {
.length = sizeof(rsa_e)
},
.key_type = RTE_RSA_KEY_TYPE_QT,
+ .d = {
+ .data = rsa_d,
+ .length = sizeof(rsa_d)
+ },
.qt = {
.p = {
.data = rsa_p,
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 07/16] vhost: add asymmetric RSA support
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (5 preceding siblings ...)
2024-12-24 7:37 ` [v1 06/16] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 08/16] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
` (12 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Support asymmetric RSA crypto operations in vhost-user.
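Sessions are now wrapped in a small tagged union so that request
handling can dispatch on the session type; a consumer-side sketch
(lookup and attach; names outside this patch are assumptions):

    struct vhost_crypto_session *s = NULL;

    if (rte_hash_lookup_data(vcrypto->session_map, &session_id,
                    (void **)&s) < 0 || s == NULL)
            return -VIRTIO_CRYPTO_INVSESS;

    if (s->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC)
            op->asym->session = s->asym;
    else
            rte_crypto_op_attach_sym_session(op, s->sym);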
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
lib/vhost/vhost_crypto.c | 504 ++++++++++++++++++++++++++++++++++++---
lib/vhost/vhost_user.h | 33 ++-
2 files changed, 498 insertions(+), 39 deletions(-)
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 7caf6d9afa..6ce06ef42b 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -54,6 +54,14 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO);
*/
#define vhost_crypto_desc vring_desc
+struct vhost_crypto_session {
+ union {
+ struct rte_cryptodev_asym_session *asym;
+ struct rte_cryptodev_sym_session *sym;
+ };
+ enum rte_crypto_op_type type;
+};
+
static int
cipher_algo_transform(uint32_t virtio_cipher_algo,
enum rte_crypto_cipher_algorithm *algo)
@@ -206,8 +214,10 @@ struct __rte_cache_aligned vhost_crypto {
uint64_t last_session_id;
- uint64_t cache_session_id;
- struct rte_cryptodev_sym_session *cache_session;
+ uint64_t cache_sym_session_id;
+ struct rte_cryptodev_sym_session *cache_sym_session;
+ uint64_t cache_asym_session_id;
+ struct rte_cryptodev_asym_session *cache_asym_session;
/** socket id for the device */
int socket_id;
@@ -237,7 +247,7 @@ struct vhost_crypto_data_req {
static int
transform_cipher_param(struct rte_crypto_sym_xform *xform,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
int ret;
@@ -273,7 +283,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform,
static int
transform_chain_param(struct rte_crypto_sym_xform *xforms,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
struct rte_crypto_sym_xform *xform_cipher, *xform_auth;
int ret;
@@ -334,17 +344,17 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms,
}
static void
-vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+vhost_crypto_create_sym_sess(struct vhost_crypto *vcrypto,
VhostUserCryptoSessionParam *sess_param)
{
struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
struct rte_cryptodev_sym_session *session;
int ret;
- switch (sess_param->op_type) {
+ switch (sess_param->u.sym_sess.op_type) {
case VIRTIO_CRYPTO_SYM_OP_NONE:
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- ret = transform_cipher_param(&xform1, sess_param);
+ ret = transform_cipher_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session msg (%i)", ret);
sess_param->session_id = ret;
@@ -352,7 +362,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
}
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- if (unlikely(sess_param->hash_mode !=
+ if (unlikely(sess_param->u.sym_sess.hash_mode !=
VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)) {
sess_param->session_id = -VIRTIO_CRYPTO_NOTSUPP;
VC_LOG_ERR("Error transform session message (%i)",
@@ -362,7 +372,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
xform1.next = &xform2;
- ret = transform_chain_param(&xform1, sess_param);
+ ret = transform_chain_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session message (%i)", ret);
sess_param->session_id = ret;
@@ -402,22 +412,264 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
vcrypto->last_session_id++;
}
+static int
+tlv_decode(uint8_t *tlv, uint8_t type, uint8_t **data, size_t *data_len)
+{
+ size_t tlen = -EINVAL, len;
+
+ if (tlv[0] != type)
+ return -EINVAL;
+
+ if (tlv[1] == 0x82) {
+ len = (tlv[2] << 8) | tlv[3];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[4], len);
+ tlen = len + 4;
+ } else if (tlv[1] == 0x81) {
+ len = tlv[2];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[3], len);
+ tlen = len + 3;
+ } else {
+ len = tlv[1];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[2], len);
+ tlen = len + 2;
+ }
+
+ *data_len = len;
+ return tlen;
+}
+
+static int
+virtio_crypto_asym_rsa_der_to_xform(uint8_t *der, size_t der_len,
+ struct rte_crypto_asym_xform *xform)
+{
+ uint8_t *n = NULL, *e = NULL, *d = NULL, *p = NULL, *q = NULL, *dp = NULL,
+ *dq = NULL, *qinv = NULL, *v = NULL, *tlv;
+ size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, vlen;
+ int len;
+
+ RTE_SET_USED(der_len);
+
+ if (der[0] != 0x30)
+ return -EINVAL;
+
+ if (der[1] == 0x82)
+ tlv = &der[4];
+ else if (der[1] == 0x81)
+ tlv = &der[3];
+ else
+ return -EINVAL;
+
+ len = tlv_decode(tlv, 0x02, &v, &vlen);
+ if (len < 0 || v[0] != 0x0 || vlen != 1) {
+ len = -EINVAL;
+ goto _error;
+ }
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &n, &nlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &e, &elen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &d, &dlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &p, &plen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &q, &qlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dp, &dplen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dq, &dqlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &qinv, &qinvlen);
+ if (len < 0)
+ goto _error;
+
+ xform->rsa.n.data = n;
+ xform->rsa.n.length = nlen;
+ xform->rsa.e.data = e;
+ xform->rsa.e.length = elen;
+ xform->rsa.d.data = d;
+ xform->rsa.d.length = dlen;
+ xform->rsa.qt.p.data = p;
+ xform->rsa.qt.p.length = plen;
+ xform->rsa.qt.q.data = q;
+ xform->rsa.qt.q.length = qlen;
+ xform->rsa.qt.dP.data = dp;
+ xform->rsa.qt.dP.length = dplen;
+ xform->rsa.qt.dQ.data = dq;
+ xform->rsa.qt.dQ.length = dqlen;
+ xform->rsa.qt.qInv.data = qinv;
+ xform->rsa.qt.qInv.length = qinvlen;
+
+ RTE_ASSERT((tlv + len - &der[0]) == der_len);
+ return 0;
+_error:
+ rte_free(v);
+ rte_free(n);
+ rte_free(e);
+ rte_free(d);
+ rte_free(p);
+ rte_free(q);
+ rte_free(dp);
+ rte_free(dq);
+ rte_free(qinv);
+ return len;
+}
+
+static int
+transform_rsa_param(struct rte_crypto_asym_xform *xform,
+ VhostUserCryptoAsymSessionParam *param)
+{
+ int ret = -EINVAL;
+
+ ret = virtio_crypto_asym_rsa_der_to_xform(param->key_buf, param->key_len, xform);
+ if (ret < 0)
+ goto _error;
+
+ switch (param->u.rsa.padding_algo) {
+ case VIRTIO_CRYPTO_RSA_RAW_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_NONE;
+ break;
+ case VIRTIO_CRYPTO_RSA_PKCS1_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
+ break;
+ default:
+ VC_LOG_ERR("Unknown padding type");
+ goto _error;
+ }
+
+ xform->rsa.key_type = RTE_RSA_KEY_TYPE_QT;
+ xform->xform_type = RTE_CRYPTO_ASYM_XFORM_RSA;
+_error:
+ return ret;
+}
+
+static void
+vhost_crypto_create_asym_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ struct rte_cryptodev_asym_session *session = NULL;
+ struct vhost_crypto_session *vhost_session;
+ struct rte_crypto_asym_xform xform = {0};
+ int ret;
+
+ switch (sess_param->u.asym_sess.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ ret = transform_rsa_param(&xform, &sess_param->u.asym_sess);
+ if (unlikely(ret)) {
+ VC_LOG_ERR("Error transform session msg (%i)", ret);
+ sess_param->session_id = ret;
+ return;
+ }
+ break;
+ default:
+ VC_LOG_ERR("Invalid op algo");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ ret = rte_cryptodev_asym_session_create(vcrypto->cid, &xform,
+ vcrypto->sess_pool, (void *)&session);
+ if (ret < 0 || session == NULL) {
+ VC_LOG_ERR("Failed to create session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ /* insert session to map */
+ vhost_session = rte_malloc(NULL, sizeof(*vhost_session), 0);
+ if (vhost_session == NULL) {
+ VC_LOG_ERR("Failed to alloc session memory");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ vhost_session->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ vhost_session->asym = session;
+ if ((rte_hash_add_key_data(vcrypto->session_map,
+ &vcrypto->last_session_id, vhost_session) < 0)) {
+ VC_LOG_ERR("Failed to insert session to hash table");
+
+ if (rte_cryptodev_asym_session_free(vcrypto->cid, session) < 0)
+ VC_LOG_ERR("Failed to free session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ VC_LOG_INFO("Session %"PRIu64" created for vdev %i.",
+ vcrypto->last_session_id, vcrypto->dev->vid);
+
+ sess_param->session_id = vcrypto->last_session_id;
+ vcrypto->last_session_id++;
+}
+
+static void
+vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ if (sess_param->op_code == VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION)
+ vhost_crypto_create_asym_sess(vcrypto, sess_param);
+ else
+ vhost_crypto_create_sym_sess(vcrypto, sess_param);
+}
+
static int
vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
{
- struct rte_cryptodev_sym_session *session;
+ struct rte_cryptodev_asym_session *asym_session = NULL;
+ struct rte_cryptodev_sym_session *sym_session = NULL;
+ struct vhost_crypto_session *vhost_session = NULL;
uint64_t sess_id = session_id;
int ret;
ret = rte_hash_lookup_data(vcrypto->session_map, &sess_id,
- (void **)&session);
-
+ (void **)&vhost_session);
if (unlikely(ret < 0)) {
- VC_LOG_ERR("Failed to delete session %"PRIu64".", session_id);
+ VC_LOG_ERR("Failed to find session for id %"PRIu64".", session_id);
+ return -VIRTIO_CRYPTO_INVSESS;
+ }
+
+ if (vhost_session->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ sym_session = vhost_session->sym;
+ } else if (vhost_session->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ asym_session = vhost_session->asym;
+ } else {
+ VC_LOG_ERR("Invalid session for id %"PRIu64".", session_id);
return -VIRTIO_CRYPTO_INVSESS;
}
- if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0) {
+ if (sym_session != NULL &&
+ rte_cryptodev_sym_session_free(vcrypto->cid, sym_session) < 0) {
+ VC_LOG_DBG("Failed to free session");
+ return -VIRTIO_CRYPTO_ERR;
+ }
+
+ if (asym_session != NULL &&
+ rte_cryptodev_asym_session_free(vcrypto->cid, asym_session) < 0) {
VC_LOG_DBG("Failed to free session");
return -VIRTIO_CRYPTO_ERR;
}
@@ -430,6 +682,7 @@ vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
VC_LOG_INFO("Session %"PRIu64" deleted for vdev %i.", sess_id,
vcrypto->dev->vid);
+ rte_free(vhost_session);
return 0;
}
@@ -1123,6 +1376,118 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline uint8_t
+vhost_crypto_check_akcipher_request(struct virtio_crypto_akcipher_data_req *req)
+{
+ RTE_SET_USED(req);
+ return VIRTIO_CRYPTO_OK;
+}
+
+static __rte_always_inline uint8_t
+prepare_asym_rsa_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
+ struct vhost_crypto_data_req *vc_req,
+ struct virtio_crypto_op_data_req *req,
+ struct vhost_crypto_desc *head,
+ uint32_t max_n_descs)
+{
+ uint8_t ret = vhost_crypto_check_akcipher_request(&req->u.akcipher_req);
+ struct rte_crypto_rsa_op_param *rsa = &op->asym->rsa;
+ struct vhost_crypto_desc *desc = head;
+ uint16_t wlen = 0;
+
+ if (unlikely(ret != VIRTIO_CRYPTO_OK))
+ goto error_exit;
+
+ /* prepare */
+ switch (vcrypto->option) {
+ case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
+ vc_req->wb_pool = vcrypto->wb_pool;
+ if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_SIGN) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_SIGN;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->sign.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->sign.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->sign.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_VERIFY;
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->sign.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_ENCRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->cipher.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->cipher.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->cipher.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_DECRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->cipher.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else {
+ goto error_exit;
+ }
+ break;
+ case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE:
+ default:
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ vc_req->inhdr = get_data_ptr(vc_req, desc, VHOST_ACCESS_WO);
+ if (unlikely(vc_req->inhdr == NULL)) {
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ vc_req->inhdr->status = VIRTIO_CRYPTO_OK;
+ vc_req->len = wlen + INHDR_LEN;
+ return 0;
+error_exit:
+ if (vc_req->wb)
+ free_wb_data(vc_req->wb, vc_req->wb_pool);
+
+ vc_req->len = INHDR_LEN;
+ return ret;
+}
+
/**
* Process on descriptor
*/
@@ -1133,17 +1498,21 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
uint16_t desc_idx)
__rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? */
{
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src);
- struct rte_cryptodev_sym_session *session;
+ struct vhost_crypto_data_req *vc_req, *vc_req_out;
+ struct rte_cryptodev_asym_session *asym_session;
+ struct rte_cryptodev_sym_session *sym_session;
+ struct vhost_crypto_session *vhost_session;
+ struct vhost_crypto_desc *desc = descs;
+ uint32_t nb_descs = 0, max_n_descs, i;
+ struct vhost_crypto_data_req data_req;
struct virtio_crypto_op_data_req req;
struct virtio_crypto_inhdr *inhdr;
- struct vhost_crypto_desc *desc = descs;
struct vring_desc *src_desc;
uint64_t session_id;
uint64_t dlen;
- uint32_t nb_descs = 0, max_n_descs, i;
int err;
+ vc_req = &data_req;
vc_req->desc_idx = desc_idx;
vc_req->dev = vcrypto->dev;
vc_req->vq = vq;
@@ -1226,12 +1595,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
switch (req.header.opcode) {
case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
case VIRTIO_CRYPTO_CIPHER_DECRYPT:
+ vc_req_out = rte_mbuf_to_priv(op->sym->m_src);
+ rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
session_id = req.header.session_id;
/* one branch to avoid unnecessary table lookup */
- if (vcrypto->cache_session_id != session_id) {
+ if (vcrypto->cache_sym_session_id != session_id) {
err = rte_hash_lookup_data(vcrypto->session_map,
- &session_id, (void **)&session);
+ &session_id, (void **)&vhost_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to find session %"PRIu64,
@@ -1239,13 +1610,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
goto error_exit;
}
- vcrypto->cache_session = session;
- vcrypto->cache_session_id = session_id;
+ vcrypto->cache_sym_session = vhost_session->sym;
+ vcrypto->cache_sym_session_id = session_id;
}
- session = vcrypto->cache_session;
+ sym_session = vcrypto->cache_sym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
- err = rte_crypto_op_attach_sym_session(op, session);
+ err = rte_crypto_op_attach_sym_session(op, sym_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to attach session to op");
@@ -1257,12 +1629,12 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
err = VIRTIO_CRYPTO_NOTSUPP;
break;
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- err = prepare_sym_cipher_op(vcrypto, op, vc_req,
+ err = prepare_sym_cipher_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.cipher, desc,
max_n_descs);
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- err = prepare_sym_chain_op(vcrypto, op, vc_req,
+ err = prepare_sym_chain_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.chain, desc,
max_n_descs);
break;
@@ -1271,6 +1643,53 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
VC_LOG_ERR("Failed to process sym request");
goto error_exit;
}
+ break;
+ case VIRTIO_CRYPTO_AKCIPHER_SIGN:
+ case VIRTIO_CRYPTO_AKCIPHER_VERIFY:
+ case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
+ case VIRTIO_CRYPTO_AKCIPHER_DECRYPT:
+ session_id = req.header.session_id;
+
+ /* one branch to avoid unnecessary table lookup */
+ if (vcrypto->cache_asym_session_id != session_id) {
+ err = rte_hash_lookup_data(vcrypto->session_map,
+ &session_id, (void **)&vhost_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to find asym session %"PRIu64,
+ session_id);
+ goto error_exit;
+ }
+
+ vcrypto->cache_asym_session = vhost_session->asym;
+ vcrypto->cache_asym_session_id = session_id;
+ }
+
+ asym_session = vcrypto->cache_asym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
+ err = rte_crypto_op_attach_asym_session(op, asym_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to attach asym session to op");
+ goto error_exit;
+ }
+
+ vc_req_out = rte_cryptodev_asym_session_get_user_data(asym_session);
+ rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
+ vc_req_out->wb = NULL;
+
+ switch (req.header.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ err = prepare_asym_rsa_op(vcrypto, op, vc_req_out,
+ &req, desc, max_n_descs);
+ break;
+ }
+ if (unlikely(err != 0)) {
+ VC_LOG_ERR("Failed to process asym request");
+ goto error_exit;
+ }
+
break;
default:
err = VIRTIO_CRYPTO_ERR;
@@ -1294,12 +1713,22 @@ static __rte_always_inline struct vhost_virtqueue *
vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
struct vhost_virtqueue *old_vq)
{
- struct rte_mbuf *m_src = op->sym->m_src;
- struct rte_mbuf *m_dst = op->sym->m_dst;
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(m_src);
+ struct rte_mbuf *m_src = NULL, *m_dst = NULL;
+ struct vhost_crypto_data_req *vc_req;
struct vhost_virtqueue *vq;
uint16_t used_idx, desc_idx;
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ m_src = op->sym->m_src;
+ m_dst = op->sym->m_dst;
+ vc_req = rte_mbuf_to_priv(m_src);
+ } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ vc_req = rte_cryptodev_asym_session_get_user_data(op->asym->session);
+ } else {
+ VC_LOG_ERR("Invalid crypto op type");
+ return NULL;
+ }
+
if (unlikely(!vc_req)) {
VC_LOG_ERR("Failed to retrieve vc_req");
return NULL;
@@ -1321,10 +1750,11 @@ vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
vq->used->ring[desc_idx].id = vq->avail->ring[desc_idx];
vq->used->ring[desc_idx].len = vc_req->len;
- rte_mempool_put(m_src->pool, (void *)m_src);
-
- if (m_dst)
- rte_mempool_put(m_dst->pool, (void *)m_dst);
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ rte_mempool_put(m_src->pool, (void *)m_src);
+ if (m_dst)
+ rte_mempool_put(m_dst->pool, (void *)m_dst);
+ }
return vc_req->vq;
}
@@ -1407,8 +1837,9 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
vcrypto->sess_pool = sess_pool;
vcrypto->cid = cryptodev_id;
- vcrypto->cache_session_id = UINT64_MAX;
- vcrypto->last_session_id = 1;
+ vcrypto->cache_sym_session_id = UINT64_MAX;
+ vcrypto->cache_asym_session_id = UINT64_MAX;
+ vcrypto->last_session_id = 0;
vcrypto->dev = dev;
vcrypto->option = RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE;
@@ -1580,6 +2011,9 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
vq = dev->virtqueue[qid];
+ if (!vq || !vq->avail)
+ return 0;
+
avail_idx = *((volatile uint16_t *)&vq->avail->idx);
start_idx = vq->last_used_idx;
count = avail_idx - start_idx;
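A note on the DER framing that tlv_decode() and virtio_crypto_asym_rsa_der_to_xform() above rely on: a PKCS#1 RSAPrivateKey is a SEQUENCE (tag 0x30) of nine INTEGERs (tag 0x02) holding version, n, e, d, p, q, dP, dQ and qInv, and each element encodes its length either in short form (one byte below 0x80) or in long form (0x81 or 0x82 followed by a 1- or 2-byte big-endian length). A minimal sketch of just that length rule, using a hypothetical der_length() helper that is not part of the patch:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Hypothetical helper, for illustration only: decode the DER length
     * field at p[0] into *len and return how many bytes the length field
     * itself occupies. tlv_decode() in the patch folds this rule, the tag
     * check and an rte_malloc'd copy of the content into a single call.
     */
    static int
    der_length(const uint8_t *p, size_t *len)
    {
        if (p[0] < 0x80) {            /* short form: content length < 128 */
            *len = p[0];
            return 1;
        }
        if (p[0] == 0x81) {           /* long form, one length byte */
            *len = p[1];
            return 2;
        }
        if (p[0] == 0x82) {           /* long form, two bytes, big endian */
            *len = ((size_t)p[1] << 8) | p[2];
            return 3;
        }
        return -1;                    /* longer forms not handled here */
    }

tlv_decode() returns the total TLV size (tag, length field and content), which is how the caller steps from one INTEGER to the next; length forms beyond 0x82, and therefore keys above 64 KiB, are out of scope for this patch.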
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index edf7adb3c0..3b9e3ce7c2 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -99,11 +99,10 @@ typedef struct VhostUserLog {
/* Comply with Cryptodev-Linux */
#define VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH 512
#define VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH 64
+#define VHOST_USER_CRYPTO_MAX_KEY_LENGTH 1024
/* Same structure as vhost-user backend session info */
-typedef struct VhostUserCryptoSessionParam {
- int64_t session_id;
- uint32_t op_code;
+typedef struct VhostUserCryptoSymSessionParam {
uint32_t cipher_algo;
uint32_t cipher_key_len;
uint32_t hash_algo;
@@ -114,10 +113,36 @@ typedef struct VhostUserCryptoSessionParam {
uint8_t dir;
uint8_t hash_mode;
uint8_t chaining_dir;
- uint8_t *ciphe_key;
+ uint8_t *cipher_key;
uint8_t *auth_key;
uint8_t cipher_key_buf[VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH];
uint8_t auth_key_buf[VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH];
+} VhostUserCryptoSymSessionParam;
+
+
+typedef struct VhostUserCryptoAsymRsaParam {
+ uint32_t padding_algo;
+ uint32_t hash_algo;
+} VhostUserCryptoAsymRsaParam;
+
+typedef struct VhostUserCryptoAsymSessionParam {
+ uint32_t algo;
+ uint32_t key_type;
+ uint32_t key_len;
+ uint8_t *key;
+ union {
+ VhostUserCryptoAsymRsaParam rsa;
+ } u;
+ uint8_t key_buf[VHOST_USER_CRYPTO_MAX_KEY_LENGTH];
+} VhostUserCryptoAsymSessionParam;
+
+typedef struct VhostUserCryptoSessionParam {
+ uint32_t op_code;
+ union {
+ VhostUserCryptoSymSessionParam sym_sess;
+ VhostUserCryptoAsymSessionParam asym_sess;
+ } u;
+ uint64_t session_id;
} VhostUserCryptoSessionParam;
typedef struct VhostUserVringArea {
--
2.25.1
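One design point in the reworked VhostUserCryptoSessionParam above: op_code stays outside the union, so the backend can decide which member is live before interpreting the payload. A minimal dispatch sketch, where handle_sym() and handle_asym() are hypothetical placeholders standing in for the create-session paths earlier in this patch:

    static void
    dispatch_session_msg(VhostUserCryptoSessionParam *msg)
    {
        /* op_code selects the live union member */
        if (msg->op_code == VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION)
            handle_asym(&msg->u.asym_sess); /* DER key in key_buf, key_len bytes */
        else
            handle_sym(&msg->u.sym_sess);   /* cipher/auth keys in fixed buffers */

        /* the backend replies in place: msg->session_id carries either the
         * new session id or a negative virtio-crypto error code */
    }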
* [v1 08/16] examples/vhost_crypto: add asymmetric support
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (6 preceding siblings ...)
2024-12-24 7:37 ` [v1 07/16] vhost: add asymmetric RSA support Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 09/16] crypto/virtio: fix dataqueues iteration Gowrishankar Muthukrishnan
` (11 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Add asymmetric crypto support.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
examples/vhost_crypto/main.c | 54 ++++++++++++++++++++++++++----------
1 file changed, 40 insertions(+), 14 deletions(-)
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index 558c09a60f..8bdfc40c4b 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -59,6 +59,7 @@ struct vhost_crypto_options {
uint32_t nb_los;
uint32_t zero_copy;
uint32_t guest_polling;
+ bool asymmetric_crypto;
} options;
enum {
@@ -70,6 +71,8 @@ enum {
OPT_ZERO_COPY_NUM,
#define OPT_POLLING "guest-polling"
OPT_POLLING_NUM,
+#define OPT_ASYM "asymmetric-crypto"
+ OPT_ASYM_NUM,
};
#define NB_SOCKET_FIELDS (2)
@@ -202,9 +205,10 @@ vhost_crypto_usage(const char *prgname)
" --%s <lcore>,SOCKET-FILE-PATH\n"
" --%s (lcore,cdev_id,queue_id)[,(lcore,cdev_id,queue_id)]\n"
" --%s: zero copy\n"
- " --%s: guest polling\n",
+ " --%s: guest polling\n"
+ " --%s: asymmetric crypto\n",
prgname, OPT_SOCKET_FILE, OPT_CONFIG,
- OPT_ZERO_COPY, OPT_POLLING);
+ OPT_ZERO_COPY, OPT_POLLING, OPT_ASYM);
}
static int
@@ -223,6 +227,8 @@ vhost_crypto_parse_args(int argc, char **argv)
NULL, OPT_ZERO_COPY_NUM},
{OPT_POLLING, no_argument,
NULL, OPT_POLLING_NUM},
+ {OPT_ASYM, no_argument,
+ NULL, OPT_ASYM_NUM},
{NULL, 0, 0, 0}
};
@@ -262,6 +268,10 @@ vhost_crypto_parse_args(int argc, char **argv)
options.guest_polling = 1;
break;
+ case OPT_ASYM_NUM:
+ options.asymmetric_crypto = true;
+ break;
+
default:
vhost_crypto_usage(prgname);
return -EINVAL;
@@ -362,8 +372,8 @@ destroy_device(int vid)
}
static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
- .new_device = new_device,
- .destroy_device = destroy_device,
+ .new_connection = new_device,
+ .destroy_connection = destroy_device,
};
static int
@@ -376,6 +386,7 @@ vhost_crypto_worker(void *arg)
int callfds[VIRTIO_CRYPTO_MAX_NUM_BURST_VQS];
uint32_t lcore_id = rte_lcore_id();
uint32_t burst_size = MAX_PKT_BURST;
+ enum rte_crypto_op_type cop_type;
uint32_t i, j, k;
uint32_t to_fetch, fetched;
@@ -383,9 +394,13 @@ vhost_crypto_worker(void *arg)
RTE_LOG(INFO, USER1, "Processing on Core %u started\n", lcore_id);
+ cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ if (options.asymmetric_crypto)
+ cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
for (i = 0; i < NB_VIRTIO_QUEUES; i++) {
if (rte_crypto_op_bulk_alloc(info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops[i],
+ cop_type, ops[i],
burst_size) < burst_size) {
RTE_LOG(ERR, USER1, "Failed to alloc cops\n");
ret = -1;
@@ -411,12 +426,11 @@ vhost_crypto_worker(void *arg)
fetched);
if (unlikely(rte_crypto_op_bulk_alloc(
info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ cop_type,
ops[j], fetched) < fetched)) {
RTE_LOG(ERR, USER1, "Failed realloc\n");
return -1;
}
-
fetched = rte_cryptodev_dequeue_burst(
info->cid, info->qid,
ops_deq[j], RTE_MIN(burst_size,
@@ -477,6 +491,7 @@ main(int argc, char *argv[])
struct rte_cryptodev_qp_conf qp_conf;
struct rte_cryptodev_config config;
struct rte_cryptodev_info dev_info;
+ enum rte_crypto_op_type cop_type;
char name[128];
uint32_t i, j, lcore;
int ret;
@@ -539,12 +554,21 @@ main(int argc, char *argv[])
goto error_exit;
}
- snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
- info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
- SESSION_MAP_ENTRIES,
- rte_cryptodev_sym_get_private_session_size(
- info->cid), 0, 0,
- rte_lcore_to_socket_id(lo->lcore_id));
+ if (!options.asymmetric_crypto) {
+ snprintf(name, 127, "SYM_SESS_POOL_%u", lo->lcore_id);
+ info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
+ SESSION_MAP_ENTRIES,
+ rte_cryptodev_sym_get_private_session_size(
+ info->cid), 0, 0,
+ rte_lcore_to_socket_id(lo->lcore_id));
+ cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ } else {
+ snprintf(name, 127, "ASYM_SESS_POOL_%u", lo->lcore_id);
+ info->sess_pool = rte_cryptodev_asym_session_pool_create(name,
+ SESSION_MAP_ENTRIES, 0, 64,
+ rte_lcore_to_socket_id(lo->lcore_id));
+ cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ }
if (!info->sess_pool) {
RTE_LOG(ERR, USER1, "Failed to create mempool");
@@ -553,7 +577,7 @@ main(int argc, char *argv[])
snprintf(name, 127, "COPPOOL_%u", lo->lcore_id);
info->cop_pool = rte_crypto_op_pool_create(name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_MEMPOOL_OBJS,
+ cop_type, NB_MEMPOOL_OBJS,
NB_CACHE_OBJS, VHOST_CRYPTO_MAX_IV_LEN,
rte_lcore_to_socket_id(lo->lcore_id));
@@ -567,6 +591,8 @@ main(int argc, char *argv[])
qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
qp_conf.mp_session = info->sess_pool;
+ if (options.asymmetric_crypto)
+ qp_conf.mp_session = NULL;
for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
ret = rte_cryptodev_queue_pair_setup(info->cid, j,
--
2.25.1
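A detail worth calling out in the session pool hunk above: symmetric and asymmetric pools come from different constructors with different sizing rules. A side-by-side sketch, assuming cdev_id and socket_id are in scope:

    /* symmetric: element size derives from the device's private session size */
    struct rte_mempool *sym_pool = rte_cryptodev_sym_session_pool_create(
            "SYM_SESS_POOL_0", SESSION_MAP_ENTRIES,
            rte_cryptodev_sym_get_private_session_size(cdev_id),
            0 /* cache */, 0 /* priv */, socket_id);

    /* asymmetric: no per-device element size argument; the 64 bytes of user
     * data per session are what the vhost library side of this series later
     * reaches via rte_cryptodev_asym_session_get_user_data() */
    struct rte_mempool *asym_pool = rte_cryptodev_asym_session_pool_create(
            "ASYM_SESS_POOL_0", SESSION_MAP_ENTRIES,
            0 /* cache */, 64 /* user data size */, socket_id);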
* [v1 09/16] crypto/virtio: fix dataqueues iteration
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (7 preceding siblings ...)
2024-12-24 7:37 ` [v1 08/16] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 10/16] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
` (10 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan, stable
Iterate over dataqueues using the nb_queue_pairs count available in
device data instead of the device maximum, as the number of dataqueues
may have been reduced during device configuration.
Fixes: 6f0175ff53e0 ("crypto/virtio: support basic PMD ops")
Cc: stable@dpdk.org
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/virtio/virtio_cryptodev.c | 3 +--
drivers/crypto/virtio/virtio_rxtx.c | 3 +--
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index f9a3f1e13a..afeab5a816 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -863,9 +863,8 @@ static void
virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev)
{
uint32_t i;
- struct virtio_crypto_hw *hw = dev->data->dev_private;
- for (i = 0; i < hw->max_dataqueues; i++) {
+ for (i = 0; i < dev->data->nb_queue_pairs; i++) {
VIRTIO_CRYPTO_INIT_LOG_DBG("Before freeing dataq[%d] used "
"and unused buf", i);
VIRTQUEUE_DUMP((struct virtqueue *)
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index d00af8b7ce..c456dc327e 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -612,12 +612,11 @@ virtio_crypto_dataq_start(struct rte_cryptodev *dev)
* - Setup vring structure for data queues
*/
uint16_t i;
- struct virtio_crypto_hw *hw = dev->data->dev_private;
PMD_INIT_FUNC_TRACE();
/* Start data vring. */
- for (i = 0; i < hw->max_dataqueues; i++) {
+ for (i = 0; i < dev->data->nb_queue_pairs; i++) {
virtio_crypto_vring_start(dev->data->queue_pairs[i]);
VIRTQUEUE_DUMP((struct virtqueue *)dev->data->queue_pairs[i]);
}
--
2.25.1
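The distinction matters because the application may configure fewer queue pairs than the device supports, leaving the tail of dev->data->queue_pairs[] unpopulated. A sketch of the scenario, with illustrative numbers:

    struct rte_cryptodev_config conf = {
        .socket_id = rte_socket_id(),
        .nb_queue_pairs = 2,    /* device may advertise e.g. 8 as maximum */
    };

    rte_cryptodev_configure(cdev_id, &conf);

    /* only queue_pairs[0] and queue_pairs[1] exist from here on; a loop
     * bounded by hw->max_dataqueues (8) would dereference NULL entries,
     * while one bounded by dev->data->nb_queue_pairs (2) stays in range */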
* [v1 10/16] crypto/virtio: refactor queue operations
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (8 preceding siblings ...)
2024-12-24 7:37 ` [v1 09/16] crypto/virtio: fix dataqueues iteration Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 11/16] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
` (9 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Move the existing control queue operations into a common place so that
they can be shared with other virtio device types.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/virtio/meson.build | 1 +
drivers/crypto/virtio/virtio_crypto_algs.h | 2 +-
drivers/crypto/virtio/virtio_cryptodev.c | 573 +++++++++------------
drivers/crypto/virtio/virtio_cvq.c | 130 +++++
drivers/crypto/virtio/virtio_cvq.h | 33 ++
drivers/crypto/virtio/virtio_pci.h | 6 +-
drivers/crypto/virtio/virtio_ring.h | 12 +-
drivers/crypto/virtio/virtio_rxtx.c | 42 +-
drivers/crypto/virtio/virtio_rxtx.h | 13 +
drivers/crypto/virtio/virtqueue.c | 191 ++++++-
drivers/crypto/virtio/virtqueue.h | 89 +++-
11 files changed, 706 insertions(+), 386 deletions(-)
create mode 100644 drivers/crypto/virtio/virtio_cvq.c
create mode 100644 drivers/crypto/virtio/virtio_cvq.h
create mode 100644 drivers/crypto/virtio/virtio_rxtx.h
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index 45533c9b89..d2c3b3ad07 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -11,6 +11,7 @@ includes += include_directories('../../../lib/vhost')
deps += 'bus_pci'
sources = files(
'virtio_cryptodev.c',
+ 'virtio_cvq.c',
'virtio_pci.c',
'virtio_rxtx.c',
'virtqueue.c',
diff --git a/drivers/crypto/virtio/virtio_crypto_algs.h b/drivers/crypto/virtio/virtio_crypto_algs.h
index 4c44af3733..3824017ca5 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.h
+++ b/drivers/crypto/virtio/virtio_crypto_algs.h
@@ -22,7 +22,7 @@ struct virtio_crypto_session {
phys_addr_t phys_addr;
} aad;
- struct virtio_crypto_op_ctrl_req ctrl;
+ struct virtio_pmd_ctrl ctrl;
};
#endif /* _VIRTIO_CRYPTO_ALGS_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index afeab5a816..9a11cbe90a 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -64,213 +64,6 @@ static const struct rte_cryptodev_capabilities virtio_capabilities[] = {
uint8_t cryptodev_virtio_driver_id;
-#define NUM_ENTRY_SYM_CREATE_SESSION 4
-
-static int
-virtio_crypto_send_command(struct virtqueue *vq,
- struct virtio_crypto_op_ctrl_req *ctrl, uint8_t *cipher_key,
- uint8_t *auth_key, struct virtio_crypto_session *session)
-{
- uint8_t idx = 0;
- uint8_t needed = 1;
- uint32_t head = 0;
- uint32_t len_cipher_key = 0;
- uint32_t len_auth_key = 0;
- uint32_t len_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
- uint32_t len_session_input = sizeof(struct virtio_crypto_session_input);
- uint32_t len_total = 0;
- uint32_t input_offset = 0;
- void *virt_addr_started = NULL;
- phys_addr_t phys_addr_started;
- struct vring_desc *desc;
- uint32_t desc_offset;
- struct virtio_crypto_session_input *input;
- int ret;
-
- PMD_INIT_FUNC_TRACE();
-
- if (session == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("session is NULL.");
- return -EINVAL;
- }
- /* cipher only is supported, it is available if auth_key is NULL */
- if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER && !cipher_key) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("cipher key is NULL.");
- return -EINVAL;
- }
-
- head = vq->vq_desc_head_idx;
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx = %d, vq = %p",
- head, vq);
-
- if (vq->vq_free_cnt < needed) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Not enough entry");
- return -ENOSPC;
- }
-
- /* calculate the length of cipher key */
- if (cipher_key) {
- if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER) {
- switch (ctrl->u.sym_create_session.op_type) {
- case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- len_cipher_key = ctrl->u.sym_create_session.u.cipher.para.keylen;
- break;
- case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- len_cipher_key =
- ctrl->u.sym_create_session.u.chain.para.cipher_param.keylen;
- break;
- default:
- VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
- return -EINVAL;
- }
- } else if (session->ctrl.header.algo == VIRTIO_CRYPTO_AKCIPHER_RSA) {
- len_cipher_key = ctrl->u.akcipher_create_session.para.keylen;
- } else {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid crypto service for cipher key");
- return -EINVAL;
- }
- }
-
- /* calculate the length of auth key */
- if (auth_key) {
- len_auth_key =
- ctrl->u.sym_create_session.u.chain.para.u.mac_param
- .auth_key_len;
- }
-
- /*
- * malloc memory to store indirect vring_desc entries, including
- * ctrl request, cipher key, auth key, session input and desc vring
- */
- desc_offset = len_ctrl_req + len_cipher_key + len_auth_key
- + len_session_input;
- virt_addr_started = rte_malloc(NULL,
- desc_offset + NUM_ENTRY_SYM_CREATE_SESSION
- * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
- if (virt_addr_started == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap memory");
- return -ENOSPC;
- }
- phys_addr_started = rte_malloc_virt2iova(virt_addr_started);
-
- /* address to store indirect vring desc entries */
- desc = (struct vring_desc *)
- ((uint8_t *)virt_addr_started + desc_offset);
-
- /* ctrl req part */
- memcpy(virt_addr_started, ctrl, len_ctrl_req);
- desc[idx].addr = phys_addr_started;
- desc[idx].len = len_ctrl_req;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_ctrl_req;
- input_offset += len_ctrl_req;
-
- /* cipher key part */
- if (len_cipher_key > 0) {
- memcpy((uint8_t *)virt_addr_started + len_total,
- cipher_key, len_cipher_key);
-
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_cipher_key;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_cipher_key;
- input_offset += len_cipher_key;
- }
-
- /* auth key part */
- if (len_auth_key > 0) {
- memcpy((uint8_t *)virt_addr_started + len_total,
- auth_key, len_auth_key);
-
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_auth_key;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_auth_key;
- input_offset += len_auth_key;
- }
-
- /* input part */
- input = (struct virtio_crypto_session_input *)
- ((uint8_t *)virt_addr_started + input_offset);
- input->status = VIRTIO_CRYPTO_ERR;
- input->session_id = ~0ULL;
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_session_input;
- desc[idx].flags = VRING_DESC_F_WRITE;
- idx++;
-
- /* use a single desc entry */
- vq->vq_ring.desc[head].addr = phys_addr_started + desc_offset;
- vq->vq_ring.desc[head].len = idx * sizeof(struct vring_desc);
- vq->vq_ring.desc[head].flags = VRING_DESC_F_INDIRECT;
- vq->vq_free_cnt--;
-
- vq->vq_desc_head_idx = vq->vq_ring.desc[head].next;
-
- vq_update_avail_ring(vq, head);
- vq_update_avail_idx(vq);
-
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_queue_index = %d",
- vq->vq_queue_index);
-
- virtqueue_notify(vq);
-
- rte_rmb();
- while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
- rte_rmb();
- usleep(100);
- }
-
- while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
- uint32_t idx, desc_idx, used_idx;
- struct vring_used_elem *uep;
-
- used_idx = (uint32_t)(vq->vq_used_cons_idx
- & (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
- idx = (uint32_t) uep->id;
- desc_idx = idx;
-
- while (vq->vq_ring.desc[desc_idx].flags & VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_ring.desc[desc_idx].next;
- vq->vq_free_cnt++;
- }
-
- vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
- vq->vq_desc_head_idx = idx;
-
- vq->vq_used_cons_idx++;
- vq->vq_free_cnt++;
- }
-
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_free_cnt=%d", vq->vq_free_cnt);
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx=%d", vq->vq_desc_head_idx);
-
- /* get the result */
- if (input->status != VIRTIO_CRYPTO_OK) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
- "status=%u, session_id=%" PRIu64 "",
- input->status, input->session_id);
- rte_free(virt_addr_started);
- ret = -1;
- } else {
- session->session_id = input->session_id;
-
- VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
- "session_id=%" PRIu64 "", input->session_id);
- rte_free(virt_addr_started);
- ret = 0;
- }
-
- return ret;
-}
-
void
virtio_crypto_queue_release(struct virtqueue *vq)
{
@@ -283,6 +76,7 @@ virtio_crypto_queue_release(struct virtqueue *vq)
/* Select and deactivate the queue */
VTPCI_OPS(hw)->del_queue(hw, vq);
+ hw->vqs[vq->vq_queue_index] = NULL;
rte_memzone_free(vq->mz);
rte_mempool_free(vq->mpool);
rte_free(vq);
@@ -301,8 +95,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
{
char vq_name[VIRTQUEUE_MAX_NAME_SZ];
char mpool_name[MPOOL_MAX_NAME_SZ];
- const struct rte_memzone *mz;
- unsigned int vq_size, size;
+ unsigned int vq_size;
struct virtio_crypto_hw *hw = dev->data->dev_private;
struct virtqueue *vq = NULL;
uint32_t i = 0;
@@ -341,16 +134,26 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
"dev%d_controlqueue_mpool",
dev->data->dev_id);
}
- size = RTE_ALIGN_CEIL(sizeof(*vq) +
- vq_size * sizeof(struct vq_desc_extra),
- RTE_CACHE_LINE_SIZE);
- vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
- socket_id);
+
+ /*
+ * Using part of the vring entries is permitted, but the maximum
+ * is vq_size
+ */
+ if (nb_desc == 0 || nb_desc > vq_size)
+ nb_desc = vq_size;
+
+ if (hw->vqs[vtpci_queue_idx])
+ vq = hw->vqs[vtpci_queue_idx];
+ else
+ vq = virtcrypto_queue_alloc(hw, vtpci_queue_idx, nb_desc,
+ socket_id, vq_name);
if (vq == NULL) {
VIRTIO_CRYPTO_INIT_LOG_ERR("Can not allocate virtqueue");
return -ENOMEM;
}
+ hw->vqs[vtpci_queue_idx] = vq;
+
if (queue_type == VTCRYPTO_DATAQ) {
/* pre-allocate a mempool and use it in the data plane to
* improve performance
@@ -358,7 +161,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
vq->mpool = rte_mempool_lookup(mpool_name);
if (vq->mpool == NULL)
vq->mpool = rte_mempool_create(mpool_name,
- vq_size,
+ nb_desc,
sizeof(struct virtio_crypto_op_cookie),
RTE_CACHE_LINE_SIZE, 0,
NULL, NULL, NULL, NULL, socket_id,
@@ -368,7 +171,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
"Cannot create mempool");
goto mpool_create_err;
}
- for (i = 0; i < vq_size; i++) {
+ for (i = 0; i < nb_desc; i++) {
vq->vq_descx[i].cookie =
rte_zmalloc("crypto PMD op cookie pointer",
sizeof(struct virtio_crypto_op_cookie),
@@ -381,67 +184,10 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
}
}
- vq->hw = hw;
- vq->dev_id = dev->data->dev_id;
- vq->vq_queue_index = vtpci_queue_idx;
- vq->vq_nentries = vq_size;
-
- /*
- * Using part of the vring entries is permitted, but the maximum
- * is vq_size
- */
- if (nb_desc == 0 || nb_desc > vq_size)
- nb_desc = vq_size;
- vq->vq_free_cnt = nb_desc;
-
- /*
- * Reserve a memzone for vring elements
- */
- size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
- vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
- VIRTIO_CRYPTO_INIT_LOG_DBG("%s vring_size: %d, rounded_vring_size: %d",
- (queue_type == VTCRYPTO_DATAQ) ? "dataq" : "ctrlq",
- size, vq->vq_ring_size);
-
- mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
- socket_id, 0, VIRTIO_PCI_VRING_ALIGN);
- if (mz == NULL) {
- if (rte_errno == EEXIST)
- mz = rte_memzone_lookup(vq_name);
- if (mz == NULL) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("not enough memory");
- goto mz_reserve_err;
- }
- }
-
- /*
- * Virtio PCI device VIRTIO_PCI_QUEUE_PF register is 32bit,
- * and only accepts 32 bit page frame number.
- * Check if the allocated physical memory exceeds 16TB.
- */
- if ((mz->iova + vq->vq_ring_size - 1)
- >> (VIRTIO_PCI_QUEUE_ADDR_SHIFT + 32)) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("vring address shouldn't be "
- "above 16TB!");
- goto vring_addr_err;
- }
-
- memset(mz->addr, 0, sizeof(mz->len));
- vq->mz = mz;
- vq->vq_ring_mem = mz->iova;
- vq->vq_ring_virt_mem = mz->addr;
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_ring_mem(physical): 0x%"PRIx64,
- (uint64_t)mz->iova);
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_ring_virt_mem: 0x%"PRIx64,
- (uint64_t)(uintptr_t)mz->addr);
-
*pvq = vq;
return 0;
-vring_addr_err:
- rte_memzone_free(mz);
-mz_reserve_err:
cookie_alloc_err:
rte_mempool_free(vq->mpool);
if (i != 0) {
@@ -453,31 +199,6 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
return -ENOMEM;
}
-static int
-virtio_crypto_ctrlq_setup(struct rte_cryptodev *dev, uint16_t queue_idx)
-{
- int ret;
- struct virtqueue *vq;
- struct virtio_crypto_hw *hw = dev->data->dev_private;
-
- /* if virtio device has started, do not touch the virtqueues */
- if (dev->data->dev_started)
- return 0;
-
- PMD_INIT_FUNC_TRACE();
-
- ret = virtio_crypto_queue_setup(dev, VTCRYPTO_CTRLQ, queue_idx,
- 0, SOCKET_ID_ANY, &vq);
- if (ret < 0) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("control vq initialization failed");
- return ret;
- }
-
- hw->cvq = vq;
-
- return 0;
-}
-
static void
virtio_crypto_free_queues(struct rte_cryptodev *dev)
{
@@ -486,10 +207,6 @@ virtio_crypto_free_queues(struct rte_cryptodev *dev)
PMD_INIT_FUNC_TRACE();
- /* control queue release */
- virtio_crypto_queue_release(hw->cvq);
- hw->cvq = NULL;
-
/* data queue release */
for (i = 0; i < hw->max_dataqueues; i++) {
virtio_crypto_queue_release(dev->data->queue_pairs[i]);
@@ -500,6 +217,15 @@ virtio_crypto_free_queues(struct rte_cryptodev *dev)
static int
virtio_crypto_dev_close(struct rte_cryptodev *dev __rte_unused)
{
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* control queue release */
+ if (hw->cvq)
+ virtio_crypto_queue_release(virtcrypto_cq_to_vq(hw->cvq));
+
+ hw->cvq = NULL;
return 0;
}
@@ -680,6 +406,99 @@ virtio_negotiate_features(struct virtio_crypto_hw *hw, uint64_t req_features)
return 0;
}
+static void
+virtio_control_queue_notify(struct virtqueue *vq, __rte_unused void *cookie)
+{
+ virtqueue_notify(vq);
+}
+
+static int
+virtio_crypto_init_queue(struct rte_cryptodev *dev, uint16_t queue_idx)
+{
+ char vq_name[VIRTQUEUE_MAX_NAME_SZ];
+ unsigned int vq_size;
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+ struct virtqueue *vq;
+ int queue_type = virtio_get_queue_type(hw, queue_idx);
+ int ret;
+ int numa_node = dev->device->numa_node;
+
+ PMD_INIT_LOG(INFO, "setting up queue: %u on NUMA node %d",
+ queue_idx, numa_node);
+
+ /*
+ * Read the virtqueue size from the Queue Size field
+ * Always power of 2 and if 0 virtqueue does not exist
+ */
+ vq_size = VTPCI_OPS(hw)->get_queue_num(hw, queue_idx);
+ PMD_INIT_LOG(DEBUG, "vq_size: %u", vq_size);
+ if (vq_size == 0) {
+ PMD_INIT_LOG(ERR, "virtqueue does not exist");
+ return -EINVAL;
+ }
+
+ if (!rte_is_power_of_2(vq_size)) {
+ PMD_INIT_LOG(ERR, "split virtqueue size is not power of 2");
+ return -EINVAL;
+ }
+
+ snprintf(vq_name, sizeof(vq_name), "dev%d_vq%d", dev->data->dev_id, queue_idx);
+
+ vq = virtcrypto_queue_alloc(hw, queue_idx, vq_size, numa_node, vq_name);
+ if (!vq) {
+ PMD_INIT_LOG(ERR, "virtqueue init failed");
+ return -ENOMEM;
+ }
+
+ hw->vqs[queue_idx] = vq;
+
+ if (queue_type == VTCRYPTO_CTRLQ) {
+ hw->cvq = &vq->cq;
+ vq->cq.notify_queue = &virtio_control_queue_notify;
+ }
+
+ if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+ PMD_INIT_LOG(ERR, "setup_queue failed");
+ ret = -EINVAL;
+ goto clean_vq;
+ }
+
+ return 0;
+
+clean_vq:
+ if (queue_type == VTCRYPTO_CTRLQ)
+ hw->cvq = NULL;
+ virtcrypto_queue_free(vq);
+ hw->vqs[queue_idx] = NULL;
+
+ return ret;
+}
+
+static int
+virtio_crypto_alloc_queues(struct rte_cryptodev *dev)
+{
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->max_dataqueues + 1;
+ uint16_t i;
+ int ret;
+
+ hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "failed to allocate vqs");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ ret = virtio_crypto_init_queue(dev, i);
+ if (ret < 0) {
+ virtio_crypto_free_queues(dev);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
/* reset device and renegotiate features if needed */
static int
virtio_crypto_init_device(struct rte_cryptodev *cryptodev,
@@ -805,8 +624,6 @@ static int
virtio_crypto_dev_configure(struct rte_cryptodev *cryptodev,
struct rte_cryptodev_config *config __rte_unused)
{
- struct virtio_crypto_hw *hw = cryptodev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
if (virtio_crypto_init_device(cryptodev,
@@ -817,10 +634,11 @@ virtio_crypto_dev_configure(struct rte_cryptodev *cryptodev,
* [0, 1, ... ,(config->max_dataqueues - 1)] are data queues
* config->max_dataqueues is the control queue
*/
- if (virtio_crypto_ctrlq_setup(cryptodev, hw->max_dataqueues) < 0) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("control queue setup error");
+ if (virtio_crypto_alloc_queues(cryptodev) < 0) {
+ VIRTIO_CRYPTO_DRV_LOG_ERR("failed to create virtqueues");
return -1;
}
+
virtio_crypto_ctrlq_start(cryptodev);
return 0;
@@ -955,7 +773,7 @@ virtio_crypto_clear_session(
uint64_t session_id = ctrl->u.destroy_session.session_id;
hw = dev->data->dev_private;
- vq = hw->cvq;
+ vq = virtcrypto_cq_to_vq(hw->cvq);
VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
"vq = %p", vq->vq_desc_head_idx, vq);
@@ -990,14 +808,14 @@ virtio_crypto_clear_session(
/* use only a single desc entry */
head = vq->vq_desc_head_idx;
- vq->vq_ring.desc[head].flags = VRING_DESC_F_INDIRECT;
- vq->vq_ring.desc[head].addr = malloc_phys_addr + desc_offset;
- vq->vq_ring.desc[head].len
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_INDIRECT;
+ vq->vq_split.ring.desc[head].addr = malloc_phys_addr + desc_offset;
+ vq->vq_split.ring.desc[head].len
= NUM_ENTRY_SYM_CLEAR_SESSION
* sizeof(struct vring_desc);
vq->vq_free_cnt -= needed;
- vq->vq_desc_head_idx = vq->vq_ring.desc[head].next;
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[head].next;
vq_update_avail_ring(vq, head);
vq_update_avail_idx(vq);
@@ -1008,27 +826,27 @@ virtio_crypto_clear_session(
virtqueue_notify(vq);
rte_rmb();
- while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
+ while (vq->vq_used_cons_idx == vq->vq_split.ring.used->idx) {
rte_rmb();
usleep(100);
}
- while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
+ while (vq->vq_used_cons_idx != vq->vq_split.ring.used->idx) {
uint32_t idx, desc_idx, used_idx;
struct vring_used_elem *uep;
used_idx = (uint32_t)(vq->vq_used_cons_idx
& (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
idx = (uint32_t) uep->id;
desc_idx = idx;
- while (vq->vq_ring.desc[desc_idx].flags
+ while (vq->vq_split.ring.desc[desc_idx].flags
& VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_ring.desc[desc_idx].next;
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
vq->vq_free_cnt++;
}
- vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
vq->vq_desc_head_idx = idx;
vq->vq_used_cons_idx++;
vq->vq_free_cnt++;
@@ -1382,14 +1200,23 @@ virtio_crypto_sym_configure_session(
int ret;
struct virtio_crypto_session *session;
struct virtio_crypto_op_ctrl_req *ctrl_req;
+ struct virtio_crypto_session_input *input;
enum virtio_crypto_cmd_id cmd_id;
uint8_t cipher_key_data[VIRTIO_CRYPTO_MAX_KEY_SIZE] = {0};
uint8_t auth_key_data[VIRTIO_CRYPTO_MAX_KEY_SIZE] = {0};
struct virtio_crypto_hw *hw;
- struct virtqueue *control_vq;
+ struct virtio_pmd_ctrl *ctrl;
+ struct rte_crypto_cipher_xform *cipher_xform = NULL;
+ int dlen[2], dnum;
PMD_INIT_FUNC_TRACE();
+ cipher_xform = virtio_crypto_get_cipher_xform(xform);
+ if (cipher_xform == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("No cipher xform found");
+ return -1;
+ }
+
ret = virtio_crypto_check_sym_configure_session_paras(dev, xform,
sess);
if (ret < 0) {
@@ -1398,13 +1225,23 @@ virtio_crypto_sym_configure_session(
}
session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
memset(session, 0, sizeof(struct virtio_crypto_session));
- ctrl_req = &session->ctrl;
+ ctrl = &session->ctrl;
+ ctrl_req = &ctrl->hdr;
ctrl_req->header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
/* FIXME: support multiqueue */
ctrl_req->header.queue_id = 0;
hw = dev->data->dev_private;
- control_vq = hw->cvq;
+
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ ctrl_req->header.algo = VIRTIO_CRYPTO_CIPHER_AES_CBC;
+ break;
+ default:
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Crypto: Unsupported "
+ "Cipher alg %u", cipher_xform->algo);
+ return -1;
+ }
cmd_id = virtio_crypto_get_chain_order(xform);
if (cmd_id == VIRTIO_CRYPTO_CMD_CIPHER_HASH)
@@ -1416,7 +1253,13 @@ virtio_crypto_sym_configure_session(
switch (cmd_id) {
case VIRTIO_CRYPTO_CMD_CIPHER_HASH:
- case VIRTIO_CRYPTO_CMD_HASH_CIPHER:
+ case VIRTIO_CRYPTO_CMD_HASH_CIPHER: {
+ struct rte_crypto_auth_xform *auth_xform = NULL;
+ struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+ cipher_xform = virtio_crypto_get_cipher_xform(xform);
+ auth_xform = virtio_crypto_get_auth_xform(xform);
+
ctrl_req->u.sym_create_session.op_type
= VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING;
@@ -1427,15 +1270,19 @@ virtio_crypto_sym_configure_session(
"padding sym op ctrl req failed");
goto error_out;
}
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- cipher_key_data, auth_key_data, session);
- if (ret < 0) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR(
- "create session failed: %d", ret);
- goto error_out;
- }
+
+ dlen[0] = cipher_xform->key.length;
+ memcpy(ctrl->data, cipher_key_data, dlen[0]);
+ dlen[1] = auth_xform->key.length;
+ memcpy(ctrl->data + dlen[0], auth_key_data, dlen[1]);
+ dnum = 2;
break;
- case VIRTIO_CRYPTO_CMD_CIPHER:
+ }
+ case VIRTIO_CRYPTO_CMD_CIPHER: {
+ struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+ cipher_xform = virtio_crypto_get_cipher_xform(xform);
+
ctrl_req->u.sym_create_session.op_type
= VIRTIO_CRYPTO_SYM_OP_CIPHER;
ret = virtio_crypto_sym_pad_op_ctrl_req(ctrl_req, xform,
@@ -1445,21 +1292,42 @@ virtio_crypto_sym_configure_session(
"padding sym op ctrl req failed");
goto error_out;
}
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- cipher_key_data, NULL, session);
- if (ret < 0) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR(
- "create session failed: %d", ret);
- goto error_out;
- }
+
+ dlen[0] = cipher_xform->key.length;
+ memcpy(ctrl->data, cipher_key_data, dlen[0]);
+ dnum = 1;
break;
+ }
default:
VIRTIO_CRYPTO_SESSION_LOG_ERR(
"Unsupported operation chain order parameter");
goto error_out;
}
- return 0;
+ input = &ctrl->input;
+ input->status = VIRTIO_CRYPTO_ERR;
+ input->session_id = ~0ULL;
+
+ ret = virtio_crypto_send_command(hw->cvq, ctrl, dlen, dnum);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
+ goto error_out;
+ }
+
+ ctrl = hw->cvq->hdr_mz->addr;
+ input = &ctrl->input;
+ if (input->status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
+ "status=%u, session_id=%" PRIu64 "",
+ input->status, input->session_id);
+ goto error_out;
+ } else {
+ session->session_id = input->session_id;
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
+ "session_id=%" PRIu64 "", input->session_id);
+ }
+
+ return 0;
error_out:
return -1;
}
@@ -1575,10 +1443,12 @@ virtio_crypto_asym_configure_session(
{
struct virtio_crypto_akcipher_session_para *para;
struct virtio_crypto_op_ctrl_req *ctrl_req;
+ struct virtio_crypto_session_input *input;
struct virtio_crypto_session *session;
struct virtio_crypto_hw *hw;
- struct virtqueue *control_vq;
+ struct virtio_pmd_ctrl *ctrl;
uint8_t *key = NULL;
+ int dlen[1];
int ret;
PMD_INIT_FUNC_TRACE();
@@ -1592,7 +1462,8 @@ virtio_crypto_asym_configure_session(
session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
memset(session, 0, sizeof(struct virtio_crypto_session));
- ctrl_req = &session->ctrl;
+ ctrl = &session->ctrl;
+ ctrl_req = &ctrl->hdr;
ctrl_req->header.opcode = VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION;
/* FIXME: support multiqueue */
ctrl_req->header.queue_id = 0;
@@ -1648,15 +1519,33 @@ virtio_crypto_asym_configure_session(
para->algo = VIRTIO_CRYPTO_NO_AKCIPHER;
}
+ dlen[0] = ret;
+ memcpy(ctrl->data, key, dlen[0]);
+
+ input = &ctrl->input;
+ input->status = VIRTIO_CRYPTO_ERR;
+ input->session_id = ~0ULL;
+
hw = dev->data->dev_private;
- control_vq = hw->cvq;
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- key, NULL, session);
+ ret = virtio_crypto_send_command(hw->cvq, ctrl, dlen, 1);
if (ret < 0) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
goto error_out;
}
+ ctrl = hw->cvq->hdr_mz->addr;
+ input = &ctrl->input;
+ if (input->status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
+ "status=%u, session_id=%" PRIu64 "",
+ input->status, input->session_id);
+ goto error_out;
+ } else {
+ session->session_id = input->session_id;
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
+ "session_id=%" PRIu64 "", input->session_id);
+ }
+
return 0;
error_out:
return -1;
diff --git a/drivers/crypto/virtio/virtio_cvq.c b/drivers/crypto/virtio/virtio_cvq.c
new file mode 100644
index 0000000000..3f79c0c68c
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_cvq.c
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Marvell
+ */
+
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_errno.h>
+
+#include "virtio_cvq.h"
+#include "virtqueue.h"
+
+static struct virtio_pmd_ctrl *
+virtio_send_command(struct virtcrypto_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq = virtcrypto_cq_to_vq(cvq);
+ uint32_t head, i;
+ int k, sum = 0;
+
+ head = vq->vq_desc_head_idx;
+
+ /*
+ * Format is enforced in qemu code:
+ * One TX packet for header;
+ * At least one TX packet per argument;
+ * One RX packet for ACK.
+ */
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[head].addr = cvq->hdr_mem;
+ vq->vq_split.ring.desc[head].len = sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_free_cnt--;
+ i = vq->vq_split.ring.desc[head].next;
+
+ for (k = 0; k < dnum; k++) {
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[i].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req)
+ + sizeof(ctrl->input) + sizeof(uint8_t) * sum;
+ vq->vq_split.ring.desc[i].len = dlen[k];
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ i = vq->vq_split.ring.desc[i].next;
+ }
+
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE;
+ vq->vq_split.ring.desc[i].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_split.ring.desc[i].len = sizeof(ctrl->input);
+ vq->vq_free_cnt--;
+
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[i].next;
+
+ vq_update_avail_ring(vq, head);
+ vq_update_avail_idx(vq);
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
+
+ cvq->notify_queue(vq, cvq->notify_cookie);
+
+ while (virtqueue_nused(vq) == 0)
+ usleep(100);
+
+ while (virtqueue_nused(vq)) {
+ uint32_t idx, desc_idx, used_idx;
+ struct vring_used_elem *uep;
+
+ used_idx = (uint32_t)(vq->vq_used_cons_idx
+ & (vq->vq_nentries - 1));
+ uep = &vq->vq_split.ring.used->ring[used_idx];
+ idx = (uint32_t)uep->id;
+ desc_idx = idx;
+
+ while (vq->vq_split.ring.desc[desc_idx].flags &
+ VRING_DESC_F_NEXT) {
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
+ vq->vq_free_cnt++;
+ }
+
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_desc_head_idx = idx;
+
+ vq->vq_used_cons_idx++;
+ vq->vq_free_cnt++;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d vq->vq_desc_head_idx=%d",
+ vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+ result = cvq->hdr_mz->addr;
+ return result;
+}
+
+int
+virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ uint8_t status = ~0;
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq;
+
+ ctrl->input.status = status;
+
+ if (!cvq) {
+ PMD_INIT_LOG(ERR, "Control queue is not supported.");
+ return -1;
+ }
+
+ rte_spinlock_lock(&cvq->lock);
+ vq = virtcrypto_cq_to_vq(cvq);
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, "
+ "vq->hw->cvq = %p vq = %p",
+ vq->vq_desc_head_idx, status, vq->hw->cvq, vq);
+
+ if (vq->vq_free_cnt < dnum + 2 || dnum < 1) {
+ rte_spinlock_unlock(&cvq->lock);
+ return -1;
+ }
+
+ memcpy(cvq->hdr_mz->addr, ctrl, sizeof(struct virtio_pmd_ctrl));
+ result = virtio_send_command(cvq, ctrl, dlen, dnum);
+
+ rte_spinlock_unlock(&cvq->lock);
+ return result->input.status;
+}
+
diff --git a/drivers/crypto/virtio/virtio_cvq.h b/drivers/crypto/virtio/virtio_cvq.h
new file mode 100644
index 0000000000..c24dcbfb2b
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_cvq.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Marvell
+ */
+
+#ifndef _VIRTIO_CVQ_H_
+#define _VIRTIO_CVQ_H_
+
+#include <rte_spinlock.h>
+#include <virtio_crypto.h>
+
+struct virtqueue;
+
+struct virtcrypto_ctl {
+ const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
+ rte_iova_t hdr_mem; /**< hdr for each xmit packet */
+ rte_spinlock_t lock; /**< spinlock for control queue. */
+ void (*notify_queue)(struct virtqueue *vq, void *cookie); /**< notify ops. */
+ void *notify_cookie; /**< cookie for notify ops */
+};
+
+#define VIRTIO_MAX_CTRL_DATA 2048
+
+struct virtio_pmd_ctrl {
+ struct virtio_crypto_op_ctrl_req hdr;
+ struct virtio_crypto_session_input input;
+ uint8_t data[VIRTIO_MAX_CTRL_DATA];
+};
+
+int
+virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num);
+
+#endif /* _VIRTIO_CVQ_H_ */
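The struct virtio_pmd_ctrl above maps one-to-one onto the descriptor chain virtio_send_command() builds from hdr_mem: a read-only descriptor for hdr, one read-only descriptor per data blob, and a write-only descriptor for input that the device fills. A condensed caller-side sketch following the session-creation paths in this patch (hw, key and key_len assumed in scope, error handling abbreviated):

    struct virtio_pmd_ctrl ctrl = { 0 }, *result;
    uint64_t session_id;
    int dlen[1], ret;

    ctrl.hdr.header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
    ctrl.input.status = VIRTIO_CRYPTO_ERR;  /* device overwrites on success */
    ctrl.input.session_id = ~0ULL;

    dlen[0] = key_len;                      /* one data blob: the key */
    memcpy(ctrl.data, key, key_len);        /* must fit VIRTIO_MAX_CTRL_DATA */

    ret = virtio_crypto_send_command(hw->cvq, &ctrl, dlen, 1);

    /* the reply lands in the control queue's header memzone, not in ctrl */
    result = hw->cvq->hdr_mz->addr;
    if (ret < 0 || result->input.status != VIRTIO_CRYPTO_OK)
        return -1;                          /* backend rejected the session */
    session_id = result->input.session_id;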
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 41949c3d13..7e94c6a3c5 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -176,8 +176,7 @@ struct virtio_pci_ops {
};
struct virtio_crypto_hw {
- /* control queue */
- struct virtqueue *cvq;
+ struct virtqueue **vqs;
uint16_t dev_id;
uint16_t max_dataqueues;
uint64_t req_guest_features;
@@ -190,6 +189,9 @@ struct virtio_crypto_hw {
struct virtio_pci_common_cfg *common_cfg;
struct virtio_crypto_config *dev_cfg;
const struct rte_cryptodev_capabilities *virtio_dev_capabilities;
+ uint8_t weak_barriers;
+ struct virtcrypto_ctl *cvq;
+ bool use_va;
};
/*
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index 55839279fd..e5b0ad74d2 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -59,6 +59,7 @@ struct vring_used {
struct vring {
unsigned int num;
+ rte_iova_t desc_iova;
struct vring_desc *desc;
struct vring_avail *avail;
struct vring_used *used;
@@ -111,17 +112,24 @@ vring_size(unsigned int num, unsigned long align)
}
static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
- unsigned long align)
+vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
{
vr->num = num;
vr->desc = (struct vring_desc *) p;
+ vr->desc_iova = iova;
vr->avail = (struct vring_avail *) (p +
num * sizeof(struct vring_desc));
vr->used = (void *)
RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
}
+static inline void
+vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
+{
+ vring_init_split(vr, p, 0, align, num);
+}
+
/*
* The following is used with VIRTIO_RING_F_EVENT_IDX.
* Assuming a given event_idx value from the other size, if we have
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index c456dc327e..0e8a716917 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -14,13 +14,13 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
struct vq_desc_extra *dxp;
uint16_t desc_idx_last = desc_idx;
- dp = &vq->vq_ring.desc[desc_idx];
+ dp = &vq->vq_split.ring.desc[desc_idx];
dxp = &vq->vq_descx[desc_idx];
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt + dxp->ndescs);
if ((dp->flags & VRING_DESC_F_INDIRECT) == 0) {
while (dp->flags & VRING_DESC_F_NEXT) {
desc_idx_last = dp->next;
- dp = &vq->vq_ring.desc[dp->next];
+ dp = &vq->vq_split.ring.desc[dp->next];
}
}
dxp->ndescs = 0;
@@ -33,7 +33,7 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END) {
vq->vq_desc_head_idx = desc_idx;
} else {
- dp_tail = &vq->vq_ring.desc[vq->vq_desc_tail_idx];
+ dp_tail = &vq->vq_split.ring.desc[vq->vq_desc_tail_idx];
dp_tail->next = desc_idx;
}
@@ -56,7 +56,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq,
for (i = 0; i < num ; i++) {
used_idx = (uint16_t)(vq->vq_used_cons_idx
& (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
desc_idx = (uint16_t)uep->id;
cop = (struct rte_crypto_op *)
vq->vq_descx[desc_idx].crypto_op;
@@ -115,7 +115,7 @@ virtqueue_crypto_sym_pkt_header_arrange(
{
struct rte_crypto_sym_op *sym_op = cop->sym;
struct virtio_crypto_op_data_req *req_data = data;
- struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl.hdr;
struct virtio_crypto_sym_create_session_req *sym_sess_req =
&ctrl->u.sym_create_session;
struct virtio_crypto_alg_chain_session_para *chain_para =
@@ -304,7 +304,7 @@ virtqueue_crypto_sym_enqueue_xmit(
desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
/* indirect vring: digest result */
- para = &(session->ctrl.u.sym_create_session.u.chain.para);
+ para = &(session->ctrl.hdr.u.sym_create_session.u.chain.para);
if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN)
hash_result_len = para->u.hash_param.hash_result_len;
if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)
@@ -327,7 +327,7 @@ virtqueue_crypto_sym_enqueue_xmit(
dxp->ndescs = needed;
/* use a single buffer */
- start_dp = txvq->vq_ring.desc;
+ start_dp = txvq->vq_split.ring.desc;
start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
indirect_vring_addr_offset;
start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
@@ -351,7 +351,7 @@ virtqueue_crypto_asym_pkt_header_arrange(
{
struct rte_crypto_asym_op *asym_op = cop->asym;
struct virtio_crypto_op_data_req *req_data = data;
- struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl.hdr;
req_data->header.session_id = session->session_id;
@@ -517,7 +517,7 @@ virtqueue_crypto_asym_enqueue_xmit(
dxp->ndescs = needed;
/* use a single buffer */
- start_dp = txvq->vq_ring.desc;
+ start_dp = txvq->vq_split.ring.desc;
start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
indirect_vring_addr_offset;
start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
@@ -560,25 +560,14 @@ static int
virtio_crypto_vring_start(struct virtqueue *vq)
{
struct virtio_crypto_hw *hw = vq->hw;
- int i, size = vq->vq_nentries;
- struct vring *vr = &vq->vq_ring;
uint8_t *ring_mem = vq->vq_ring_virt_mem;
PMD_INIT_FUNC_TRACE();
- vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
- vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
- vq->vq_free_cnt = vq->vq_nentries;
-
- /* Chain all the descriptors in the ring with an END */
- for (i = 0; i < size - 1; i++)
- vr->desc[i].next = (uint16_t)(i + 1);
- vr->desc[i].next = VQ_RING_DESC_CHAIN_END;
-
- /*
- * Disable device(host) interrupting guest
- */
- virtqueue_disable_intr(vq);
+ if (ring_mem == NULL) {
+ VIRTIO_CRYPTO_INIT_LOG_ERR("virtqueue ring memory is NULL");
+ return -EINVAL;
+ }
/*
* Set guest physical address of the virtqueue
@@ -599,8 +588,9 @@ virtio_crypto_ctrlq_start(struct rte_cryptodev *dev)
struct virtio_crypto_hw *hw = dev->data->dev_private;
if (hw->cvq) {
- virtio_crypto_vring_start(hw->cvq);
- VIRTQUEUE_DUMP((struct virtqueue *)hw->cvq);
+ rte_spinlock_init(&hw->cvq->lock);
+ virtio_crypto_vring_start(virtcrypto_cq_to_vq(hw->cvq));
+ VIRTQUEUE_DUMP(virtcrypto_cq_to_vq(hw->cvq));
}
}
diff --git a/drivers/crypto/virtio/virtio_rxtx.h b/drivers/crypto/virtio/virtio_rxtx.h
new file mode 100644
index 0000000000..1d5e5b0132
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_rxtx.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Marvell.
+ */
+
+#ifndef _VIRTIO_RXTX_H_
+#define _VIRTIO_RXTX_H_
+
+struct virtcrypto_data {
+ const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
+ rte_iova_t hdr_mem; /**< IOVA of the hdr memzone. */
+};
+
+#endif /* _VIRTIO_RXTX_H_ */
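Looking ahead to virtio_alloc_queue_headers() in virtqueue.c, dq.hdr_mz is sized as one struct virtio_crypto_op_cookie per ring entry, so per-slot addressing is a fixed-stride offset from dq.hdr_mem. A sketch of the implied addressing (dq_cookie_iova is an assumed helper, not part of the patch):

static inline rte_iova_t
dq_cookie_iova(struct virtqueue *vq, uint16_t slot)
{
	/* one op cookie per ring entry, laid out back to back */
	return vq->dq.hdr_mem +
		(rte_iova_t)slot * sizeof(struct virtio_crypto_op_cookie);
}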
diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c
index 3e2db1ebd2..3a9ec98b18 100644
--- a/drivers/crypto/virtio/virtqueue.c
+++ b/drivers/crypto/virtio/virtqueue.c
@@ -7,7 +7,9 @@
#include <rte_mbuf.h>
#include <rte_crypto.h>
#include <rte_malloc.h>
+#include <rte_errno.h>
+#include "virtio_cryptodev.h"
#include "virtqueue.h"
void
@@ -18,7 +20,7 @@ virtqueue_disable_intr(struct virtqueue *vq)
* not to interrupt when it consumes packets
* Note: this is only considered a hint to the host
*/
- vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+ vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}
void
@@ -32,10 +34,193 @@ virtqueue_detatch_unused(struct virtqueue *vq)
for (idx = 0; idx < vq->vq_nentries; idx++) {
cop = vq->vq_descx[idx].crypto_op;
if (cop) {
- rte_pktmbuf_free(cop->sym->m_src);
- rte_pktmbuf_free(cop->sym->m_dst);
+ if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ rte_pktmbuf_free(cop->sym->m_src);
+ rte_pktmbuf_free(cop->sym->m_dst);
+ }
+
rte_crypto_op_free(cop);
vq->vq_descx[idx].crypto_op = NULL;
}
}
}
+
+static void
+virtio_init_vring(struct virtqueue *vq)
+{
+ int size = vq->vq_nentries;
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+ struct vring *vr = &vq->vq_split.ring;
+
+ PMD_INIT_FUNC_TRACE();
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+
+ vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_split(vr->desc, size);
+
+ /*
+ * Disable device(host) interrupting guest
+ */
+ virtqueue_disable_intr(vq);
+}
+
+static int
+virtio_alloc_queue_headers(struct virtqueue *vq, int numa_node, const char *name)
+{
+ char hdr_name[VIRTQUEUE_MAX_NAME_SZ];
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ ssize_t size;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTCRYPTO_DATAQ:
+ /*
+ * Op cookie for every ring element. This memory can be optimized
+ * based on descriptor requirements. For example, if a descriptor
+ * is indirect, then the cookie can be shared among all the
+ * descriptors in the chain.
+ */
+ size = vq->vq_nentries * sizeof(struct virtio_crypto_op_cookie);
+ hdr_mz = &vq->dq.hdr_mz;
+ hdr_mem = &vq->dq.hdr_mem;
+ break;
+ case VTCRYPTO_CTRLQ:
+ /* One control operation at a time in control queue */
+ size = sizeof(struct virtio_pmd_ctrl);
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ default:
+ return 0;
+ }
+
+ snprintf(hdr_name, sizeof(hdr_name), "%s_hdr", name);
+ *hdr_mz = rte_memzone_reserve_aligned(hdr_name, size, numa_node,
+ RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE);
+ if (*hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ *hdr_mz = rte_memzone_lookup(hdr_name);
+ if (*hdr_mz == NULL)
+ return -ENOMEM;
+ }
+
+ memset((*hdr_mz)->addr, 0, size);
+
+ if (vq->hw->use_va)
+ *hdr_mem = (uintptr_t)(*hdr_mz)->addr;
+ else
+ *hdr_mem = (uintptr_t)(*hdr_mz)->iova;
+
+ return 0;
+}
+
+static void
+virtio_free_queue_headers(struct virtqueue *vq)
+{
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTCRYPTO_DATAQ:
+ hdr_mz = &vq->dq.hdr_mz;
+ hdr_mem = &vq->dq.hdr_mem;
+ break;
+ case VTCRYPTO_CTRLQ:
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ default:
+ return;
+ }
+
+ rte_memzone_free(*hdr_mz);
+ *hdr_mz = NULL;
+ *hdr_mem = 0;
+}
+
+struct virtqueue *
+virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index, uint16_t num,
+ int node, const char *name)
+{
+ struct virtqueue *vq;
+ const struct rte_memzone *mz;
+ unsigned int size;
+
+ size = sizeof(*vq) + num * sizeof(struct vq_desc_extra);
+ size = RTE_ALIGN_CEIL(size, RTE_CACHE_LINE_SIZE);
+
+ vq = rte_zmalloc_socket(name, size, RTE_CACHE_LINE_SIZE, node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return NULL;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq: %p", vq);
+ vq->hw = hw;
+ vq->vq_queue_index = index;
+ vq->vq_nentries = num;
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(num, VIRTIO_PCI_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(name, vq->vq_ring_size, node,
+ RTE_MEMZONE_IOVA_CONTIG, VIRTIO_PCI_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(name);
+ if (mz == NULL)
+ goto free_vq;
+ }
+
+ memset(mz->addr, 0, mz->len);
+ vq->mz = mz;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ if (hw->use_va)
+ vq->vq_ring_mem = (uintptr_t)mz->addr;
+ else
+ vq->vq_ring_mem = mz->iova;
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64, vq->vq_ring_mem);
+ PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: %p", vq->vq_ring_virt_mem);
+
+ virtio_init_vring(vq);
+
+ if (virtio_alloc_queue_headers(vq, node, name)) {
+ PMD_INIT_LOG(ERR, "Failed to alloc queue headers");
+ goto free_mz;
+ }
+
+ return vq;
+
+free_mz:
+ rte_memzone_free(mz);
+free_vq:
+ rte_free(vq);
+
+ return NULL;
+}
+
+void
+virtcrypto_queue_free(struct virtqueue *vq)
+{
+ virtio_free_queue_headers(vq);
+ rte_memzone_free(vq->mz);
+ rte_free(vq);
+}
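Taken together, a queue-pair setup path is expected to pair these entry points as below (sketch only; setup_dataqueue is a hypothetical wrapper and the ring depth is an assumed value, the real one comes from device config):

static struct virtqueue *
setup_dataqueue(struct virtio_crypto_hw *hw, uint16_t idx, int socket_id)
{
	char name[VIRTQUEUE_MAX_NAME_SZ];

	snprintf(name, sizeof(name), "dev%u_dataqueue%u", hw->dev_id, idx);

	/* reserves the vq, the vring memzone and the per-queue headers;
	 * all of it is released again by virtcrypto_queue_free() */
	return virtcrypto_queue_alloc(hw, idx, 128 /* assumed depth */,
			socket_id, name);
}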
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
index cb08bea94f..b4a0ed3553 100644
--- a/drivers/crypto/virtio/virtqueue.h
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -12,10 +12,12 @@
#include <rte_memzone.h>
#include <rte_mempool.h>
+#include "virtio_cvq.h"
#include "virtio_pci.h"
#include "virtio_ring.h"
#include "virtio_logs.h"
#include "virtio_crypto.h"
+#include "virtio_rxtx.h"
struct rte_mbuf;
@@ -46,11 +48,26 @@ struct vq_desc_extra {
void *crypto_op;
void *cookie;
uint16_t ndescs;
+ uint16_t next;
};
+#define virtcrypto_dq_to_vq(dvq) container_of(dvq, struct virtqueue, dq)
+#define virtcrypto_cq_to_vq(cvq) container_of(cvq, struct virtqueue, cq)
+
struct virtqueue {
/**< virtio_crypto_hw structure pointer. */
struct virtio_crypto_hw *hw;
+ union {
+ struct {
+ /**< vring keeping desc, used and avail */
+ struct vring ring;
+ } vq_split;
+ };
+ union {
+ struct virtcrypto_data dq;
+ struct virtcrypto_ctl cq;
+ };
+
/**< mem zone to populate RX ring. */
const struct rte_memzone *mz;
/**< memzone to populate hdr and request. */
@@ -62,7 +79,6 @@ struct virtqueue {
unsigned int vq_ring_size;
phys_addr_t vq_ring_mem; /**< physical address of vring */
- struct vring vq_ring; /**< vring keeping desc, used and avail */
uint16_t vq_free_cnt; /**< num of desc available */
uint16_t vq_nentries; /**< vring desc numbers */
@@ -101,6 +117,11 @@ void virtqueue_disable_intr(struct virtqueue *vq);
*/
void virtqueue_detatch_unused(struct virtqueue *vq);
+struct virtqueue *virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index,
+ uint16_t num, int node, const char *name);
+
+void virtcrypto_queue_free(struct virtqueue *vq);
+
static inline int
virtqueue_full(const struct virtqueue *vq)
{
@@ -108,13 +129,13 @@ virtqueue_full(const struct virtqueue *vq)
}
#define VIRTQUEUE_NUSED(vq) \
- ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
+ ((uint16_t)((vq)->vq_split.ring.used->idx - (vq)->vq_used_cons_idx))
static inline void
vq_update_avail_idx(struct virtqueue *vq)
{
virtio_wmb();
- vq->vq_ring.avail->idx = vq->vq_avail_idx;
+ vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
}
static inline void
@@ -129,15 +150,15 @@ vq_update_avail_ring(struct virtqueue *vq, uint16_t desc_idx)
* descriptor.
*/
avail_idx = (uint16_t)(vq->vq_avail_idx & (vq->vq_nentries - 1));
- if (unlikely(vq->vq_ring.avail->ring[avail_idx] != desc_idx))
- vq->vq_ring.avail->ring[avail_idx] = desc_idx;
+ if (unlikely(vq->vq_split.ring.avail->ring[avail_idx] != desc_idx))
+ vq->vq_split.ring.avail->ring[avail_idx] = desc_idx;
vq->vq_avail_idx++;
}
static inline int
virtqueue_kick_prepare(struct virtqueue *vq)
{
- return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
+ return !(vq->vq_split.ring.used->flags & VRING_USED_F_NO_NOTIFY);
}
static inline void
@@ -151,21 +172,69 @@ virtqueue_notify(struct virtqueue *vq)
VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq);
}
+/* Chain all the descriptors in the ring with an END */
+static inline void
+vring_desc_init_split(struct vring_desc *dp, uint16_t n)
+{
+ uint16_t i;
+
+ for (i = 0; i < n - 1; i++)
+ dp[i].next = (uint16_t)(i + 1);
+ dp[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
+static inline int
+virtio_get_queue_type(struct virtio_crypto_hw *hw, uint16_t vq_idx)
+{
+ if (vq_idx == hw->max_dataqueues)
+ return VTCRYPTO_CTRLQ;
+ else
+ return VTCRYPTO_DATAQ;
+}
+
+/* virtqueue_nused has a load-acquire or rte_io_rmb inside */
+static inline uint16_t
+virtqueue_nused(const struct virtqueue *vq)
+{
+ uint16_t idx;
+
+ if (vq->hw->weak_barriers) {
+ /**
+ * x86 prefers using rte_smp_rmb over rte_atomic_load_explicit, as it
+ * reports slightly better perf, which comes from the branch
+ * saved by the compiler.
+ * The if and else branches are identical, with the smp and io
+ * barriers both defined as compiler barriers on x86.
+ */
+#ifdef RTE_ARCH_X86_64
+ idx = vq->vq_split.ring.used->idx;
+ rte_smp_rmb();
+#else
+ idx = rte_atomic_load_explicit(&(vq)->vq_split.ring.used->idx,
+ rte_memory_order_acquire);
+#endif
+ } else {
+ idx = vq->vq_split.ring.used->idx;
+ rte_io_rmb();
+ }
+ return idx - vq->vq_used_cons_idx;
+}
+
/**
* Dump virtqueue internal structures, for debug purpose only.
*/
#define VIRTQUEUE_DUMP(vq) do { \
uint16_t used_idx, nused; \
- used_idx = (vq)->vq_ring.used->idx; \
+ used_idx = (vq)->vq_split.ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
VIRTIO_CRYPTO_INIT_LOG_DBG(\
"VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
" avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
" avail.flags=0x%x; used.flags=0x%x", \
(vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
- (vq)->vq_desc_head_idx, (vq)->vq_ring.avail->idx, \
- (vq)->vq_used_cons_idx, (vq)->vq_ring.used->idx, \
- (vq)->vq_ring.avail->flags, (vq)->vq_ring.used->flags); \
+ (vq)->vq_desc_head_idx, (vq)->vq_split.ring.avail->idx, \
+ (vq)->vq_used_cons_idx, (vq)->vq_split.ring.used->idx, \
+ (vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
} while (0)
#endif /* _VIRTQUEUE_H_ */
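The acquire semantics in virtqueue_nused() exist so that the snapshot of used->idx is ordered before any read of the used ring entries it covers. A consumer loop built on it would look like this (sketch; virtqueue_dequeue_burst_rx() is the static helper in virtio_rxtx.c, named purely for illustration):

static uint16_t
poll_split_ring(struct virtqueue *vq, struct rte_crypto_op **rx_pkts,
		uint16_t nb_pkts)
{
	/* acquire-ordered (or rmb-fenced) snapshot of used->idx */
	uint16_t num = RTE_MIN(virtqueue_nused(vq), nb_pkts);

	if (num == 0)
		return 0;

	/* used ring entries behind the snapshot are now safe to read */
	return virtqueue_dequeue_burst_rx(vq, rx_pkts, num);
}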
--
2.25.1
* [v1 11/16] crypto/virtio: add packed ring support
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (9 preceding siblings ...)
2024-12-24 7:37 ` [v1 10/16] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 12/16] common/virtio: common virtio log Gowrishankar Muthukrishnan
` (8 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Add packed virtqueue (VIRTIO_F_RING_PACKED) support to both the control and data queue paths of the virtio crypto PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/virtio/virtio_cryptodev.c | 125 +++++++
drivers/crypto/virtio/virtio_cryptodev.h | 13 +-
drivers/crypto/virtio/virtio_cvq.c | 103 +++++-
drivers/crypto/virtio/virtio_pci.h | 25 ++
drivers/crypto/virtio/virtio_ring.h | 59 ++-
drivers/crypto/virtio/virtio_rxtx.c | 442 ++++++++++++++++++++++-
drivers/crypto/virtio/virtqueue.c | 50 ++-
drivers/crypto/virtio/virtqueue.h | 132 ++++++-
8 files changed, 920 insertions(+), 29 deletions(-)
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 9a11cbe90a..d3db4f898e 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -869,6 +869,125 @@ virtio_crypto_clear_session(
rte_free(ctrl);
}
+static void
+virtio_crypto_clear_session_packed(
+ struct rte_cryptodev *dev,
+ struct virtio_crypto_op_ctrl_req *ctrl)
+{
+ struct virtio_crypto_hw *hw;
+ struct virtqueue *vq;
+ struct vring_packed_desc *desc;
+ uint8_t *status;
+ uint8_t needed = 1;
+ uint32_t head;
+ uint64_t malloc_phys_addr;
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ uint64_t session_id = ctrl->u.destroy_session.session_id;
+ uint16_t flags;
+ uint8_t nb_descs = 0;
+
+ hw = dev->data->dev_private;
+ vq = virtcrypto_cq_to_vq(hw->cvq);
+ head = vq->vq_avail_idx;
+ flags = vq->vq_packed.cached_flags;
+
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
+ "vq = %p", vq->vq_desc_head_idx, vq);
+
+ if (vq->vq_free_cnt < needed) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR(
+ "vq->vq_free_cnt = %d is less than %d, "
+ "not enough", vq->vq_free_cnt, needed);
+ return;
+ }
+
+ malloc_phys_addr = rte_malloc_virt2iova(ctrl);
+
+ /* status part */
+ status = &(((struct virtio_crypto_inhdr *)
+ ((uint8_t *)ctrl + len_op_ctrl_req))->status);
+ *status = VIRTIO_CRYPTO_ERR;
+
+ /* indirect desc vring part */
+ desc = vq->vq_packed.ring.desc;
+
+ /* ctrl request part */
+ desc[head].addr = malloc_phys_addr;
+ desc[head].len = len_op_ctrl_req;
+ desc[head].flags = VRING_DESC_F_NEXT | vq->vq_packed.cached_flags;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ /* status part */
+ desc[vq->vq_avail_idx].addr = malloc_phys_addr + len_op_ctrl_req;
+ desc[vq->vq_avail_idx].len = len_inhdr;
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ virtqueue_store_flags_packed(&desc[head], VRING_DESC_F_NEXT | flags,
+ vq->hw->weak_barriers);
+
+ virtio_wmb(vq->hw->weak_barriers);
+ virtqueue_notify(vq);
+
+ /* wait for used desc in virtqueue
+ * desc_is_used has a load-acquire or rte_io_rmb inside
+ */
+ rte_rmb();
+ while (!desc_is_used(&desc[head], vq)) {
+ rte_rmb();
+ usleep(100);
+ }
+
+ /* now get used descriptors */
+ vq->vq_free_cnt += nb_descs;
+ vq->vq_used_cons_idx += nb_descs;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d "
+ "vq->vq_queue_idx=%d "
+ "vq->vq_avail_idx=%d "
+ "vq->vq_used_cons_idx=%d "
+ "vq->vq_packed.cached_flags=0x%x "
+ "vq->vq_packed.used_wrap_counter=%d",
+ vq->vq_free_cnt,
+ vq->vq_queue_index,
+ vq->vq_avail_idx,
+ vq->vq_used_cons_idx,
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
+
+ if (*status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Close session failed "
+ "status=%"PRIu32", session_id=%"PRIu64"",
+ *status, session_id);
+ rte_free(ctrl);
+ return;
+ }
+
+ VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_free_cnt=%d "
+ "vq->vq_desc_head_idx=%d",
+ vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Close session %"PRIu64" successfully ",
+ session_id);
+
+ rte_free(ctrl);
+}
+
static void
virtio_crypto_sym_clear_session(
struct rte_cryptodev *dev,
@@ -906,6 +1025,9 @@ virtio_crypto_sym_clear_session(
ctrl->header.queue_id = 0;
ctrl->u.destroy_session.session_id = session->session_id;
+ if (vtpci_with_packed_queue(dev->data->dev_private))
+ return virtio_crypto_clear_session_packed(dev, ctrl);
+
return virtio_crypto_clear_session(dev, ctrl);
}
@@ -943,6 +1065,9 @@ virtio_crypto_asym_clear_session(
ctrl->header.queue_id = 0;
ctrl->u.destroy_session.session_id = session->session_id;
+ if (vtpci_with_packed_queue(dev->data->dev_private))
+ return virtio_crypto_clear_session_packed(dev, ctrl);
+
return virtio_crypto_clear_session(dev, ctrl);
}
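Every descriptor written in virtio_crypto_clear_session_packed() repeats the same three-step pattern: consume one free slot, advance vq_avail_idx, and on wrap flip the cached AVAIL/USED bits so the next lap publishes descriptors with inverted polarity. Factored out as a sketch (vq_inc_avail_packed is an assumed helper, not in the patch):

static inline void
vq_inc_avail_packed(struct virtqueue *vq)
{
	vq->vq_free_cnt--;
	if (++vq->vq_avail_idx >= vq->vq_nentries) {
		vq->vq_avail_idx -= vq->vq_nentries;
		/* new lap: invert the polarity of the flags stored
		 * into subsequently published descriptors */
		vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
	}
}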
diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
index 215bce7863..b4bdd9800b 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.h
+++ b/drivers/crypto/virtio/virtio_cryptodev.h
@@ -10,13 +10,21 @@
#include "virtio_ring.h"
/* Features desired/implemented by this driver. */
-#define VIRTIO_CRYPTO_PMD_GUEST_FEATURES (1ULL << VIRTIO_F_VERSION_1)
+#define VIRTIO_CRYPTO_PMD_GUEST_FEATURES (1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_RING_F_INDIRECT_DESC | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
#define NUM_ENTRY_VIRTIO_CRYPTO_OP 7
#define VIRTIO_CRYPTO_MAX_IV_SIZE 16
+#define VIRTIO_CRYPTO_MAX_MSG_SIZE 512
+#define VIRTIO_CRYPTO_MAX_SIGN_SIZE 512
+#define VIRTIO_CRYPTO_MAX_CIPHER_SIZE 1024
#define VIRTIO_CRYPTO_MAX_KEY_SIZE 256
@@ -34,6 +42,9 @@ struct virtio_crypto_op_cookie {
struct virtio_crypto_inhdr inhdr;
struct vring_desc desc[NUM_ENTRY_VIRTIO_CRYPTO_OP];
uint8_t iv[VIRTIO_CRYPTO_MAX_IV_SIZE];
+ uint8_t message[VIRTIO_CRYPTO_MAX_MSG_SIZE];
+ uint8_t sign[VIRTIO_CRYPTO_MAX_SIGN_SIZE];
+ uint8_t cipher[VIRTIO_CRYPTO_MAX_CIPHER_SIZE];
};
/*
diff --git a/drivers/crypto/virtio/virtio_cvq.c b/drivers/crypto/virtio/virtio_cvq.c
index 3f79c0c68c..14e65795f4 100644
--- a/drivers/crypto/virtio/virtio_cvq.c
+++ b/drivers/crypto/virtio/virtio_cvq.c
@@ -12,7 +12,102 @@
#include "virtqueue.h"
static struct virtio_pmd_ctrl *
-virtio_send_command(struct virtcrypto_ctl *cvq,
+virtio_send_command_packed(struct virtcrypto_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ struct virtqueue *vq = virtcrypto_cq_to_vq(cvq);
+ int head;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
+ struct virtio_pmd_ctrl *result;
+ uint16_t flags;
+ int sum = 0;
+ int nb_descs = 0;
+ int k;
+
+ /*
+ * Format is enforced in qemu code:
+ * one descriptor for the header;
+ * at least one descriptor per argument;
+ * one device-writable descriptor for the ACK.
+ */
+ head = vq->vq_avail_idx;
+ flags = vq->vq_packed.cached_flags;
+ desc[head].addr = cvq->hdr_mem;
+ desc[head].len = sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ for (k = 0; k < dnum; k++) {
+ desc[vq->vq_avail_idx].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req)
+ + sizeof(ctrl->input) + sizeof(uint8_t) * sum;
+ desc[vq->vq_avail_idx].len = dlen[k];
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
+ vq->vq_packed.cached_flags;
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^=
+ VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+ }
+
+ desc[vq->vq_avail_idx].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req);
+ desc[vq->vq_avail_idx].len = sizeof(ctrl->input);
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
+ vq->vq_packed.cached_flags;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ virtqueue_store_flags_packed(&desc[head], VRING_DESC_F_NEXT | flags,
+ vq->hw->weak_barriers);
+
+ virtio_wmb(vq->hw->weak_barriers);
+ cvq->notify_queue(vq, cvq->notify_cookie);
+
+ /* wait for used desc in virtqueue
+ * desc_is_used has a load-acquire or rte_io_rmb inside
+ */
+ while (!desc_is_used(&desc[head], vq))
+ usleep(100);
+
+ /* now get used descriptors */
+ vq->vq_free_cnt += nb_descs;
+ vq->vq_used_cons_idx += nb_descs;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d "
+ "vq->vq_avail_idx=%d "
+ "vq->vq_used_cons_idx=%d "
+ "vq->vq_packed.cached_flags=0x%x "
+ "vq->vq_packed.used_wrap_counter=%d",
+ vq->vq_free_cnt,
+ vq->vq_avail_idx,
+ vq->vq_used_cons_idx,
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
+
+ result = cvq->hdr_mz->addr;
+ return result;
+}
+
+static struct virtio_pmd_ctrl *
+virtio_send_command_split(struct virtcrypto_ctl *cvq,
struct virtio_pmd_ctrl *ctrl,
int *dlen, int dnum)
{
@@ -122,7 +217,11 @@ virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *c
}
memcpy(cvq->hdr_mz->addr, ctrl, sizeof(struct virtio_pmd_ctrl));
- result = virtio_send_command(cvq, ctrl, dlen, dnum);
+
+ if (vtpci_with_packed_queue(vq->hw))
+ result = virtio_send_command_packed(cvq, ctrl, dlen, dnum);
+ else
+ result = virtio_send_command_split(cvq, ctrl, dlen, dnum);
rte_spinlock_unlock(&cvq->lock);
return result->input.status;
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 7e94c6a3c5..79945cb88e 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -83,6 +83,25 @@ struct virtqueue;
#define VIRTIO_F_VERSION_1 32
#define VIRTIO_F_IOMMU_PLATFORM 33
+#define VIRTIO_F_RING_PACKED 34
+
+/*
+ * Inorder feature indicates that all buffers are used by the device
+ * in the same order in which they have been made available.
+ */
+#define VIRTIO_F_IN_ORDER 35
+
+/*
+ * This feature indicates that memory accesses by the driver and the device
+ * are ordered in a way described by the platform.
+ */
+#define VIRTIO_F_ORDER_PLATFORM 36
+
+/*
+ * This feature indicates that the driver passes extra data (besides
+ * identifying the virtqueue) in its device notifications.
+ */
+#define VIRTIO_F_NOTIFICATION_DATA 38
/* The Guest publishes the used index for which it expects an interrupt
* at the end of the avail ring. Host should ignore the avail->flags field.
@@ -230,6 +249,12 @@ vtpci_with_feature(struct virtio_crypto_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int
+vtpci_with_packed_queue(struct virtio_crypto_hw *hw)
+{
+ return vtpci_with_feature(hw, VIRTIO_F_RING_PACKED);
+}
+
/*
* Function declaration from virtio_pci.c
*/
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index e5b0ad74d2..c74d1172b7 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -16,6 +16,15 @@
/* This means the buffer contains a list of buffer descriptors. */
#define VRING_DESC_F_INDIRECT 4
+/* This flag means the descriptor was made available by the driver */
+#define VRING_PACKED_DESC_F_AVAIL (1 << 7)
+/* This flag means the descriptor was used by the device */
+#define VRING_PACKED_DESC_F_USED (1 << 15)
+
+/* Frequently used combinations */
+#define VRING_PACKED_DESC_F_AVAIL_USED (VRING_PACKED_DESC_F_AVAIL | \
+ VRING_PACKED_DESC_F_USED)
+
/* The Host uses this in used->flags to advise the Guest: don't kick me
* when you add a buffer. It's unreliable, so it's simply an
* optimization. Guest will still kick if it's out of buffers.
@@ -57,6 +66,32 @@ struct vring_used {
struct vring_used_elem ring[];
};
+/* For support of packed virtqueues in Virtio 1.1 the format of descriptors
+ * looks like this.
+ */
+struct vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+};
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+ unsigned int num;
+ rte_iova_t desc_iova;
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
+};
+
struct vring {
unsigned int num;
rte_iova_t desc_iova;
@@ -99,10 +134,18 @@ struct vring {
#define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
static inline size_t
-vring_size(unsigned int num, unsigned long align)
+vring_size(struct virtio_crypto_hw *hw, unsigned int num, unsigned long align)
{
size_t size;
+ if (vtpci_with_packed_queue(hw)) {
+ size = num * sizeof(struct vring_packed_desc);
+ size += sizeof(struct vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_packed_desc_event);
+ return size;
+ }
+
size = num * sizeof(struct vring_desc);
size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
size = RTE_ALIGN_CEIL(size, align);
@@ -124,6 +167,20 @@ vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
}
+static inline void
+vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
+{
+ vr->num = num;
+ vr->desc = (struct vring_packed_desc *)p;
+ vr->desc_iova = iova;
+ vr->driver = (struct vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct vring_packed_desc));
+ vr->device = (struct vring_packed_desc_event *)
+ RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
+ sizeof(struct vring_packed_desc_event)), align);
+}
+
static inline void
vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
{
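With vring_size() now keyed off the negotiated features, the same depth yields different footprints. Assuming the struct layouts in this header on a 64-bit target, num = 256 with 4 KiB alignment gives 256 * 16 + 4 = 4100 bytes rounded up to 8192, plus the second event structure, so 8196 bytes for the packed layout, against 10244 bytes for the split layout with its avail and used rings. A trivial check (sketch, assumes <stdio.h>):

static void
print_ring_footprint(struct virtio_crypto_hw *hw, unsigned int num)
{
	/* layout (packed vs split) is chosen from hw's features */
	size_t sz = vring_size(hw, num, VIRTIO_PCI_VRING_ALIGN);

	printf("queue depth %u -> %zu ring bytes\n", num, sz);
}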
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index 0e8a716917..8d6ff98fa5 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -4,6 +4,7 @@
#include <cryptodev_pmd.h>
#include "virtqueue.h"
+#include "virtio_ring.h"
#include "virtio_cryptodev.h"
#include "virtio_crypto_algs.h"
@@ -107,6 +108,91 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq,
return i;
}
+static uint16_t
+virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
+ struct rte_crypto_op **rx_pkts, uint16_t num)
+{
+ struct rte_crypto_op *cop;
+ uint16_t used_idx;
+ uint16_t i;
+ struct virtio_crypto_inhdr *inhdr;
+ struct virtio_crypto_op_cookie *op_cookie;
+ struct vring_packed_desc *desc;
+
+ desc = vq->vq_packed.ring.desc;
+
+ /* Caller does the check */
+ for (i = 0; i < num ; i++) {
+ used_idx = vq->vq_used_cons_idx;
+ if (!desc_is_used(&desc[used_idx], vq))
+ break;
+
+ cop = (struct rte_crypto_op *)
+ vq->vq_descx[used_idx].crypto_op;
+ if (unlikely(cop == NULL)) {
+ VIRTIO_CRYPTO_RX_LOG_DBG("vring descriptor with no "
+ "mbuf cookie at %u",
+ vq->vq_used_cons_idx);
+ break;
+ }
+
+ op_cookie = (struct virtio_crypto_op_cookie *)
+ vq->vq_descx[used_idx].cookie;
+ inhdr = &(op_cookie->inhdr);
+ switch (inhdr->status) {
+ case VIRTIO_CRYPTO_OK:
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ break;
+ case VIRTIO_CRYPTO_ERR:
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_BADMSG:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_NOTSUPP:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_INVSESS:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+ vq->packets_received_failed++;
+ break;
+ default:
+ break;
+ }
+
+ vq->packets_received_total++;
+
+ if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN)
+ memcpy(cop->asym->rsa.sign.data, op_cookie->sign,
+ cop->asym->rsa.sign.length);
+ else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY)
+ memcpy(cop->asym->rsa.message.data, op_cookie->message,
+ cop->asym->rsa.message.length);
+ else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT)
+ memcpy(cop->asym->rsa.cipher.data, op_cookie->cipher,
+ cop->asym->rsa.cipher.length);
+ else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT)
+ memcpy(cop->asym->rsa.message.data, op_cookie->message,
+ cop->asym->rsa.message.length);
+ }
+
+ rx_pkts[i] = cop;
+ rte_mempool_put(vq->mpool, op_cookie);
+
+ vq->vq_free_cnt += 4;
+ vq->vq_used_cons_idx += 4;
+ vq->vq_descx[used_idx].crypto_op = NULL;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+ }
+
+ return i;
+}
+
static int
virtqueue_crypto_sym_pkt_header_arrange(
struct rte_crypto_op *cop,
@@ -188,7 +274,7 @@ virtqueue_crypto_sym_pkt_header_arrange(
}
static int
-virtqueue_crypto_sym_enqueue_xmit(
+virtqueue_crypto_sym_enqueue_xmit_split(
struct virtqueue *txvq,
struct rte_crypto_op *cop)
{
@@ -343,6 +429,160 @@ virtqueue_crypto_sym_enqueue_xmit(
return 0;
}
+static int
+virtqueue_crypto_sym_enqueue_xmit_packed(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_packed_desc *start_dp;
+ struct vring_packed_desc *desc;
+ uint64_t op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ uint32_t iv_addr_offset =
+ offsetof(struct virtio_crypto_op_cookie, iv);
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_SYM_SESS_PRIV(cop->sym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ uint32_t hash_result_len = 0;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+ struct virtio_crypto_alg_chain_session_para *para;
+ uint16_t flags = VRING_DESC_F_NEXT;
+
+ if (unlikely(sym_op->m_src->nb_segs != 1))
+ return -EMSGSIZE;
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+ if (unlikely(session == NULL))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ op_data_req_phys_addr = rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+
+ if (virtqueue_crypto_sym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ desc = &txvq->vq_packed.ring.desc[txvq->vq_desc_head_idx];
+ needed = 4;
+ flags |= txvq->vq_packed.cached_flags;
+
+ start_dp = desc;
+ idx = 0;
+
+ /* packed vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = op_data_req_phys_addr;
+ desc[idx].len = req_data_len;
+ desc[idx++].flags = flags;
+
+ /* packed vring: iv of cipher */
+ if (session->iv.length) {
+ if (cop->phys_addr)
+ desc[idx].addr = cop->phys_addr + session->iv.offset;
+ else {
+ if (session->iv.length > VIRTIO_CRYPTO_MAX_IV_SIZE)
+ return -ENOMEM;
+
+ rte_memcpy(crypto_op_cookie->iv,
+ rte_crypto_op_ctod_offset(cop,
+ uint8_t *, session->iv.offset),
+ session->iv.length);
+ desc[idx].addr = op_data_req_phys_addr + iv_addr_offset;
+ }
+
+ desc[idx].len = session->iv.length;
+ desc[idx++].flags = flags;
+ }
+
+ /* packed vring: additional auth data */
+ if (session->aad.length) {
+ desc[idx].addr = session->aad.phys_addr;
+ desc[idx].len = session->aad.length;
+ desc[idx++].flags = flags;
+ }
+
+ /* packed vring: src data */
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_src, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (sym_op->m_dst) {
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_dst, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ } else {
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_src, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ }
+ desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
+
+ /* packed vring: digest result */
+ para = &(session->ctrl.hdr.u.sym_create_session.u.chain.para);
+ if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN)
+ hash_result_len = para->u.hash_param.hash_result_len;
+ if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)
+ hash_result_len = para->u.mac_param.hash_result_len;
+ if (hash_result_len > 0) {
+ desc[idx].addr = sym_op->auth.digest.phys_addr;
+ desc[idx].len = hash_result_len;
+ desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
+ }
+
+ /* packed vring: last part, status returned */
+ desc[idx].addr = op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+ txvq->vq_desc_head_idx = (txvq->vq_desc_head_idx + idx) & (txvq->vq_nentries - 1);
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ virtqueue_store_flags_packed(&start_dp[0],
+ start_dp[0].flags | flags,
+ txvq->hw->weak_barriers);
+ virtio_wmb(txvq->hw->weak_barriers);
+
+ return 0;
+}
+
+static int
+virtqueue_crypto_sym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ if (vtpci_with_packed_queue(txvq->hw))
+ return virtqueue_crypto_sym_enqueue_xmit_packed(txvq, cop);
+ else
+ return virtqueue_crypto_sym_enqueue_xmit_split(txvq, cop);
+}
+
static int
virtqueue_crypto_asym_pkt_header_arrange(
struct rte_crypto_op *cop,
@@ -399,7 +639,7 @@ virtqueue_crypto_asym_pkt_header_arrange(
}
static int
-virtqueue_crypto_asym_enqueue_xmit(
+virtqueue_crypto_asym_enqueue_xmit_split(
struct virtqueue *txvq,
struct rte_crypto_op *cop)
{
@@ -533,6 +773,179 @@ virtqueue_crypto_asym_enqueue_xmit(
return 0;
}
+static int
+virtqueue_crypto_asym_enqueue_xmit_packed(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t num_entry;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_packed_desc *start_dp;
+ struct vring_packed_desc *desc;
+ uint64_t op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_ASYM_SESS_PRIV(cop->asym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+ uint16_t flags = VRING_DESC_F_NEXT;
+
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ op_data_req_phys_addr = rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+ if (virtqueue_crypto_asym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ desc = &txvq->vq_packed.ring.desc[txvq->vq_desc_head_idx];
+ needed = 4;
+ flags |= txvq->vq_packed.cached_flags;
+
+ start_dp = desc;
+ idx = 0;
+
+ /* packed vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = op_data_req_phys_addr;
+ desc[idx].len = sizeof(struct virtio_crypto_op_data_req);
+ desc[idx++].flags = flags;
+
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ /* packed vring: src data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->message, asym_op->rsa.message.data,
+ asym_op->rsa.message.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.sign.length > VIRTIO_CRYPTO_MAX_SIGN_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, sign);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ /* packed vring: src data */
+ if (asym_op->rsa.sign.length > VIRTIO_CRYPTO_MAX_SIGN_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->sign, asym_op->rsa.sign.data,
+ asym_op->rsa.sign.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, sign);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ /* packed vring: src data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->message, asym_op->rsa.message.data,
+ asym_op->rsa.message.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.cipher.length > VIRTIO_CRYPTO_MAX_CIPHER_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, cipher);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ /* packed vring: src data */
+ if (asym_op->rsa.cipher.length > VIRTIO_CRYPTO_MAX_CIPHER_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->cipher, asym_op->rsa.cipher.data,
+ asym_op->rsa.cipher.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, cipher);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else {
+ VIRTIO_CRYPTO_TX_LOG_ERR("Invalid asym op");
+ return -EINVAL;
+ }
+
+ /* packed vring: last part, status returned */
+ desc[idx].addr = op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = txvq->vq_packed.cached_flags | VRING_DESC_F_WRITE;
+
+ num_entry = idx;
+ txvq->vq_avail_idx += num_entry;
+ if (txvq->vq_avail_idx >= txvq->vq_nentries) {
+ txvq->vq_avail_idx -= txvq->vq_nentries;
+ txvq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+ txvq->vq_desc_head_idx = (txvq->vq_desc_head_idx + idx) & (txvq->vq_nentries - 1);
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ virtqueue_store_flags_packed(&start_dp[0],
+ start_dp[0].flags | flags,
+ txvq->hw->weak_barriers);
+ virtio_wmb(txvq->hw->weak_barriers);
+ return 0;
+}
+
+static int
+virtqueue_crypto_asym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ if (vtpci_with_packed_queue(txvq->hw))
+ return virtqueue_crypto_asym_enqueue_xmit_packed(txvq, cop);
+ else
+ return virtqueue_crypto_asym_enqueue_xmit_split(txvq, cop);
+}
+
static int
virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
struct rte_crypto_op *cop)
@@ -620,21 +1033,20 @@ virtio_crypto_pkt_rx_burst(void *tx_queue, struct rte_crypto_op **rx_pkts,
uint16_t nb_pkts)
{
struct virtqueue *txvq = tx_queue;
- uint16_t nb_used, num, nb_rx;
-
- nb_used = VIRTQUEUE_NUSED(txvq);
+ uint16_t num, nb_rx;
- virtio_rmb();
-
- num = (uint16_t)(likely(nb_used <= nb_pkts) ? nb_used : nb_pkts);
- num = (uint16_t)(likely(num <= VIRTIO_MBUF_BURST_SZ)
- ? num : VIRTIO_MBUF_BURST_SZ);
+ virtio_rmb(0);
+ num = RTE_MIN(VIRTIO_MBUF_BURST_SZ, nb_pkts);
if (num == 0)
return 0;
- nb_rx = virtqueue_dequeue_burst_rx(txvq, rx_pkts, num);
- VIRTIO_CRYPTO_RX_LOG_DBG("used:%d dequeue:%d", nb_used, num);
+ if (likely(vtpci_with_packed_queue(txvq->hw)))
+ nb_rx = virtqueue_dequeue_burst_rx_packed(txvq, rx_pkts, num);
+ else
+ nb_rx = virtqueue_dequeue_burst_rx(txvq, rx_pkts, num);
+
+ VIRTIO_CRYPTO_RX_LOG_DBG("used:%d dequeue:%d", nb_rx, num);
return nb_rx;
}
@@ -700,6 +1112,12 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
}
if (likely(nb_tx)) {
+ if (vtpci_with_packed_queue(txvq->hw)) {
+ virtqueue_notify(txvq);
+ VIRTIO_CRYPTO_TX_LOG_DBG("Notified backend after xmit");
+ return nb_tx;
+ }
+
vq_update_avail_idx(txvq);
if (unlikely(virtqueue_kick_prepare(txvq))) {
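The tx-burst tail above encodes the notification policy split between the two ring formats: packed descriptors were already published one by one through the release-ordered flag store, so the driver notifies directly, while split rings first publish avail->idx and then honour VRING_USED_F_NO_NOTIFY. Condensed into a sketch (kick_after_enqueue is an assumed helper):

static void
kick_after_enqueue(struct virtqueue *txvq)
{
	if (vtpci_with_packed_queue(txvq->hw)) {
		/* flags already published with release semantics by
		 * virtqueue_store_flags_packed() at enqueue time */
		virtqueue_notify(txvq);
	} else {
		vq_update_avail_idx(txvq);	/* publish avail->idx */
		if (virtqueue_kick_prepare(txvq)) /* host asked for kicks */
			virtqueue_notify(txvq);
	}
}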
diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c
index 3a9ec98b18..a6b47d4466 100644
--- a/drivers/crypto/virtio/virtqueue.c
+++ b/drivers/crypto/virtio/virtqueue.c
@@ -12,8 +12,23 @@
#include "virtio_cryptodev.h"
#include "virtqueue.h"
-void
-virtqueue_disable_intr(struct virtqueue *vq)
+static inline void
+virtqueue_disable_intr_packed(struct virtqueue *vq)
+{
+ /*
+ * Set RING_EVENT_FLAGS_DISABLE to hint host
+ * not to interrupt when it consumes packets
+ * Note: this is only considered a hint to the host
+ */
+ if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver->desc_event_flags =
+ vq->vq_packed.event_flags_shadow;
+ }
+}
+
+static inline void
+virtqueue_disable_intr_split(struct virtqueue *vq)
{
/*
* Set VRING_AVAIL_F_NO_INTERRUPT to hint host
@@ -23,6 +38,15 @@ virtqueue_disable_intr(struct virtqueue *vq)
vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}
+void
+virtqueue_disable_intr(struct virtqueue *vq)
+{
+ if (vtpci_with_packed_queue(vq->hw))
+ virtqueue_disable_intr_packed(vq);
+ else
+ virtqueue_disable_intr_split(vq);
+}
+
void
virtqueue_detatch_unused(struct virtqueue *vq)
{
@@ -50,7 +74,6 @@ virtio_init_vring(struct virtqueue *vq)
{
int size = vq->vq_nentries;
uint8_t *ring_mem = vq->vq_ring_virt_mem;
- struct vring *vr = &vq->vq_split.ring;
PMD_INIT_FUNC_TRACE();
@@ -62,10 +85,16 @@ virtio_init_vring(struct virtqueue *vq)
vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
vq->vq_free_cnt = vq->vq_nentries;
memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
-
- vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
- vring_desc_init_split(vr->desc, size);
-
+ if (vtpci_with_packed_queue(vq->hw)) {
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, vq->vq_ring_mem,
+ VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ } else {
+ struct vring *vr = &vq->vq_split.ring;
+
+ vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_split(vr->desc, size);
+ }
/*
* Disable device(host) interrupting guest
*/
@@ -171,11 +200,16 @@ virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index, uint16_t num
vq->hw = hw;
vq->vq_queue_index = index;
vq->vq_nentries = num;
+ if (vtpci_with_packed_queue(hw)) {
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ }
/*
* Reserve a memzone for vring elements
*/
- size = vring_size(num, VIRTIO_PCI_VRING_ALIGN);
+ size = vring_size(hw, num, VIRTIO_PCI_VRING_ALIGN);
vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
index b4a0ed3553..b31342940e 100644
--- a/drivers/crypto/virtio/virtqueue.h
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -28,9 +28,78 @@ struct rte_mbuf;
* sufficient.
*
*/
-#define virtio_mb() rte_smp_mb()
-#define virtio_rmb() rte_smp_rmb()
-#define virtio_wmb() rte_smp_wmb()
+static inline void
+virtio_mb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+ else
+ rte_mb();
+}
+
+static inline void
+virtio_rmb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_acquire);
+ else
+ rte_io_rmb();
+}
+
+static inline void
+virtio_wmb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_release);
+ else
+ rte_io_wmb();
+}
+
+static inline uint16_t
+virtqueue_fetch_flags_packed(struct vring_packed_desc *dp,
+ uint8_t weak_barriers)
+{
+ uint16_t flags;
+
+ if (weak_barriers) {
+/* x86 prefers using rte_io_rmb over rte_atomic_load_explicit, as it reports
+ * better perf (~1.5%), which comes from the branch saved by the compiler.
+ * The if and else branches are identical on all platforms except Arm.
+ */
+#ifdef RTE_ARCH_ARM
+ flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire);
+#else
+ flags = dp->flags;
+ rte_io_rmb();
+#endif
+ } else {
+ flags = dp->flags;
+ rte_io_rmb();
+ }
+
+ return flags;
+}
+
+static inline void
+virtqueue_store_flags_packed(struct vring_packed_desc *dp,
+ uint16_t flags, uint8_t weak_barriers)
+{
+ if (weak_barriers) {
+/* x86 prefers using rte_io_wmb over rte_atomic_store_explicit, as it reports
+ * better perf (~1.5%), which comes from the branch saved by the compiler.
+ * The if and else branches are identical on all platforms except Arm.
+ */
+#ifdef RTE_ARCH_ARM
+ rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release);
+#else
+ rte_io_wmb();
+ dp->flags = flags;
+#endif
+ } else {
+ rte_io_wmb();
+ dp->flags = flags;
+ }
+}
#define VIRTQUEUE_MAX_NAME_SZ 32
@@ -62,7 +131,16 @@ struct virtqueue {
/**< vring keeping desc, used and avail */
struct vring ring;
} vq_split;
+
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ bool used_wrap_counter;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ } vq_packed;
};
+
union {
struct virtcrypto_data dq;
struct virtcrypto_ctl cq;
@@ -134,7 +212,7 @@ virtqueue_full(const struct virtqueue *vq)
static inline void
vq_update_avail_idx(struct virtqueue *vq)
{
- virtio_wmb();
+ virtio_wmb(0);
vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
}
@@ -172,6 +250,30 @@ virtqueue_notify(struct virtqueue *vq)
VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq);
}
+static inline int
+desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
+{
+ uint16_t used, avail, flags;
+
+ flags = virtqueue_fetch_flags_packed(desc, vq->hw->weak_barriers);
+ used = !!(flags & VRING_PACKED_DESC_F_USED);
+ avail = !!(flags & VRING_PACKED_DESC_F_AVAIL);
+
+ return avail == used && used == vq->vq_packed.used_wrap_counter;
+}
+
+static inline void
+vring_desc_init_packed(struct virtqueue *vq, int n)
+{
+ int i;
+ for (i = 0; i < n - 1; i++) {
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
/* Chain all the descriptors in the ring with an END */
static inline void
vring_desc_init_split(struct vring_desc *dp, uint16_t n)
@@ -223,7 +325,7 @@ virtqueue_nused(const struct virtqueue *vq)
/**
* Dump virtqueue internal structures, for debug purpose only.
*/
-#define VIRTQUEUE_DUMP(vq) do { \
+#define VIRTQUEUE_SPLIT_DUMP(vq) do { \
uint16_t used_idx, nused; \
used_idx = (vq)->vq_split.ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
@@ -237,4 +339,24 @@ virtqueue_nused(const struct virtqueue *vq)
(vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
} while (0)
+#define VIRTQUEUE_PACKED_DUMP(vq) do { \
+ uint16_t nused; \
+ nused = (vq)->vq_nentries - (vq)->vq_free_cnt; \
+ VIRTIO_CRYPTO_INIT_LOG_DBG(\
+ "VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
+ " avail_idx=%d; used_cons_idx=%d;" \
+ " avail.flags=0x%x; wrap_counter=%d", \
+ (vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
+ (vq)->vq_desc_head_idx, (vq)->vq_avail_idx, \
+ (vq)->vq_used_cons_idx, (vq)->vq_packed.cached_flags, \
+ (vq)->vq_packed.used_wrap_counter); \
+} while (0)
+
+#define VIRTQUEUE_DUMP(vq) do { \
+ if (vtpci_with_packed_queue((vq)->hw)) \
+ VIRTQUEUE_PACKED_DUMP(vq); \
+ else \
+ VIRTQUEUE_SPLIT_DUMP(vq); \
+} while (0)
+
#endif /* _VIRTQUEUE_H_ */
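desc_is_used() is the packed-ring ownership test: a descriptor returns to the driver once its AVAIL and USED flag bits both equal the ring's current wrap counter. A consumer built on it reclaims descriptors in per-op batches and flips the counter whenever the consumed index wraps (sketch; drain_one_op is an assumed helper relying on the vq_descx bookkeeping above):

static uint16_t
drain_one_op(struct virtqueue *vq)
{
	struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
	uint16_t used_idx = vq->vq_used_cons_idx;
	uint16_t ndescs;

	/* acquire-ordered flag check against the wrap counter */
	if (!desc_is_used(&desc[used_idx], vq))
		return 0;

	ndescs = vq->vq_descx[used_idx].ndescs;
	vq->vq_free_cnt += ndescs;
	vq->vq_used_cons_idx += ndescs;
	if (vq->vq_used_cons_idx >= vq->vq_nentries) {
		vq->vq_used_cons_idx -= vq->vq_nentries;
		vq->vq_packed.used_wrap_counter ^= 1;
	}
	return ndescs;
}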
--
2.25.1
* [v1 12/16] common/virtio: common virtio log
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (10 preceding siblings ...)
2024-12-24 7:37 ` [v1 11/16] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 8:14 ` David Marchand
2024-12-24 7:37 ` [v1 13/16] common/virtio: move vDPA to common directory Gowrishankar Muthukrishnan
` (7 subsequent siblings)
19 siblings, 1 reply; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang,
Jay Zhou, Bruce Richardson, Konstantin Ananyev
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Move the virtio log macros into a common include file shared by the net and crypto virtio drivers.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/{net => common}/virtio/virtio_logs.h | 16 ++--------
drivers/crypto/virtio/meson.build | 1 +
.../{virtio_logs.h => virtio_crypto_logs.h} | 30 ++++++++-----------
drivers/crypto/virtio/virtio_cryptodev.c | 4 +--
drivers/crypto/virtio/virtqueue.h | 2 +-
drivers/net/virtio/meson.build | 3 +-
drivers/net/virtio/virtio.c | 3 +-
drivers/net/virtio/virtio_ethdev.c | 3 +-
drivers/net/virtio/virtio_net_logs.h | 30 +++++++++++++++++++
drivers/net/virtio/virtio_pci.c | 3 +-
drivers/net/virtio/virtio_pci_ethdev.c | 3 +-
drivers/net/virtio/virtio_rxtx.c | 3 +-
drivers/net/virtio/virtio_rxtx_packed.c | 3 +-
drivers/net/virtio/virtio_rxtx_packed.h | 3 +-
drivers/net/virtio/virtio_rxtx_packed_avx.h | 3 +-
drivers/net/virtio/virtio_rxtx_simple.h | 3 +-
.../net/virtio/virtio_user/vhost_kernel_tap.c | 3 +-
drivers/net/virtio/virtio_user/vhost_vdpa.c | 3 +-
drivers/net/virtio/virtio_user_ethdev.c | 3 +-
drivers/net/virtio/virtqueue.c | 3 +-
drivers/net/virtio/virtqueue.h | 3 +-
21 files changed, 77 insertions(+), 51 deletions(-)
rename drivers/{net => common}/virtio/virtio_logs.h (61%)
rename drivers/crypto/virtio/{virtio_logs.h => virtio_crypto_logs.h} (74%)
create mode 100644 drivers/net/virtio/virtio_net_logs.h
diff --git a/drivers/net/virtio/virtio_logs.h b/drivers/common/virtio/virtio_logs.h
similarity index 61%
rename from drivers/net/virtio/virtio_logs.h
rename to drivers/common/virtio/virtio_logs.h
index dea1a7ac11..bc115e7a36 100644
--- a/drivers/net/virtio/virtio_logs.h
+++ b/drivers/common/virtio/virtio_logs.h
@@ -5,6 +5,8 @@
#ifndef _VIRTIO_LOGS_H_
#define _VIRTIO_LOGS_H_
+#include <inttypes.h>
+
#include <rte_log.h>
extern int virtio_logtype_init;
@@ -14,20 +16,6 @@ extern int virtio_logtype_init;
#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
-#ifdef RTE_LIBRTE_VIRTIO_DEBUG_RX
-#define PMD_RX_LOG(level, ...) \
- RTE_LOG_LINE_PREFIX(level, VIRTIO_DRIVER, "%s() rx: ", __func__, __VA_ARGS__)
-#else
-#define PMD_RX_LOG(...) do { } while(0)
-#endif
-
-#ifdef RTE_LIBRTE_VIRTIO_DEBUG_TX
-#define PMD_TX_LOG(level, ...) \
- RTE_LOG_LINE_PREFIX(level, VIRTIO_DRIVER, "%s() tx: ", __func__, __VA_ARGS__)
-#else
-#define PMD_TX_LOG(...) do { } while(0)
-#endif
-
extern int virtio_logtype_driver;
#define RTE_LOGTYPE_VIRTIO_DRIVER virtio_logtype_driver
#define PMD_DRV_LOG(level, ...) \
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index d2c3b3ad07..6c082a3112 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -8,6 +8,7 @@ if is_windows
endif
includes += include_directories('../../../lib/vhost')
+includes += include_directories('../../common/virtio')
deps += 'bus_pci'
sources = files(
'virtio_cryptodev.c',
diff --git a/drivers/crypto/virtio/virtio_logs.h b/drivers/crypto/virtio/virtio_crypto_logs.h
similarity index 74%
rename from drivers/crypto/virtio/virtio_logs.h
rename to drivers/crypto/virtio/virtio_crypto_logs.h
index 988514919f..56caa162d4 100644
--- a/drivers/crypto/virtio/virtio_logs.h
+++ b/drivers/crypto/virtio/virtio_crypto_logs.h
@@ -2,24 +2,18 @@
* Copyright(c) 2018 HUAWEI TECHNOLOGIES CO., LTD.
*/
-#ifndef _VIRTIO_LOGS_H_
-#define _VIRTIO_LOGS_H_
+#ifndef _VIRTIO_CRYPTO_LOGS_H_
+#define _VIRTIO_CRYPTO_LOGS_H_
#include <rte_log.h>
-extern int virtio_crypto_logtype_init;
-#define RTE_LOGTYPE_VIRTIO_CRYPTO_INIT virtio_crypto_logtype_init
+#include "virtio_logs.h"
-#define PMD_INIT_LOG(level, ...) \
- RTE_LOG_LINE_PREFIX(level, VIRTIO_CRYPTO_INIT, "%s(): ", __func__, __VA_ARGS__)
+extern int virtio_logtype_init;
-#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
-
-extern int virtio_crypto_logtype_init;
-#define RTE_LOGTYPE_VIRTIO_CRYPTO_INIT virtio_crypto_logtype_init
-
-#define VIRTIO_CRYPTO_INIT_LOG_IMPL(level, ...) \
- RTE_LOG_LINE_PREFIX(level, VIRTIO_CRYPTO_INIT, "%s(): ", __func__, __VA_ARGS__)
+#define VIRTIO_CRYPTO_INIT_LOG_IMPL(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, virtio_logtype_init, \
+ "INIT: %s(): " fmt "\n", __func__, ##args)
#define VIRTIO_CRYPTO_INIT_LOG_INFO(fmt, ...) \
VIRTIO_CRYPTO_INIT_LOG_IMPL(INFO, fmt, ## __VA_ARGS__)
@@ -75,11 +69,11 @@ extern int virtio_crypto_logtype_tx;
#define VIRTIO_CRYPTO_TX_LOG_ERR(fmt, ...) \
VIRTIO_CRYPTO_TX_LOG_IMPL(ERR, fmt, ## __VA_ARGS__)
-extern int virtio_crypto_logtype_driver;
-#define RTE_LOGTYPE_VIRTIO_CRYPTO_DRIVER virtio_crypto_logtype_driver
+extern int virtio_logtype_driver;
-#define VIRTIO_CRYPTO_DRV_LOG_IMPL(level, ...) \
- RTE_LOG_LINE_PREFIX(level, VIRTIO_CRYPTO_DRIVER, "%s(): ", __func__, __VA_ARGS__)
+#define VIRTIO_CRYPTO_DRV_LOG_IMPL(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, virtio_logtype_driver, \
+ "DRIVER: %s(): " fmt "\n", __func__, ##args)
#define VIRTIO_CRYPTO_DRV_LOG_INFO(fmt, ...) \
VIRTIO_CRYPTO_DRV_LOG_IMPL(INFO, fmt, ## __VA_ARGS__)
@@ -90,4 +84,4 @@ extern int virtio_crypto_logtype_driver;
#define VIRTIO_CRYPTO_DRV_LOG_ERR(fmt, ...) \
VIRTIO_CRYPTO_DRV_LOG_IMPL(ERR, fmt, ## __VA_ARGS__)
-#endif /* _VIRTIO_LOGS_H_ */
+#endif /* _VIRTIO_CRYPTO_LOGS_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index d3db4f898e..b31e7ea0cf 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -1749,8 +1749,8 @@ RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_VIRTIO_PMD, rte_virtio_crypto_driver);
RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv,
rte_virtio_crypto_driver.driver,
cryptodev_virtio_driver_id);
-RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(virtio_logtype_init, init, NOTICE);
RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_session, session, NOTICE);
RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_rx, rx, NOTICE);
RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_tx, tx, NOTICE);
-RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(virtio_logtype_driver, driver, NOTICE);
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
index b31342940e..ccf45800c0 100644
--- a/drivers/crypto/virtio/virtqueue.h
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -15,7 +15,7 @@
#include "virtio_cvq.h"
#include "virtio_pci.h"
#include "virtio_ring.h"
-#include "virtio_logs.h"
+#include "virtio_crypto_logs.h"
#include "virtio_crypto.h"
#include "virtio_rxtx.h"
diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
index 02742da5c2..6331366712 100644
--- a/drivers/net/virtio/meson.build
+++ b/drivers/net/virtio/meson.build
@@ -22,6 +22,7 @@ sources += files(
'virtqueue.c',
)
deps += ['kvargs', 'bus_pci']
+includes += include_directories('../../common/virtio')
if arch_subdir == 'x86'
if cc_has_avx512
@@ -56,5 +57,5 @@ if is_linux
'virtio_user/vhost_user.c',
'virtio_user/vhost_vdpa.c',
'virtio_user/virtio_user_dev.c')
- deps += ['bus_vdev']
+ deps += ['bus_vdev', 'common_virtio']
endif
diff --git a/drivers/net/virtio/virtio.c b/drivers/net/virtio/virtio.c
index d9e642f412..21b0490fe7 100644
--- a/drivers/net/virtio/virtio.c
+++ b/drivers/net/virtio/virtio.c
@@ -5,8 +5,9 @@
#include <unistd.h>
+#include "virtio_net_logs.h"
+
#include "virtio.h"
-#include "virtio_logs.h"
uint64_t
virtio_negotiate_features(struct virtio_hw *hw, uint64_t host_features)
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 70d4839def..491b75ec19 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -29,9 +29,10 @@
#include <rte_cycles.h>
#include <rte_kvargs.h>
+#include "virtio_net_logs.h"
+
#include "virtio_ethdev.h"
#include "virtio.h"
-#include "virtio_logs.h"
#include "virtqueue.h"
#include "virtio_cvq.h"
#include "virtio_rxtx.h"
diff --git a/drivers/net/virtio/virtio_net_logs.h b/drivers/net/virtio/virtio_net_logs.h
new file mode 100644
index 0000000000..bd5867b1fe
--- /dev/null
+++ b/drivers/net/virtio/virtio_net_logs.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef _VIRTIO_NET_LOGS_H_
+#define _VIRTIO_NET_LOGS_H_
+
+#include <inttypes.h>
+
+#include <rte_log.h>
+
+#include "virtio_logs.h"
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_VIRTIO_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+ RTE_LOG(level, VIRTIO_DRIVER, "%s() rx: " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_VIRTIO_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+ RTE_LOG(level, VIRTIO_DRIVER, "%s() tx: " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _VIRTIO_NET_LOGS_H_ */
diff --git a/drivers/net/virtio/virtio_pci.c b/drivers/net/virtio/virtio_pci.c
index 90bbb53502..ca0ccdebd8 100644
--- a/drivers/net/virtio/virtio_pci.c
+++ b/drivers/net/virtio/virtio_pci.c
@@ -11,8 +11,9 @@
#include <rte_io.h>
#include <bus_driver.h>
+#include "virtio_net_logs.h"
+
#include "virtio_pci.h"
-#include "virtio_logs.h"
#include "virtqueue.h"
/*
diff --git a/drivers/net/virtio/virtio_pci_ethdev.c b/drivers/net/virtio/virtio_pci_ethdev.c
index 9b4b846f8a..8aa9d48807 100644
--- a/drivers/net/virtio/virtio_pci_ethdev.c
+++ b/drivers/net/virtio/virtio_pci_ethdev.c
@@ -19,10 +19,11 @@
#include <dev_driver.h>
#include <rte_kvargs.h>
+#include "virtio_net_logs.h"
+
#include "virtio.h"
#include "virtio_ethdev.h"
#include "virtio_pci.h"
-#include "virtio_logs.h"
/*
* The set of PCI devices this driver supports
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index b67f063b31..f645d70202 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -26,7 +26,8 @@
#include <rte_udp.h>
#include <rte_tcp.h>
-#include "virtio_logs.h"
+#include "virtio_net_logs.h"
+
#include "virtio_ethdev.h"
#include "virtio.h"
#include "virtqueue.h"
diff --git a/drivers/net/virtio/virtio_rxtx_packed.c b/drivers/net/virtio/virtio_rxtx_packed.c
index 5f7d4903bc..6eed0d7872 100644
--- a/drivers/net/virtio/virtio_rxtx_packed.c
+++ b/drivers/net/virtio/virtio_rxtx_packed.c
@@ -10,7 +10,8 @@
#include <rte_net.h>
-#include "virtio_logs.h"
+#include "virtio_net_logs.h"
+
#include "virtio_ethdev.h"
#include "virtio_pci.h"
#include "virtio_rxtx_packed.h"
diff --git a/drivers/net/virtio/virtio_rxtx_packed.h b/drivers/net/virtio/virtio_rxtx_packed.h
index 536112983c..d6f530ec10 100644
--- a/drivers/net/virtio/virtio_rxtx_packed.h
+++ b/drivers/net/virtio/virtio_rxtx_packed.h
@@ -13,7 +13,8 @@
#include <rte_net.h>
-#include "virtio_logs.h"
+#include "virtio_net_logs.h"
+
#include "virtio_ethdev.h"
#include "virtio.h"
#include "virtqueue.h"
diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.h b/drivers/net/virtio/virtio_rxtx_packed_avx.h
index 584ac72f95..de8f2b2ba8 100644
--- a/drivers/net/virtio/virtio_rxtx_packed_avx.h
+++ b/drivers/net/virtio/virtio_rxtx_packed_avx.h
@@ -10,7 +10,8 @@
#include <rte_net.h>
-#include "virtio_logs.h"
+#include "virtio_net_logs.h"
+
#include "virtio_ethdev.h"
#include "virtio.h"
#include "virtio_rxtx_packed.h"
diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
index 79196ed86e..d32af60337 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.h
+++ b/drivers/net/virtio/virtio_rxtx_simple.h
@@ -7,7 +7,8 @@
#include <stdint.h>
-#include "virtio_logs.h"
+#include "virtio_net_logs.h"
+
#include "virtio_ethdev.h"
#include "virtqueue.h"
#include "virtio_rxtx.h"
diff --git a/drivers/net/virtio/virtio_user/vhost_kernel_tap.c b/drivers/net/virtio/virtio_user/vhost_kernel_tap.c
index 611e2e25ec..c2d925bbe2 100644
--- a/drivers/net/virtio/virtio_user/vhost_kernel_tap.c
+++ b/drivers/net/virtio/virtio_user/vhost_kernel_tap.c
@@ -14,8 +14,9 @@
#include <rte_ether.h>
+#include "virtio_net_logs.h"
+
#include "vhost_kernel_tap.h"
-#include "../virtio_logs.h"
#include "../virtio.h"
diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/net/virtio/virtio_user/vhost_vdpa.c
index bc3e2a9af5..77e2fd62d8 100644
--- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
+++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
@@ -12,8 +12,7 @@
#include <rte_memory.h>
-#include "vhost.h"
-#include "virtio_user_dev.h"
+#include "../virtio_net_logs.h"
struct vhost_vdpa_data {
int vhostfd;
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 747dddeb2e..fda6634c94 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -20,8 +20,9 @@
#include <rte_cycles.h>
#include <rte_io.h>
+#include "virtio_net_logs.h"
+
#include "virtio_ethdev.h"
-#include "virtio_logs.h"
#include "virtio.h"
#include "virtqueue.h"
#include "virtio_rxtx.h"
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index cf46abfd06..95cf2fdafc 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -9,8 +9,9 @@
#include <rte_mbuf.h>
#include <rte_memzone.h>
+#include "virtio_net_logs.h"
+
#include "virtqueue.h"
-#include "virtio_logs.h"
#include "virtio.h"
#include "virtio_rxtx_simple.h"
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 60211a40c9..13503edc21 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -12,9 +12,10 @@
#include <rte_mempool.h>
#include <rte_net.h>
+#include "virtio_net_logs.h"
+
#include "virtio.h"
#include "virtio_ring.h"
-#include "virtio_logs.h"
#include "virtio_rxtx.h"
#include "virtio_cvq.h"
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 13/16] common/virtio: move vDPA to common directory
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (11 preceding siblings ...)
2024-12-24 7:37 ` [v1 12/16] common/virtio: common virtio log Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 14/16] common/virtio: support cryptodev in vdev setup Gowrishankar Muthukrishnan
` (6 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Move the vhost-vDPA backend implementation into the common virtio folder, so the net and crypto drivers can share it.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
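Note the trimmed copy of struct virtio_user_dev kept in the common vhost_vdpa.c below: it mirrors only the leading fields of the driver-side definition and relies on the 256-byte union padding to keep offsets identical in both copies. A hypothetical C11 guard (not in the patch) that either side could carry to catch drift:

    #include <assert.h>
    #include <stddef.h>

    /* Holds only while struct virtio_hw fits inside the 256-byte padding;
     * the fields shared between the common and driver-side definitions
     * must sit at the same offsets in both copies. */
    static_assert(sizeof(struct virtio_hw) <= 256,
    	"virtio_hw must fit in the union padding");
    static_assert(offsetof(struct virtio_user_dev, backend_data) == 256,
    	"shared field offsets diverged between common and driver copies");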
drivers/common/virtio/meson.build | 13 +++++++++
drivers/common/virtio/version.map | 9 ++++++
.../virtio/virtio_user/vhost.h | 2 --
.../virtio/virtio_user/vhost_vdpa.c | 29 ++++++++++++++++++-
drivers/crypto/virtio/meson.build | 2 +-
drivers/crypto/virtio/virtio_cryptodev.c | 2 --
drivers/meson.build | 1 +
drivers/net/virtio/meson.build | 1 -
drivers/net/virtio/virtio_ethdev.c | 2 --
drivers/net/virtio/virtio_user/vhost_kernel.c | 4 ++-
drivers/net/virtio/virtio_user/vhost_user.c | 2 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 6 ++--
.../net/virtio/virtio_user/virtio_user_dev.h | 24 ++++++++-------
drivers/net/virtio/virtio_user_ethdev.c | 2 +-
14 files changed, 75 insertions(+), 24 deletions(-)
create mode 100644 drivers/common/virtio/meson.build
create mode 100644 drivers/common/virtio/version.map
rename drivers/{net => common}/virtio/virtio_user/vhost.h (98%)
rename drivers/{net => common}/virtio/virtio_user/vhost_vdpa.c (96%)
diff --git a/drivers/common/virtio/meson.build b/drivers/common/virtio/meson.build
new file mode 100644
index 0000000000..5ea5dc5d57
--- /dev/null
+++ b/drivers/common/virtio/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 Marvell
+
+if is_windows
+ build = false
+ reason = 'not supported on Windows'
+ subdir_done()
+endif
+
+if is_linux
+ sources += files('virtio_user/vhost_vdpa.c')
+ deps += ['bus_vdev']
+endif
diff --git a/drivers/common/virtio/version.map b/drivers/common/virtio/version.map
new file mode 100644
index 0000000000..a1e45cd354
--- /dev/null
+++ b/drivers/common/virtio/version.map
@@ -0,0 +1,9 @@
+INTERNAL {
+ global:
+
+ virtio_ops_vdpa;
+ virtio_logtype_init;
+ virtio_logtype_driver;
+
+ local: *;
+};
diff --git a/drivers/net/virtio/virtio_user/vhost.h b/drivers/common/virtio/virtio_user/vhost.h
similarity index 98%
rename from drivers/net/virtio/virtio_user/vhost.h
rename to drivers/common/virtio/virtio_user/vhost.h
index eee3a4bc47..50b089a5dc 100644
--- a/drivers/net/virtio/virtio_user/vhost.h
+++ b/drivers/common/virtio/virtio_user/vhost.h
@@ -11,9 +11,7 @@
#include <rte_errno.h>
-#include "../virtio.h"
#include "../virtio_logs.h"
-#include "../virtqueue.h"
struct vhost_vring_state {
unsigned int index;
diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/common/virtio/virtio_user/vhost_vdpa.c
similarity index 96%
rename from drivers/net/virtio/virtio_user/vhost_vdpa.c
rename to drivers/common/virtio/virtio_user/vhost_vdpa.c
index 77e2fd62d8..c32cfdeb18 100644
--- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
+++ b/drivers/common/virtio/virtio_user/vhost_vdpa.c
@@ -12,7 +12,8 @@
#include <rte_memory.h>
-#include "../virtio_net_logs.h"
+#include "vhost.h"
+#include "../virtio_logs.h"
struct vhost_vdpa_data {
int vhostfd;
@@ -99,6 +100,29 @@ vhost_vdpa_ioctl(int fd, uint64_t request, void *arg)
return 0;
}
+struct virtio_hw {
+ struct virtqueue **vqs;
+};
+
+struct virtio_user_dev {
+ union {
+ struct virtio_hw hw;
+ uint8_t dummy[256];
+ };
+
+ void *backend_data;
+ uint16_t **notify_area;
+ char path[PATH_MAX];
+ bool hw_cvq;
+ uint16_t max_queue_pairs;
+ uint64_t device_features;
+ bool *qp_enabled;
+};
+
+#define VIRTIO_NET_F_CTRL_VQ 17
+#define VIRTIO_F_IOMMU_PLATFORM 33
+#define VIRTIO_ID_NETWORK 0x01
+
static int
vhost_vdpa_set_owner(struct virtio_user_dev *dev)
{
@@ -714,3 +738,6 @@ struct virtio_user_backend_ops virtio_ops_vdpa = {
.map_notification_area = vhost_vdpa_map_notification_area,
.unmap_notification_area = vhost_vdpa_unmap_notification_area,
};
+
+RTE_LOG_REGISTER_SUFFIX(virtio_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(virtio_logtype_driver, driver, NOTICE);
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index 6c082a3112..a4954a094b 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -9,7 +9,7 @@ endif
includes += include_directories('../../../lib/vhost')
includes += include_directories('../../common/virtio')
-deps += 'bus_pci'
+deps += ['bus_pci', 'common_virtio']
sources = files(
'virtio_cryptodev.c',
'virtio_cvq.c',
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index b31e7ea0cf..159e96f7db 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -1749,8 +1749,6 @@ RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_VIRTIO_PMD, rte_virtio_crypto_driver);
RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv,
rte_virtio_crypto_driver.driver,
cryptodev_virtio_driver_id);
-RTE_LOG_REGISTER_SUFFIX(virtio_logtype_init, init, NOTICE);
RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_session, session, NOTICE);
RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_rx, rx, NOTICE);
RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_tx, tx, NOTICE);
-RTE_LOG_REGISTER_SUFFIX(virtio_logtype_driver, driver, NOTICE);
diff --git a/drivers/meson.build b/drivers/meson.build
index 495e21b54a..2f0d312479 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -17,6 +17,7 @@ subdirs = [
'common/nitrox', # depends on bus.
'common/qat', # depends on bus.
'common/sfc_efx', # depends on bus.
+ 'common/virtio', # depends on bus.
'mempool', # depends on common and bus.
'dma', # depends on common and bus.
'net', # depends on common, bus, mempool
diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
index 6331366712..bc80d45efc 100644
--- a/drivers/net/virtio/meson.build
+++ b/drivers/net/virtio/meson.build
@@ -55,7 +55,6 @@ if is_linux
'virtio_user/vhost_kernel.c',
'virtio_user/vhost_kernel_tap.c',
'virtio_user/vhost_user.c',
- 'virtio_user/vhost_vdpa.c',
'virtio_user/virtio_user_dev.c')
deps += ['bus_vdev', 'common_virtio']
endif
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 491b75ec19..b257c9cfc4 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2712,5 +2712,3 @@ __rte_unused uint8_t is_rx)
return 0;
}
-RTE_LOG_REGISTER_SUFFIX(virtio_logtype_init, init, NOTICE);
-RTE_LOG_REGISTER_SUFFIX(virtio_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c b/drivers/net/virtio/virtio_user/vhost_kernel.c
index e42bb35935..b48a1e058d 100644
--- a/drivers/net/virtio/virtio_user/vhost_kernel.c
+++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
@@ -11,9 +11,11 @@
#include <rte_memory.h>
-#include "vhost.h"
+#include "virtio_user/vhost.h"
+
#include "virtio_user_dev.h"
#include "vhost_kernel_tap.h"
+#include "../virtqueue.h"
struct vhost_kernel_data {
int *vhostfds;
diff --git a/drivers/net/virtio/virtio_user/vhost_user.c b/drivers/net/virtio/virtio_user/vhost_user.c
index c10252506b..3f8ece914a 100644
--- a/drivers/net/virtio/virtio_user/vhost_user.c
+++ b/drivers/net/virtio/virtio_user/vhost_user.c
@@ -16,7 +16,7 @@
#include <rte_string_fns.h>
#include <rte_fbarray.h>
-#include "vhost.h"
+#include "virtio_user/vhost.h"
#include "virtio_user_dev.h"
struct vhost_user_data {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 2997d2bd26..87ebb2cba3 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -20,10 +20,12 @@
#include <rte_malloc.h>
#include <rte_io.h>
-#include "vhost.h"
-#include "virtio.h"
+#include "virtio_user/vhost.h"
+
#include "virtio_user_dev.h"
+#include "../virtqueue.h"
#include "../virtio_ethdev.h"
+#include "../virtio_net_logs.h"
#define VIRTIO_USER_MEM_EVENT_CLB_NAME "virtio_user_mem_event_clb"
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index 66400b3b62..70604d6956 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -25,26 +25,36 @@ struct virtio_user_queue {
};
struct virtio_user_dev {
- struct virtio_hw hw;
+ union {
+ struct virtio_hw hw;
+ uint8_t dummy[256];
+ };
+
+ void *backend_data;
+ uint16_t **notify_area;
+ char path[PATH_MAX];
+ bool hw_cvq;
+ uint16_t max_queue_pairs;
+ uint64_t device_features; /* supported features by device */
+ bool *qp_enabled;
+
enum virtio_user_backend_type backend_type;
bool is_server; /* server or client mode */
int *callfds;
int *kickfds;
int mac_specified;
- uint16_t max_queue_pairs;
+
uint16_t queue_pairs;
uint32_t queue_size;
uint64_t features; /* the negotiated features with driver,
* and will be sync with device
*/
- uint64_t device_features; /* supported features by device */
uint64_t frontend_features; /* enabled frontend features */
uint64_t unsupported_features; /* unsupported features mask */
uint8_t status;
uint16_t net_status;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
- char path[PATH_MAX];
char *ifname;
union {
@@ -54,18 +64,12 @@ struct virtio_user_dev {
} vrings;
struct virtio_user_queue *packed_queues;
- bool *qp_enabled;
struct virtio_user_backend_ops *ops;
pthread_mutex_t mutex;
bool started;
- bool hw_cvq;
struct virtqueue *scvq;
-
- void *backend_data;
-
- uint16_t **notify_area;
};
int virtio_user_dev_set_features(struct virtio_user_dev *dev);
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index fda6634c94..41e78e57fb 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -21,13 +21,13 @@
#include <rte_io.h>
#include "virtio_net_logs.h"
+#include "virtio_user/vhost.h"
#include "virtio_ethdev.h"
#include "virtio.h"
#include "virtqueue.h"
#include "virtio_rxtx.h"
#include "virtio_user/virtio_user_dev.h"
-#include "virtio_user/vhost.h"
#define virtio_user_get_dev(hwp) container_of(hwp, struct virtio_user_dev, hw)
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 14/16] common/virtio: support cryptodev in vdev setup
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (12 preceding siblings ...)
2024-12-24 7:37 ` [v1 13/16] common/virtio: move vDPA to common directory Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 15/16] crypto/virtio: add vhost backend to virtio_user Gowrishankar Muthukrishnan
` (5 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Accept the virtio crypto device ID, in addition to the network device ID, when validating the vhost-vDPA backend during vdev setup.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
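The hunk below widens the device-ID check so a vhost-vDPA device of either class is accepted. The comparisons must be combined with a logical AND, so only IDs matching neither class are rejected (an OR here would be true for every ID and reject all devices). A standalone sketch of the intended validation:

    uint32_t did = (uint32_t)-1;

    if (ioctl(vhostfd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 ||
    		(did != VIRTIO_ID_NETWORK && did != VIRTIO_ID_CRYPTO)) {
    	/* neither a net nor a crypto vDPA device: reject it */
    }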
drivers/common/virtio/virtio_user/vhost_vdpa.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/common/virtio/virtio_user/vhost_vdpa.c b/drivers/common/virtio/virtio_user/vhost_vdpa.c
index c32cfdeb18..07d695e0a4 100644
--- a/drivers/common/virtio/virtio_user/vhost_vdpa.c
+++ b/drivers/common/virtio/virtio_user/vhost_vdpa.c
@@ -122,6 +122,7 @@ struct virtio_user_dev {
#define VIRTIO_NET_F_CTRL_VQ 17
#define VIRTIO_F_IOMMU_PLATFORM 33
#define VIRTIO_ID_NETWORK 0x01
+#define VIRTIO_ID_CRYPTO 20
static int
vhost_vdpa_set_owner(struct virtio_user_dev *dev)
@@ -560,7 +561,7 @@ vhost_vdpa_setup(struct virtio_user_dev *dev)
}
if (ioctl(data->vhostfd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 ||
- did != VIRTIO_ID_NETWORK) {
+ (did != VIRTIO_ID_NETWORK && did != VIRTIO_ID_CRYPTO)) {
PMD_DRV_LOG(ERR, "Invalid vdpa device ID: %u", did);
close(data->vhostfd);
free(data);
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 15/16] crypto/virtio: add vhost backend to virtio_user
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (13 preceding siblings ...)
2024-12-24 7:37 ` [v1 14/16] common/virtio: support cryptodev in vdev setup Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 16/16] test/crypto: test virtio_crypto_user PMD Gowrishankar Muthukrishnan
` (4 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Add a vhost-vDPA backend to the virtio_user crypto PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
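One design note on virtio_user_dev_setup() below: it installs the common virtio_ops_vdpa table and then overwrites several of its members with the crypto-specific callbacks, which mutates a structure shared with the net PMD. A hedged alternative sketch (not what the patch does) that keeps the common table intact:

    /* Hypothetical: start from a private copy of the common ops so the
     * net and crypto PMDs never overwrite each other's callbacks. */
    static struct virtio_user_backend_ops crypto_vdpa_ops;

    static void
    crypto_vdpa_ops_init(void)
    {
    	crypto_vdpa_ops = virtio_ops_vdpa;  /* copy the common vDPA ops */
    	crypto_vdpa_ops.setup = virtio_crypto_ops_vdpa.setup;
    	crypto_vdpa_ops.get_features = virtio_crypto_ops_vdpa.get_features;
    	crypto_vdpa_ops.enable_qp = virtio_crypto_ops_vdpa.enable_qp;
    	/* ...remaining crypto-specific overrides...
    	 * then: dev->ops = &crypto_vdpa_ops; */
    }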
drivers/crypto/virtio/meson.build | 7 +
drivers/crypto/virtio/virtio_cryptodev.c | 57 +-
drivers/crypto/virtio/virtio_cryptodev.h | 3 +
drivers/crypto/virtio/virtio_pci.h | 7 +
drivers/crypto/virtio/virtio_ring.h | 6 -
.../crypto/virtio/virtio_user/vhost_vdpa.c | 310 +++++++
.../virtio/virtio_user/virtio_user_dev.c | 774 ++++++++++++++++++
.../virtio/virtio_user/virtio_user_dev.h | 88 ++
drivers/crypto/virtio/virtio_user_cryptodev.c | 586 +++++++++++++
9 files changed, 1810 insertions(+), 28 deletions(-)
create mode 100644 drivers/crypto/virtio/virtio_user/vhost_vdpa.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.h
create mode 100644 drivers/crypto/virtio/virtio_user_cryptodev.c
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index a4954a094b..a178a61487 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -17,3 +17,10 @@ sources = files(
'virtio_rxtx.c',
'virtqueue.c',
)
+
+if is_linux
+ sources += files('virtio_user_cryptodev.c',
+ 'virtio_user/vhost_vdpa.c',
+ 'virtio_user/virtio_user_dev.c')
+ deps += ['bus_vdev', 'common_virtio']
+endif
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 159e96f7db..e9e65366fe 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -544,24 +544,12 @@ virtio_crypto_init_device(struct rte_cryptodev *cryptodev,
return 0;
}
-/*
- * This function is based on probe() function
- * It returns 0 on success.
- */
-static int
-crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
- struct rte_cryptodev_pmd_init_params *init_params)
+int
+crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
+ struct rte_pci_device *pci_dev)
{
- struct rte_cryptodev *cryptodev;
struct virtio_crypto_hw *hw;
- PMD_INIT_FUNC_TRACE();
-
- cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
- init_params);
- if (cryptodev == NULL)
- return -ENODEV;
-
cryptodev->driver_id = cryptodev_virtio_driver_id;
cryptodev->dev_ops = &virtio_crypto_dev_ops;
@@ -578,16 +566,41 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
hw->dev_id = cryptodev->data->dev_id;
hw->virtio_dev_capabilities = virtio_capabilities;
- VIRTIO_CRYPTO_INIT_LOG_DBG("dev %d vendorID=0x%x deviceID=0x%x",
- cryptodev->data->dev_id, pci_dev->id.vendor_id,
- pci_dev->id.device_id);
+ if (pci_dev) {
+ /* pci device init */
+ VIRTIO_CRYPTO_INIT_LOG_DBG("dev %d vendorID=0x%x deviceID=0x%x",
+ cryptodev->data->dev_id, pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
- /* pci device init */
- if (vtpci_cryptodev_init(pci_dev, hw))
+ if (vtpci_cryptodev_init(pci_dev, hw))
+ return -1;
+ }
+
+ if (virtio_crypto_init_device(cryptodev, features) < 0)
return -1;
- if (virtio_crypto_init_device(cryptodev,
- VIRTIO_CRYPTO_PMD_GUEST_FEATURES) < 0)
+ return 0;
+}
+
+/*
+ * This function is based on probe() function
+ * It returns 0 on success.
+ */
+static int
+crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
+ struct rte_cryptodev_pmd_init_params *init_params)
+{
+ struct rte_cryptodev *cryptodev;
+
+ PMD_INIT_FUNC_TRACE();
+
+ cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
+ init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ if (crypto_virtio_dev_init(cryptodev, VIRTIO_CRYPTO_PMD_GUEST_FEATURES,
+ pci_dev) < 0)
return -1;
rte_cryptodev_pmd_probing_finish(cryptodev);
diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
index b4bdd9800b..95a1e09dca 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.h
+++ b/drivers/crypto/virtio/virtio_cryptodev.h
@@ -74,4 +74,7 @@ uint16_t virtio_crypto_pkt_rx_burst(void *tx_queue,
struct rte_crypto_op **tx_pkts,
uint16_t nb_pkts);
+int crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
+ struct rte_pci_device *pci_dev);
+
#endif /* _VIRTIO_CRYPTODEV_H_ */
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 79945cb88e..c75777e005 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -20,6 +20,9 @@ struct virtqueue;
#define VIRTIO_CRYPTO_PCI_VENDORID 0x1AF4
#define VIRTIO_CRYPTO_PCI_DEVICEID 0x1054
+/* VirtIO device IDs. */
+#define VIRTIO_ID_CRYPTO 20
+
/* VirtIO ABI version, this must match exactly. */
#define VIRTIO_PCI_ABI_VERSION 0
@@ -56,8 +59,12 @@ struct virtqueue;
#define VIRTIO_CONFIG_STATUS_DRIVER 0x02
#define VIRTIO_CONFIG_STATUS_DRIVER_OK 0x04
#define VIRTIO_CONFIG_STATUS_FEATURES_OK 0x08
+#define VIRTIO_CONFIG_STATUS_DEV_NEED_RESET 0x40
#define VIRTIO_CONFIG_STATUS_FAILED 0x80
+/* The alignment to use between consumer and producer parts of vring. */
+#define VIRTIO_VRING_ALIGN 4096
+
/*
* Each virtqueue indirect descriptor list must be physically contiguous.
* To allow us to malloc(9) each list individually, limit the number
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index c74d1172b7..4b418f6e60 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -181,12 +181,6 @@ vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
sizeof(struct vring_packed_desc_event)), align);
}
-static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
-{
- vring_init_split(vr, p, 0, align, num);
-}
-
/*
* The following is used with VIRTIO_RING_F_EVENT_IDX.
* Assuming a given event_idx value from the other size, if we have
diff --git a/drivers/crypto/virtio/virtio_user/vhost_vdpa.c b/drivers/crypto/virtio/virtio_user/vhost_vdpa.c
new file mode 100644
index 0000000000..3fedade775
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/vhost_vdpa.c
@@ -0,0 +1,310 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Marvell
+ */
+
+#include <sys/ioctl.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#include <rte_memory.h>
+
+#include "virtio_user/vhost.h"
+
+#include "virtio_user_dev.h"
+#include "../virtio_pci.h"
+
+struct vhost_vdpa_data {
+ int vhostfd;
+ uint64_t protocol_features;
+};
+
+#define VHOST_VDPA_SUPPORTED_BACKEND_FEATURES \
+ (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 | \
+ 1ULL << VHOST_BACKEND_F_IOTLB_BATCH)
+
+/* vhost kernel & vdpa ioctls */
+#define VHOST_VIRTIO 0xAF
+#define VHOST_GET_FEATURES _IOR(VHOST_VIRTIO, 0x00, __u64)
+#define VHOST_SET_FEATURES _IOW(VHOST_VIRTIO, 0x00, __u64)
+#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)
+#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)
+#define VHOST_SET_LOG_BASE _IOW(VHOST_VIRTIO, 0x04, __u64)
+#define VHOST_SET_LOG_FD _IOW(VHOST_VIRTIO, 0x07, int)
+#define VHOST_SET_VRING_NUM _IOW(VHOST_VIRTIO, 0x10, struct vhost_vring_state)
+#define VHOST_SET_VRING_ADDR _IOW(VHOST_VIRTIO, 0x11, struct vhost_vring_addr)
+#define VHOST_SET_VRING_BASE _IOW(VHOST_VIRTIO, 0x12, struct vhost_vring_state)
+#define VHOST_GET_VRING_BASE _IOWR(VHOST_VIRTIO, 0x12, struct vhost_vring_state)
+#define VHOST_SET_VRING_KICK _IOW(VHOST_VIRTIO, 0x20, struct vhost_vring_file)
+#define VHOST_SET_VRING_CALL _IOW(VHOST_VIRTIO, 0x21, struct vhost_vring_file)
+#define VHOST_SET_VRING_ERR _IOW(VHOST_VIRTIO, 0x22, struct vhost_vring_file)
+#define VHOST_NET_SET_BACKEND _IOW(VHOST_VIRTIO, 0x30, struct vhost_vring_file)
+#define VHOST_VDPA_GET_DEVICE_ID _IOR(VHOST_VIRTIO, 0x70, __u32)
+#define VHOST_VDPA_GET_STATUS _IOR(VHOST_VIRTIO, 0x71, __u8)
+#define VHOST_VDPA_SET_STATUS _IOW(VHOST_VIRTIO, 0x72, __u8)
+#define VHOST_VDPA_GET_CONFIG _IOR(VHOST_VIRTIO, 0x73, struct vhost_vdpa_config)
+#define VHOST_VDPA_SET_CONFIG _IOW(VHOST_VIRTIO, 0x74, struct vhost_vdpa_config)
+#define VHOST_VDPA_SET_VRING_ENABLE _IOW(VHOST_VIRTIO, 0x75, struct vhost_vring_state)
+#define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
+#define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
+
+/* no alignment requirement */
+struct vhost_iotlb_msg {
+ uint64_t iova;
+ uint64_t size;
+ uint64_t uaddr;
+#define VHOST_ACCESS_RO 0x1
+#define VHOST_ACCESS_WO 0x2
+#define VHOST_ACCESS_RW 0x3
+ uint8_t perm;
+#define VHOST_IOTLB_MISS 1
+#define VHOST_IOTLB_UPDATE 2
+#define VHOST_IOTLB_INVALIDATE 3
+#define VHOST_IOTLB_ACCESS_FAIL 4
+#define VHOST_IOTLB_BATCH_BEGIN 5
+#define VHOST_IOTLB_BATCH_END 6
+ uint8_t type;
+};
+
+#define VHOST_IOTLB_MSG_V2 0x2
+
+struct vhost_vdpa_config {
+ uint32_t off;
+ uint32_t len;
+ uint8_t buf[];
+};
+
+struct vhost_msg {
+ uint32_t type;
+ uint32_t reserved;
+ union {
+ struct vhost_iotlb_msg iotlb;
+ uint8_t padding[64];
+ };
+};
+
+static int
+vhost_vdpa_ioctl(int fd, uint64_t request, void *arg)
+{
+ int ret;
+
+ ret = ioctl(fd, request, arg);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Vhost-vDPA ioctl %"PRIu64" failed (%s)",
+ request, strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_get_protocol_features(struct virtio_user_dev *dev, uint64_t *features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_BACKEND_FEATURES, features);
+}
+
+static int
+vhost_vdpa_set_protocol_features(struct virtio_user_dev *dev, uint64_t features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_BACKEND_FEATURES, &features);
+}
+
+static int
+vhost_vdpa_get_features(struct virtio_user_dev *dev, uint64_t *features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ int ret;
+
+ ret = vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_FEATURES, features);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to get features");
+ return -1;
+ }
+
+ /* Negotiated vDPA backend features */
+ ret = vhost_vdpa_get_protocol_features(dev, &data->protocol_features);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to get backend features");
+ return -1;
+ }
+
+ data->protocol_features &= VHOST_VDPA_SUPPORTED_BACKEND_FEATURES;
+
+ ret = vhost_vdpa_set_protocol_features(dev, data->protocol_features);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to set backend features");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_set_vring_enable(struct virtio_user_dev *dev, struct vhost_vring_state *state)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_VDPA_SET_VRING_ENABLE, state);
+}
+
+/**
+ * Set up environment to talk with a vhost vdpa backend.
+ *
+ * @return
+ * - (-1) if setup fails;
+ * - (>=0) on success.
+ */
+static int
+vhost_vdpa_setup(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data;
+ uint32_t did = (uint32_t)-1;
+
+ data = malloc(sizeof(*data));
+ if (!data) {
+ PMD_DRV_LOG(ERR, "(%s) Faidle to allocate backend data", dev->path);
+ return -1;
+ }
+
+ data->vhostfd = open(dev->path, O_RDWR);
+ if (data->vhostfd < 0) {
+ PMD_DRV_LOG(ERR, "Failed to open %s: %s",
+ dev->path, strerror(errno));
+ free(data);
+ return -1;
+ }
+
+ if (ioctl(data->vhostfd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 ||
+ did != VIRTIO_ID_CRYPTO) {
+ PMD_DRV_LOG(ERR, "Invalid vdpa device ID: %u", did);
+ close(data->vhostfd);
+ free(data);
+ return -1;
+ }
+
+ dev->backend_data = data;
+
+ return 0;
+}
+
+static int
+vhost_vdpa_cvq_enable(struct virtio_user_dev *dev, int enable)
+{
+ struct vhost_vring_state state = {
+ .index = dev->max_queue_pairs,
+ .num = enable,
+ };
+
+ return vhost_vdpa_set_vring_enable(dev, &state);
+}
+
+static int
+vhost_vdpa_enable_queue_pair(struct virtio_user_dev *dev,
+ uint16_t pair_idx,
+ int enable)
+{
+ struct vhost_vring_state state = {
+ .index = pair_idx,
+ .num = enable,
+ };
+
+ if (dev->qp_enabled[pair_idx] == enable)
+ return 0;
+
+ if (vhost_vdpa_set_vring_enable(dev, &state))
+ return -1;
+
+ dev->qp_enabled[pair_idx] = enable;
+ return 0;
+}
+
+static int
+vhost_vdpa_update_link_state(struct virtio_user_dev *dev)
+{
+ dev->crypto_status = VIRTIO_CRYPTO_S_HW_READY;
+ return 0;
+}
+
+static int
+vhost_vdpa_get_nr_vrings(struct virtio_user_dev *dev)
+{
+ int nr_vrings = dev->max_queue_pairs;
+
+ return nr_vrings;
+}
+
+static int
+vhost_vdpa_unmap_notification_area(struct virtio_user_dev *dev)
+{
+ int i, nr_vrings;
+
+ nr_vrings = vhost_vdpa_get_nr_vrings(dev);
+
+ /* The control queue was mapped as one extra vring; unmap it too. */
+ nr_vrings++;
+
+ for (i = 0; i < nr_vrings; i++) {
+ if (dev->notify_area[i])
+ munmap(dev->notify_area[i], getpagesize());
+ }
+ free(dev->notify_area);
+ dev->notify_area = NULL;
+
+ return 0;
+}
+
+static int
+vhost_vdpa_map_notification_area(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ int nr_vrings, i, page_size = getpagesize();
+ uint16_t **notify_area;
+
+ nr_vrings = vhost_vdpa_get_nr_vrings(dev);
+
+ /* CQ is another vring */
+ nr_vrings++;
+
+ notify_area = malloc(nr_vrings * sizeof(*notify_area));
+ if (!notify_area) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to allocate notify area array", dev->path);
+ return -1;
+ }
+
+ for (i = 0; i < nr_vrings; i++) {
+ notify_area[i] = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED | MAP_FILE,
+ data->vhostfd, i * page_size);
+ if (notify_area[i] == MAP_FAILED) {
+ PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d",
+ dev->path, i);
+ i--;
+ goto map_err;
+ }
+ }
+ dev->notify_area = notify_area;
+
+ return 0;
+
+map_err:
+ for (; i >= 0; i--)
+ munmap(notify_area[i], page_size);
+ free(notify_area);
+
+ return -1;
+}
+
+struct virtio_user_backend_ops virtio_crypto_ops_vdpa = {
+ .setup = vhost_vdpa_setup,
+ .get_features = vhost_vdpa_get_features,
+ .cvq_enable = vhost_vdpa_cvq_enable,
+ .enable_qp = vhost_vdpa_enable_queue_pair,
+ .update_link_state = vhost_vdpa_update_link_state,
+ .map_notification_area = vhost_vdpa_map_notification_area,
+ .unmap_notification_area = vhost_vdpa_unmap_notification_area,
+};
diff --git a/drivers/crypto/virtio/virtio_user/virtio_user_dev.c b/drivers/crypto/virtio/virtio_user/virtio_user_dev.c
new file mode 100644
index 0000000000..fed740073d
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/virtio_user_dev.c
@@ -0,0 +1,774 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Marvell.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <string.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <sys/eventfd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <pthread.h>
+
+#include <rte_alarm.h>
+#include <rte_string_fns.h>
+#include <rte_eal_memconfig.h>
+#include <rte_malloc.h>
+#include <rte_io.h>
+
+#include "virtio_user/vhost.h"
+#include "virtio_logs.h"
+
+#include "cryptodev_pmd.h"
+#include "virtio_crypto.h"
+#include "virtio_cvq.h"
+#include "virtio_user_dev.h"
+#include "virtqueue.h"
+
+#define VIRTIO_USER_MEM_EVENT_CLB_NAME "virtio_user_mem_event_clb"
+
+const char * const crypto_virtio_user_backend_strings[] = {
+ [VIRTIO_USER_BACKEND_UNKNOWN] = "VIRTIO_USER_BACKEND_UNKNOWN",
+ [VIRTIO_USER_BACKEND_VHOST_VDPA] = "VHOST_VDPA",
+};
+
+static int
+virtio_user_uninit_notify_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ if (dev->kickfds[queue_sel] >= 0) {
+ close(dev->kickfds[queue_sel]);
+ dev->kickfds[queue_sel] = -1;
+ }
+
+ if (dev->callfds[queue_sel] >= 0) {
+ close(dev->callfds[queue_sel]);
+ dev->callfds[queue_sel] = -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_init_notify_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ /* An invalid fd would do here, but some backends use kickfd and
+ * callfd as the criteria to judge whether the device is alive,
+ * so use real eventfds.
+ */
+ dev->callfds[queue_sel] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
+ if (dev->callfds[queue_sel] < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to setup callfd for queue %u: %s",
+ dev->path, queue_sel, strerror(errno));
+ return -1;
+ }
+ dev->kickfds[queue_sel] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
+ if (dev->kickfds[queue_sel] < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to setup kickfd for queue %u: %s",
+ dev->path, queue_sel, strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_destroy_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ struct vhost_vring_state state;
+ int ret;
+
+ state.index = queue_sel;
+ ret = dev->ops->get_vring_base(dev, &state);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to destroy queue %u", dev->path, queue_sel);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_create_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ /* Of all per-virtqueue messages, make sure VHOST_SET_VRING_CALL comes
+ * first, because vhost depends on this message to allocate the
+ * virtqueue pair.
+ */
+ struct vhost_vring_file file;
+ int ret;
+
+ file.index = queue_sel;
+ file.fd = dev->callfds[queue_sel];
+ ret = dev->ops->set_vring_call(dev, &file);
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to create queue %u", dev->path, queue_sel);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ int ret;
+ struct vhost_vring_file file;
+ struct vhost_vring_state state;
+ struct vring *vring = &dev->vrings.split[queue_sel];
+ struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
+ uint64_t desc_addr, avail_addr, used_addr;
+ struct vhost_vring_addr addr = {
+ .index = queue_sel,
+ .log_guest_addr = 0,
+ .flags = 0, /* disable log */
+ };
+
+ if (queue_sel == dev->max_queue_pairs) {
+ if (!dev->scvq) {
+ PMD_INIT_LOG(ERR, "(%s) Shadow control queue expected but missing",
+ dev->path);
+ goto err;
+ }
+
+ /* Use shadow control queue information */
+ vring = &dev->scvq->vq_split.ring;
+ pq_vring = &dev->scvq->vq_packed.ring;
+ }
+
+ if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
+ desc_addr = pq_vring->desc_iova;
+ avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
+ } else {
+ desc_addr = vring->desc_iova;
+ avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
+ used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
+ }
+
+ state.index = queue_sel;
+ state.num = vring->num;
+ ret = dev->ops->set_vring_num(dev, &state);
+ if (ret < 0)
+ goto err;
+
+ state.index = queue_sel;
+ state.num = 0; /* no reservation */
+ if (dev->features & (1ULL << VIRTIO_F_RING_PACKED))
+ state.num |= (1 << 15);
+ ret = dev->ops->set_vring_base(dev, &state);
+ if (ret < 0)
+ goto err;
+
+ ret = dev->ops->set_vring_addr(dev, &addr);
+ if (ret < 0)
+ goto err;
+
+ /* Of all per-virtqueue messages, make sure VHOST_USER_SET_VRING_KICK
+ * comes last, because vhost depends on this message to judge whether
+ * virtio is ready.
+ */
+ file.index = queue_sel;
+ file.fd = dev->kickfds[queue_sel];
+ ret = dev->ops->set_vring_kick(dev, &file);
+ if (ret < 0)
+ goto err;
+
+ return 0;
+err:
+ PMD_INIT_LOG(ERR, "(%s) Failed to kick queue %u", dev->path, queue_sel);
+
+ return -1;
+}
+
+static int
+virtio_user_foreach_queue(struct virtio_user_dev *dev,
+ int (*fn)(struct virtio_user_dev *, uint32_t))
+{
+ uint32_t i, nr_vq;
+
+ nr_vq = dev->max_queue_pairs;
+
+ for (i = 0; i < nr_vq; i++)
+ if (fn(dev, i) < 0)
+ return -1;
+
+ return 0;
+}
+
+int
+crypto_virtio_user_dev_set_features(struct virtio_user_dev *dev)
+{
+ uint64_t features;
+ int ret = -1;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ /* Step 0: tell vhost to create queues */
+ if (virtio_user_foreach_queue(dev, virtio_user_create_queue) < 0)
+ goto error;
+
+ features = dev->features;
+
+ ret = dev->ops->set_features(dev, features);
+ if (ret < 0)
+ goto error;
+ PMD_DRV_LOG(INFO, "(%s) set features: 0x%" PRIx64, dev->path, features);
+error:
+ pthread_mutex_unlock(&dev->mutex);
+
+ return ret;
+}
+
+int
+crypto_virtio_user_start_device(struct virtio_user_dev *dev)
+{
+ int ret;
+
+ /*
+ * XXX workaround!
+ *
+ * We need to make sure that the locks will be
+ * taken in the correct order to avoid deadlocks.
+ *
+ * Before releasing this lock, this thread should
+ * not trigger any memory hotplug events.
+ *
+ * This is a temporary workaround, and should be
+ * replaced when we get proper support from the
+ * memory subsystem in the future.
+ */
+ rte_mcfg_mem_read_lock();
+ pthread_mutex_lock(&dev->mutex);
+
+ /* Step 2: share memory regions */
+ ret = dev->ops->set_memory_table(dev);
+ if (ret < 0)
+ goto error;
+
+ /* Step 3: kick queues */
+ ret = virtio_user_foreach_queue(dev, virtio_user_kick_queue);
+ if (ret < 0)
+ goto error;
+
+ ret = virtio_user_kick_queue(dev, dev->max_queue_pairs);
+ if (ret < 0)
+ goto error;
+
+ /* Step 4: enable queues */
+ for (int i = 0; i < dev->max_queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 1);
+ if (ret < 0)
+ goto error;
+ }
+
+ dev->started = true;
+
+ pthread_mutex_unlock(&dev->mutex);
+ rte_mcfg_mem_read_unlock();
+
+ return 0;
+error:
+ pthread_mutex_unlock(&dev->mutex);
+ rte_mcfg_mem_read_unlock();
+
+ PMD_INIT_LOG(ERR, "(%s) Failed to start device", dev->path);
+
+ return -1;
+}
+
+int crypto_virtio_user_stop_device(struct virtio_user_dev *dev)
+{
+ uint32_t i;
+ int ret;
+
+ pthread_mutex_lock(&dev->mutex);
+ if (!dev->started)
+ goto out;
+
+ for (i = 0; i < dev->max_queue_pairs; ++i) {
+ ret = dev->ops->enable_qp(dev, i, 0);
+ if (ret < 0)
+ goto err;
+ }
+
+ if (dev->scvq) {
+ ret = dev->ops->cvq_enable(dev, 0);
+ if (ret < 0)
+ goto err;
+ }
+
+ /* Stop the backend. */
+ if (virtio_user_foreach_queue(dev, virtio_user_destroy_queue) < 0)
+ goto err;
+
+ dev->started = false;
+
+out:
+ pthread_mutex_unlock(&dev->mutex);
+
+ return 0;
+err:
+ pthread_mutex_unlock(&dev->mutex);
+
+ PMD_INIT_LOG(ERR, "(%s) Failed to stop device", dev->path);
+
+ return -1;
+}
+
+static int
+virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_max_qp)
+{
+ int ret;
+
+ if (!dev->ops->get_config) {
+ dev->max_queue_pairs = user_max_qp;
+ return 0;
+ }
+
+ ret = dev->ops->get_config(dev, (uint8_t *)&dev->max_queue_pairs,
+ offsetof(struct virtio_crypto_config, max_dataqueues),
+ sizeof(uint16_t));
+ if (ret) {
+ /*
+ * We need to know the max queue pair count from the device so that
+ * the control queue gets the right index.
+ */
+ dev->max_queue_pairs = 1;
+ PMD_DRV_LOG(ERR, "(%s) Failed to get max queue pairs from device", dev->path);
+
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_dev_init_cipher_services(struct virtio_user_dev *dev)
+{
+ struct virtio_crypto_config config;
+ int ret;
+
+ dev->crypto_services = RTE_BIT32(VIRTIO_CRYPTO_SERVICE_CIPHER);
+ dev->cipher_algo = 0;
+ dev->auth_algo = 0;
+ dev->akcipher_algo = 0;
+
+ if (!dev->ops->get_config)
+ return 0;
+
+ ret = dev->ops->get_config(dev, (uint8_t *)&config, 0, sizeof(config));
+ if (ret) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to get crypto config from device", dev->path);
+ return ret;
+ }
+
+ dev->crypto_services = config.crypto_services;
+ dev->cipher_algo = ((uint64_t)config.cipher_algo_h << 32) |
+ config.cipher_algo_l;
+ dev->hash_algo = config.hash_algo;
+ dev->auth_algo = ((uint64_t)config.mac_algo_h << 32) |
+ config.mac_algo_l;
+ dev->aead_algo = config.aead_algo;
+ dev->akcipher_algo = config.akcipher_algo;
+ return 0;
+}
+
+static int
+virtio_user_dev_init_notify(struct virtio_user_dev *dev)
+{
+ if (virtio_user_foreach_queue(dev, virtio_user_init_notify_queue) < 0)
+ goto err;
+
+ if (dev->device_features & (1ULL << VIRTIO_F_NOTIFICATION_DATA))
+ if (dev->ops->map_notification_area &&
+ dev->ops->map_notification_area(dev))
+ goto err;
+
+ return 0;
+err:
+ virtio_user_foreach_queue(dev, virtio_user_uninit_notify_queue);
+
+ return -1;
+}
+
+static void
+virtio_user_dev_uninit_notify(struct virtio_user_dev *dev)
+{
+ virtio_user_foreach_queue(dev, virtio_user_uninit_notify_queue);
+
+ if (dev->ops->unmap_notification_area && dev->notify_area)
+ dev->ops->unmap_notification_area(dev);
+}
+
+static void
+virtio_user_mem_event_cb(enum rte_mem_event type __rte_unused,
+ const void *addr,
+ size_t len __rte_unused,
+ void *arg)
+{
+ struct virtio_user_dev *dev = arg;
+ struct rte_memseg_list *msl;
+ uint16_t i;
+ int ret = 0;
+
+ /* ignore externally allocated memory */
+ msl = rte_mem_virt2memseg_list(addr);
+ if (msl->external)
+ return;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ if (dev->started == false)
+ goto exit;
+
+ /* Step 1: pause the active queues */
+ for (i = 0; i < dev->queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 0);
+ if (ret < 0)
+ goto exit;
+ }
+
+ /* Step 2: update memory regions */
+ ret = dev->ops->set_memory_table(dev);
+ if (ret < 0)
+ goto exit;
+
+ /* Step 3: resume the active queues */
+ for (i = 0; i < dev->queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 1);
+ if (ret < 0)
+ goto exit;
+ }
+
+exit:
+ pthread_mutex_unlock(&dev->mutex);
+
+ if (ret < 0)
+ PMD_DRV_LOG(ERR, "(%s) Failed to update memory table", dev->path);
+}
+
+static int
+virtio_user_dev_setup(struct virtio_user_dev *dev)
+{
+ if (dev->is_server) {
+ if (dev->backend_type != VIRTIO_USER_BACKEND_VHOST_USER) {
+ PMD_DRV_LOG(ERR, "Server mode only supports vhost-user!");
+ return -1;
+ }
+ }
+
+ switch (dev->backend_type) {
+ case VIRTIO_USER_BACKEND_VHOST_VDPA:
+ dev->ops = &virtio_ops_vdpa;
+ dev->ops->setup = virtio_crypto_ops_vdpa.setup;
+ dev->ops->get_features = virtio_crypto_ops_vdpa.get_features;
+ dev->ops->cvq_enable = virtio_crypto_ops_vdpa.cvq_enable;
+ dev->ops->enable_qp = virtio_crypto_ops_vdpa.enable_qp;
+ dev->ops->update_link_state = virtio_crypto_ops_vdpa.update_link_state;
+ dev->ops->map_notification_area = virtio_crypto_ops_vdpa.map_notification_area;
+ dev->ops->unmap_notification_area = virtio_crypto_ops_vdpa.unmap_notification_area;
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "(%s) Unknown backend type", dev->path);
+ return -1;
+ }
+
+ if (dev->ops->setup(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to setup backend", dev->path);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_alloc_vrings(struct virtio_user_dev *dev)
+{
+ int i, size, nr_vrings;
+ bool packed_ring = !!(dev->device_features & (1ull << VIRTIO_F_RING_PACKED));
+
+ nr_vrings = dev->max_queue_pairs + 1;
+
+ dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
+ if (!dev->callfds) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
+ return -1;
+ }
+
+ dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
+ if (!dev->kickfds) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
+ goto free_callfds;
+ }
+
+ for (i = 0; i < nr_vrings; i++) {
+ dev->callfds[i] = -1;
+ dev->kickfds[i] = -1;
+ }
+
+ if (packed_ring)
+ size = sizeof(*dev->vrings.packed);
+ else
+ size = sizeof(*dev->vrings.split);
+ dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
+ if (!dev->vrings.ptr) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
+ goto free_kickfds;
+ }
+
+ if (packed_ring) {
+ dev->packed_queues = rte_zmalloc("virtio_user_dev",
+ nr_vrings * sizeof(*dev->packed_queues), 0);
+ if (!dev->packed_queues) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc packed queues metadata",
+ dev->path);
+ goto free_vrings;
+ }
+ }
+
+ dev->qp_enabled = rte_zmalloc("virtio_user_dev",
+ nr_vrings * sizeof(*dev->qp_enabled), 0);
+ if (!dev->qp_enabled) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc QP enable states", dev->path);
+ goto free_packed_queues;
+ }
+
+ return 0;
+
+free_packed_queues:
+ rte_free(dev->packed_queues);
+ dev->packed_queues = NULL;
+free_vrings:
+ rte_free(dev->vrings.ptr);
+ dev->vrings.ptr = NULL;
+free_kickfds:
+ rte_free(dev->kickfds);
+ dev->kickfds = NULL;
+free_callfds:
+ rte_free(dev->callfds);
+ dev->callfds = NULL;
+
+ return -1;
+}
+
+static void
+virtio_user_free_vrings(struct virtio_user_dev *dev)
+{
+ rte_free(dev->qp_enabled);
+ dev->qp_enabled = NULL;
+ rte_free(dev->packed_queues);
+ dev->packed_queues = NULL;
+ rte_free(dev->vrings.ptr);
+ dev->vrings.ptr = NULL;
+ rte_free(dev->kickfds);
+ dev->kickfds = NULL;
+ rte_free(dev->callfds);
+ dev->callfds = NULL;
+}
+
+#define VIRTIO_USER_SUPPORTED_FEATURES \
+ (1ULL << VIRTIO_CRYPTO_SERVICE_CIPHER | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_HASH | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_AKCIPHER | \
+ 1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
+
+int
+crypto_virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
+ int queue_size, int server)
+{
+ uint64_t backend_features;
+
+ pthread_mutex_init(&dev->mutex, NULL);
+ strlcpy(dev->path, path, PATH_MAX);
+
+ dev->started = 0;
+ dev->queue_pairs = 1; /* mq disabled by default */
+ dev->max_queue_pairs = queues; /* user-requested value until the device reports its maximum */
+ dev->queue_size = queue_size;
+ dev->is_server = server;
+ dev->frontend_features = 0;
+ dev->unsupported_features = 0;
+ dev->backend_type = VIRTIO_USER_BACKEND_VHOST_VDPA;
+ dev->hw.modern = 1;
+
+ if (virtio_user_dev_setup(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) backend set up fails", dev->path);
+ return -1;
+ }
+
+ if (dev->ops->set_owner(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to set backend owner", dev->path);
+ goto destroy;
+ }
+
+ if (dev->ops->get_backend_features(&backend_features) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get backend features", dev->path);
+ goto destroy;
+ }
+
+ dev->unsupported_features = ~(VIRTIO_USER_SUPPORTED_FEATURES | backend_features);
+
+ if (dev->ops->get_features(dev, &dev->device_features) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get device features", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_max_queue_pairs(dev, queues)) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get max queue pairs", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_cipher_services(dev)) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get cipher services", dev->path);
+ goto destroy;
+ }
+
+ dev->frontend_features &= ~dev->unsupported_features;
+ dev->device_features &= ~dev->unsupported_features;
+
+ if (virtio_user_alloc_vrings(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to allocate vring metadata", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_notify(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
+ goto free_vrings;
+ }
+
+ if (rte_mem_event_callback_register(VIRTIO_USER_MEM_EVENT_CLB_NAME,
+ virtio_user_mem_event_cb, dev)) {
+ if (rte_errno != ENOTSUP) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to register mem event callback",
+ dev->path);
+ goto notify_uninit;
+ }
+ }
+
+ return 0;
+
+notify_uninit:
+ virtio_user_dev_uninit_notify(dev);
+free_vrings:
+ virtio_user_free_vrings(dev);
+destroy:
+ dev->ops->destroy(dev);
+
+ return -1;
+}
+
+void
+crypto_virtio_user_dev_uninit(struct virtio_user_dev *dev)
+{
+ crypto_virtio_user_stop_device(dev);
+
+ rte_mem_event_callback_unregister(VIRTIO_USER_MEM_EVENT_CLB_NAME, dev);
+
+ virtio_user_dev_uninit_notify(dev);
+
+ virtio_user_free_vrings(dev);
+
+ if (dev->is_server)
+ unlink(dev->path);
+
+ dev->ops->destroy(dev);
+}
+
+#define CVQ_MAX_DATA_DESCS 32
+
+static inline void *
+virtio_user_iova2virt(struct virtio_user_dev *dev __rte_unused, rte_iova_t iova)
+{
+ if (rte_eal_iova_mode() == RTE_IOVA_VA)
+ return (void *)(uintptr_t)iova;
+ else
+ return rte_mem_iova2virt(iova);
+}
+
+static inline int
+desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
+{
+ uint16_t flags = rte_atomic_load_explicit(&desc->flags, rte_memory_order_acquire);
+
+ return wrap_counter == !!(flags & VRING_PACKED_DESC_F_AVAIL) &&
+ wrap_counter != !!(flags & VRING_PACKED_DESC_F_USED);
+}
+
+int
+crypto_virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status)
+{
+ int ret;
+
+ pthread_mutex_lock(&dev->mutex);
+ dev->status = status;
+ ret = dev->ops->set_status(dev, status);
+ if (ret && ret != -ENOTSUP)
+ PMD_INIT_LOG(ERR, "(%s) Failed to set backend status", dev->path);
+
+ pthread_mutex_unlock(&dev->mutex);
+ return ret;
+}
+
+int
+crypto_virtio_user_dev_update_status(struct virtio_user_dev *dev)
+{
+ int ret;
+ uint8_t status;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ ret = dev->ops->get_status(dev, &status);
+ if (!ret) {
+ dev->status = status;
+ PMD_INIT_LOG(DEBUG, "Updated Device Status(0x%08x):"
+ "\t-RESET: %u "
+ "\t-ACKNOWLEDGE: %u "
+ "\t-DRIVER: %u "
+ "\t-DRIVER_OK: %u "
+ "\t-FEATURES_OK: %u "
+ "\t-DEVICE_NEED_RESET: %u "
+ "\t-FAILED: %u",
+ dev->status,
+ (dev->status == VIRTIO_CONFIG_STATUS_RESET),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_ACK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DRIVER),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DRIVER_OK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_FEATURES_OK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DEV_NEED_RESET),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_FAILED));
+ } else if (ret != -ENOTSUP) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get backend status", dev->path);
+ }
+
+ pthread_mutex_unlock(&dev->mutex);
+ return ret;
+}
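
For reference when reading that debug line, the bits it decodes are the standard
virtio device status values (an illustrative summary using the names from DPDK's
virtio_pci.h; values per the virtio spec, not part of this patch):

    #define VIRTIO_CONFIG_STATUS_RESET           0x00 /* no bits set */
    #define VIRTIO_CONFIG_STATUS_ACK             0x01 /* guest found the device */
    #define VIRTIO_CONFIG_STATUS_DRIVER          0x02 /* guest knows how to drive it */
    #define VIRTIO_CONFIG_STATUS_DRIVER_OK       0x04 /* driver is ready */
    #define VIRTIO_CONFIG_STATUS_FEATURES_OK     0x08 /* feature negotiation complete */
    #define VIRTIO_CONFIG_STATUS_DEV_NEED_RESET  0x40 /* device hit an unrecoverable error */
    #define VIRTIO_CONFIG_STATUS_FAILED          0x80 /* driver gave up on the device */
    /* e.g. a fully started device logs 0x0f = ACK | DRIVER | DRIVER_OK | FEATURES_OK */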
+
+int
+crypto_virtio_user_dev_update_link_state(struct virtio_user_dev *dev)
+{
+ if (dev->ops->update_link_state)
+ return dev->ops->update_link_state(dev);
+
+ return 0;
+}
diff --git a/drivers/crypto/virtio/virtio_user/virtio_user_dev.h b/drivers/crypto/virtio/virtio_user/virtio_user_dev.h
new file mode 100644
index 0000000000..2a0052b3ca
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/virtio_user_dev.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Marvell.
+ */
+
+#ifndef _VIRTIO_USER_DEV_H
+#define _VIRTIO_USER_DEV_H
+
+#include <limits.h>
+#include <stdbool.h>
+
+#include "../virtio_pci.h"
+#include "../virtio_ring.h"
+
+extern struct virtio_user_backend_ops virtio_crypto_ops_vdpa;
+
+enum virtio_user_backend_type {
+ VIRTIO_USER_BACKEND_UNKNOWN,
+ VIRTIO_USER_BACKEND_VHOST_USER,
+ VIRTIO_USER_BACKEND_VHOST_VDPA,
+};
+
+struct virtio_user_queue {
+ uint16_t used_idx;
+ bool avail_wrap_counter;
+ bool used_wrap_counter;
+};
+
+struct virtio_user_dev {
+ union {
+ struct virtio_crypto_hw hw;
+ uint8_t dummy[256];
+ };
+
+ void *backend_data;
+ uint16_t **notify_area;
+ char path[PATH_MAX];
+ bool hw_cvq;
+ uint16_t max_queue_pairs;
+ uint64_t device_features; /* supported features by device */
+ bool *qp_enabled;
+
+ enum virtio_user_backend_type backend_type;
+ bool is_server; /* server or client mode */
+
+ int *callfds;
+ int *kickfds;
+ uint16_t queue_pairs;
+ uint32_t queue_size;
+ uint64_t features; /* the negotiated features with driver,
+ * and will be sync with device
+ */
+ uint64_t frontend_features; /* enabled frontend features */
+ uint64_t unsupported_features; /* unsupported features mask */
+ uint8_t status;
+ uint32_t crypto_status;
+ uint32_t crypto_services;
+ uint64_t cipher_algo;
+ uint32_t hash_algo;
+ uint64_t auth_algo;
+ uint32_t aead_algo;
+ uint32_t akcipher_algo;
+
+ union {
+ void *ptr;
+ struct vring *split;
+ struct vring_packed *packed;
+ } vrings;
+
+ struct virtio_user_queue *packed_queues;
+
+ struct virtio_user_backend_ops *ops;
+ pthread_mutex_t mutex;
+ bool started;
+
+ struct virtqueue *scvq;
+};
+
+int crypto_virtio_user_dev_set_features(struct virtio_user_dev *dev);
+int crypto_virtio_user_start_device(struct virtio_user_dev *dev);
+int crypto_virtio_user_stop_device(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
+ int queue_size, int server);
+void crypto_virtio_user_dev_uninit(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status);
+int crypto_virtio_user_dev_update_status(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_update_link_state(struct virtio_user_dev *dev);
+extern const char * const crypto_virtio_user_backend_strings[];
+#endif
diff --git a/drivers/crypto/virtio/virtio_user_cryptodev.c b/drivers/crypto/virtio/virtio_user_cryptodev.c
new file mode 100644
index 0000000000..f5725f0a59
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user_cryptodev.c
@@ -0,0 +1,586 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Marvell
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <unistd.h>
+#include <fcntl.h>
+
+#include <rte_malloc.h>
+#include <rte_kvargs.h>
+#include <bus_vdev_driver.h>
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include <rte_alarm.h>
+#include <rte_cycles.h>
+#include <rte_io.h>
+
+#include "virtio_user/virtio_user_dev.h"
+#include "virtio_user/vhost.h"
+#include "virtio_cryptodev.h"
+#include "virtio_logs.h"
+#include "virtio_pci.h"
+#include "virtqueue.h"
+
+#define virtio_user_get_dev(hwp) container_of(hwp, struct virtio_user_dev, hw)
+
+static void
+virtio_user_read_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+ void *dst, int length __rte_unused)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (offset == offsetof(struct virtio_crypto_config, status)) {
+ crypto_virtio_user_dev_update_link_state(dev);
+ *(uint32_t *)dst = dev->crypto_status;
+ } else if (offset == offsetof(struct virtio_crypto_config, max_dataqueues))
+ *(uint16_t *)dst = dev->max_queue_pairs;
+ else if (offset == offsetof(struct virtio_crypto_config, crypto_services))
+ *(uint32_t *)dst = dev->crypto_services;
+ else if (offset == offsetof(struct virtio_crypto_config, cipher_algo_l))
+ *(uint32_t *)dst = dev->cipher_algo & 0xFFFFFFFF;
+ else if (offset == offsetof(struct virtio_crypto_config, cipher_algo_h))
+ *(uint32_t *)dst = dev->cipher_algo >> 32;
+ else if (offset == offsetof(struct virtio_crypto_config, hash_algo))
+ *(uint32_t *)dst = dev->hash_algo;
+ else if (offset == offsetof(struct virtio_crypto_config, mac_algo_l))
+ *(uint32_t *)dst = dev->auth_algo & 0xFFFFFFFF;
+ else if (offset == offsetof(struct virtio_crypto_config, mac_algo_h))
+ *(uint32_t *)dst = dev->auth_algo >> 32;
+ else if (offset == offsetof(struct virtio_crypto_config, aead_algo))
+ *(uint32_t *)dst = dev->aead_algo;
+ else if (offset == offsetof(struct virtio_crypto_config, akcipher_algo))
+ *(uint32_t *)dst = dev->akcipher_algo;
+}
+
+static void
+virtio_user_write_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+ const void *src, int length)
+{
+ RTE_SET_USED(hw);
+ RTE_SET_USED(src);
+
+ PMD_DRV_LOG(ERR, "not supported offset=%zu, len=%d",
+ offset, length);
+}
+
+static void
+virtio_user_reset(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (dev->status & VIRTIO_CONFIG_STATUS_DRIVER_OK)
+ crypto_virtio_user_stop_device(dev);
+}
+
+static void
+virtio_user_set_status(struct virtio_crypto_hw *hw, uint8_t status)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+ uint8_t old_status = dev->status;
+
+ if (status & VIRTIO_CONFIG_STATUS_FEATURES_OK &&
+ ~old_status & VIRTIO_CONFIG_STATUS_FEATURES_OK) {
+ crypto_virtio_user_dev_set_features(dev);
+ /* Feature negotiation should only be done at probe time,
+ * so any further requests are skipped here.
+ */
+ dev->status |= VIRTIO_CONFIG_STATUS_FEATURES_OK;
+ }
+
+ if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK) {
+ if (crypto_virtio_user_start_device(dev)) {
+ crypto_virtio_user_dev_update_status(dev);
+ return;
+ }
+ } else if (status == VIRTIO_CONFIG_STATUS_RESET) {
+ virtio_user_reset(hw);
+ }
+
+ crypto_virtio_user_dev_set_status(dev, status);
+ if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK && dev->scvq) {
+ if (dev->ops->cvq_enable(dev, 1) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to start ctrlq", dev->path);
+ crypto_virtio_user_dev_update_status(dev);
+ return;
+ }
+ }
+}
+
+static uint8_t
+virtio_user_get_status(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ crypto_virtio_user_dev_update_status(dev);
+
+ return dev->status;
+}
+
+#define VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES \
+ (1ULL << VIRTIO_CRYPTO_SERVICE_CIPHER | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_AKCIPHER | \
+ 1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_RING_F_INDIRECT_DESC | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
+
+static uint64_t
+virtio_user_get_features(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ /* unmask feature bits defined in vhost user protocol */
+ return (dev->device_features | dev->frontend_features) &
+ VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES;
+}
+
+static void
+virtio_user_set_features(struct virtio_crypto_hw *hw, uint64_t features)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ dev->features = features & (dev->device_features | dev->frontend_features);
+}
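
A minimal sketch of the negotiation round trip these two callbacks implement
(assuming hw points at an initialized virtio_crypto_hw; dropping packed-ring
support is purely illustrative):

    uint64_t offered = virtio_user_get_features(hw); /* (device | frontend) & PMD guest mask */
    /* the upper layer may clear bits it does not want, e.g. packed rings */
    uint64_t wanted = offered & ~(1ULL << VIRTIO_F_RING_PACKED);
    virtio_user_set_features(hw, wanted); /* stores wanted & (device | frontend) */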
+
+static uint8_t
+virtio_user_get_isr(struct virtio_crypto_hw *hw __rte_unused)
+{
+ /* Queue interrupts and the config interrupt are separate in virtio-user;
+ * only config changes are reported here.
+ */
+ return VIRTIO_PCI_CAP_ISR_CFG;
+}
+
+static uint16_t
+virtio_user_set_config_irq(struct virtio_crypto_hw *hw __rte_unused,
+ uint16_t vec __rte_unused)
+{
+ return 0;
+}
+
+static uint16_t
+virtio_user_set_queue_irq(struct virtio_crypto_hw *hw __rte_unused,
+ struct virtqueue *vq __rte_unused,
+ uint16_t vec)
+{
+ /* pretend we have done that */
+ return vec;
+}
+
+/* Return the queue size, i.e. the number of descriptors, of a specified queue.
+ * This differs from VHOST_USER_GET_QUEUE_NUM, which returns the maximum
+ * number of supported queues.
+ */
+static uint16_t
+virtio_user_get_queue_num(struct virtio_crypto_hw *hw, uint16_t queue_id __rte_unused)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ /* Currently, all queues have the same size */
+ return dev->queue_size;
+}
+
+static void
+virtio_user_setup_queue_packed(struct virtqueue *vq,
+ struct virtio_user_dev *dev)
+{
+ uint16_t queue_idx = vq->vq_queue_index;
+ struct vring_packed *vring;
+ uint64_t desc_addr;
+ uint64_t avail_addr;
+ uint64_t used_addr;
+ uint16_t i;
+
+ vring = &dev->vrings.packed[queue_idx];
+ desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
+ avail_addr = desc_addr + vq->vq_nentries *
+ sizeof(struct vring_packed_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr +
+ sizeof(struct vring_packed_desc_event),
+ VIRTIO_VRING_ALIGN);
+ vring->num = vq->vq_nentries;
+ vring->desc_iova = vq->vq_ring_mem;
+ vring->desc = (void *)(uintptr_t)desc_addr;
+ vring->driver = (void *)(uintptr_t)avail_addr;
+ vring->device = (void *)(uintptr_t)used_addr;
+ dev->packed_queues[queue_idx].avail_wrap_counter = true;
+ dev->packed_queues[queue_idx].used_wrap_counter = true;
+ dev->packed_queues[queue_idx].used_idx = 0;
+
+ for (i = 0; i < vring->num; i++)
+ vring->desc[i].flags = 0;
+}
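
As a worked example of the layout computed above, assuming vq_nentries = 256 and
VIRTIO_VRING_ALIGN = 4096 (each vring_packed_desc is 16 bytes, each
vring_packed_desc_event is 4 bytes):

    /* offsets relative to vq_ring_virt_mem for a 256-entry packed ring */
    desc_addr  = 0x0000; /* 256 * 16 = 4096 bytes of descriptors         */
    avail_addr = 0x1000; /* driver event area, 4 bytes                   */
    used_addr  = 0x2000; /* device event area, next 4096-aligned address */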
+
+static void
+virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
+{
+ uint16_t queue_idx = vq->vq_queue_index;
+ uint64_t desc_addr, avail_addr, used_addr;
+
+ desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+ ring[vq->vq_nentries]),
+ VIRTIO_VRING_ALIGN);
+
+ dev->vrings.split[queue_idx].num = vq->vq_nentries;
+ dev->vrings.split[queue_idx].desc_iova = vq->vq_ring_mem;
+ dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
+ dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
+ dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
+}
+
+static int
+virtio_user_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (vtpci_with_packed_queue(hw))
+ virtio_user_setup_queue_packed(vq, dev);
+ else
+ virtio_user_setup_queue_split(vq, dev);
+
+ if (dev->notify_area)
+ vq->notify_addr = dev->notify_area[vq->vq_queue_index];
+
+ if (virtcrypto_cq_to_vq(hw->cvq) == vq)
+ dev->scvq = vq;
+
+ return 0;
+}
+
+static void
+virtio_user_del_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ /* For legacy devices, writing 0 to the VIRTIO_PCI_QUEUE_PFN port makes
+ * QEMU stop the corresponding ioeventfds and reset the device status.
+ * For modern devices, the queue desc/avail/used addresses in the PCI bar
+ * are set to 0, with no further behavior observed in QEMU.
+ *
+ * Here we only care about what information to deliver to vhost-user
+ * or vhost-kernel, so we just close the ioeventfd for now.
+ */
+
+ RTE_SET_USED(hw);
+ RTE_SET_USED(vq);
+}
+
+static void
+virtio_user_notify_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+ uint64_t notify_data = 1;
+
+ if (!dev->notify_area) {
+ if (write(dev->kickfds[vq->vq_queue_index], &notify_data,
+ sizeof(notify_data)) < 0)
+ PMD_DRV_LOG(ERR, "failed to kick backend: %s",
+ strerror(errno));
+ return;
+ } else if (!vtpci_with_feature(hw, VIRTIO_F_NOTIFICATION_DATA)) {
+ rte_write16(vq->vq_queue_index, vq->notify_addr);
+ return;
+ }
+
+ if (vtpci_with_packed_queue(hw)) {
+ /* Bit[0:15]: vq queue index
+ * Bit[16:30]: avail index
+ * Bit[31]: avail wrap counter
+ */
+ notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags &
+ VRING_PACKED_DESC_F_AVAIL)) << 31) |
+ ((uint32_t)vq->vq_avail_idx << 16) |
+ vq->vq_queue_index;
+ } else {
+ /* Bit[0:15]: vq queue index
+ * Bit[16:31]: avail index
+ */
+ notify_data = ((uint32_t)vq->vq_avail_idx << 16) |
+ vq->vq_queue_index;
+ }
+ rte_write32(notify_data, vq->notify_addr);
+}
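
A sketch of the inverse operation, i.e. how a backend would decode the packed-ring
notification written above (field widths follow the comments in the code; given
the 32-bit value written via rte_write32()):

    uint16_t queue_idx  = notify_data & 0xFFFF;         /* Bit[0:15]  */
    uint16_t avail_idx  = (notify_data >> 16) & 0x7FFF; /* Bit[16:30] */
    bool     avail_wrap = !!(notify_data >> 31);        /* Bit[31]    */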
+
+const struct virtio_pci_ops crypto_virtio_user_ops = {
+ .read_dev_cfg = virtio_user_read_dev_config,
+ .write_dev_cfg = virtio_user_write_dev_config,
+ .reset = virtio_user_reset,
+ .get_status = virtio_user_get_status,
+ .set_status = virtio_user_set_status,
+ .get_features = virtio_user_get_features,
+ .set_features = virtio_user_set_features,
+ .get_isr = virtio_user_get_isr,
+ .set_config_irq = virtio_user_set_config_irq,
+ .set_queue_irq = virtio_user_set_queue_irq,
+ .get_queue_num = virtio_user_get_queue_num,
+ .setup_queue = virtio_user_setup_queue,
+ .del_queue = virtio_user_del_queue,
+ .notify_queue = virtio_user_notify_queue,
+};
+
+static const char * const valid_args[] = {
+#define VIRTIO_USER_ARG_QUEUES_NUM "queues"
+ VIRTIO_USER_ARG_QUEUES_NUM,
+#define VIRTIO_USER_ARG_QUEUE_SIZE "queue_size"
+ VIRTIO_USER_ARG_QUEUE_SIZE,
+#define VIRTIO_USER_ARG_PATH "path"
+ VIRTIO_USER_ARG_PATH,
+#define VIRTIO_USER_ARG_SERVER_MODE "server"
+ VIRTIO_USER_ARG_SERVER_MODE,
+ NULL
+};
+
+#define VIRTIO_USER_DEF_Q_NUM 1
+#define VIRTIO_USER_DEF_Q_SZ 256
+#define VIRTIO_USER_DEF_SERVER_MODE 0
+
+static int
+get_string_arg(const char *key __rte_unused,
+ const char *value, void *extra_args)
+{
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ *(char **)extra_args = strdup(value);
+
+ if (!*(char **)extra_args)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int
+get_integer_arg(const char *key __rte_unused,
+ const char *value, void *extra_args)
+{
+ uint64_t integer = 0;
+ if (!value || !extra_args)
+ return -EINVAL;
+ errno = 0;
+ integer = strtoull(value, NULL, 0);
+ /* extra_args keeps its default value; replace it only
+ * if the 'value' arg was parsed successfully
+ */
+ if (errno == 0)
+ *(uint64_t *)extra_args = integer;
+ return -errno;
+}
+
+static struct rte_cryptodev *
+virtio_user_cryptodev_alloc(struct rte_vdev_device *vdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .private_data_size = sizeof(struct virtio_user_dev),
+ };
+ struct rte_cryptodev_data *data;
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ struct virtio_crypto_hw *hw;
+
+ init_params.socket_id = vdev->device.numa_node;
+ cryptodev = rte_cryptodev_pmd_create(vdev->device.name, &vdev->device, &init_params);
+ if (cryptodev == NULL) {
+ PMD_INIT_LOG(ERR, "failed to create cryptodev vdev");
+ return NULL;
+ }
+
+ data = cryptodev->data;
+ dev = data->dev_private;
+ hw = &dev->hw;
+
+ hw->dev_id = data->dev_id;
+ VTPCI_OPS(hw) = &crypto_virtio_user_ops;
+
+ return cryptodev;
+}
+
+static void
+virtio_user_cryptodev_free(struct rte_cryptodev *cryptodev)
+{
+ rte_cryptodev_pmd_destroy(cryptodev);
+}
+
+static int
+virtio_user_pmd_probe(struct rte_vdev_device *vdev)
+{
+ uint64_t server_mode = VIRTIO_USER_DEF_SERVER_MODE;
+ uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
+ uint64_t queues = VIRTIO_USER_DEF_Q_NUM;
+ struct rte_cryptodev *cryptodev = NULL;
+ struct rte_kvargs *kvlist = NULL;
+ struct virtio_user_dev *dev;
+ char *path = NULL;
+ int ret = -1; /* error paths jump to "end" before ret is set */
+
+ kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "error when parsing param");
+ goto end;
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_PATH) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_PATH,
+ &get_string_arg, &path) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_PATH);
+ goto end;
+ }
+ } else {
+ PMD_INIT_LOG(ERR, "arg %s is mandatory for virtio_user",
+ VIRTIO_USER_ARG_PATH);
+ goto end;
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUES_NUM) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUES_NUM,
+ &get_integer_arg, &queues) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_QUEUES_NUM);
+ goto end;
+ }
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE,
+ &get_integer_arg, &queue_size) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_QUEUE_SIZE);
+ goto end;
+ }
+ }
+
+ cryptodev = virtio_user_cryptodev_alloc(vdev);
+ if (!cryptodev) {
+ PMD_INIT_LOG(ERR, "virtio_user fails to alloc device");
+ goto end;
+ }
+
+ dev = cryptodev->data->dev_private;
+ if (crypto_virtio_user_dev_init(dev, path, queues, queue_size,
+ server_mode) < 0) {
+ PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
+ virtio_user_cryptodev_free(cryptodev);
+ goto end;
+ }
+
+ if (crypto_virtio_dev_init(cryptodev, VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES,
+ NULL) < 0) {
+ PMD_INIT_LOG(ERR, "crypto_virtio_dev_init fails");
+ crypto_virtio_user_dev_uninit(dev);
+ virtio_user_cryptodev_free(cryptodev);
+ goto end;
+ }
+
+ rte_cryptodev_pmd_probing_finish(cryptodev);
+
+ ret = 0;
+end:
+ rte_kvargs_free(kvlist);
+ free(path);
+ return ret;
+}
+
+static int
+virtio_user_pmd_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_cryptodev *cryptodev;
+ const char *name;
+ int devid;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ PMD_DRV_LOG(INFO, "Removing %s", name);
+
+ devid = rte_cryptodev_get_dev_id(name);
+ if (devid < 0)
+ return -EINVAL;
+
+ rte_cryptodev_stop(devid);
+
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ if (rte_cryptodev_pmd_destroy(cryptodev) < 0) {
+ PMD_DRV_LOG(ERR, "Failed to remove %s", name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int virtio_user_pmd_dma_map(struct rte_vdev_device *vdev, void *addr,
+ uint64_t iova, size_t len)
+{
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ const char *name;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -EINVAL;
+
+ dev = cryptodev->data->dev_private;
+
+ if (dev->ops->dma_map)
+ return dev->ops->dma_map(dev, addr, iova, len);
+
+ return 0;
+}
+
+static int virtio_user_pmd_dma_unmap(struct rte_vdev_device *vdev, void *addr,
+ uint64_t iova, size_t len)
+{
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ const char *name;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -EINVAL;
+
+ dev = cryptodev->data->dev_private;
+
+ if (dev->ops->dma_unmap)
+ return dev->ops->dma_unmap(dev, addr, iova, len);
+
+ return 0;
+}
+
+static struct rte_vdev_driver virtio_user_driver = {
+ .probe = virtio_user_pmd_probe,
+ .remove = virtio_user_pmd_remove,
+ .dma_map = virtio_user_pmd_dma_map,
+ .dma_unmap = virtio_user_pmd_dma_unmap,
+};
+
+static struct cryptodev_driver virtio_crypto_drv;
+
+RTE_PMD_REGISTER_VDEV(crypto_virtio_user, virtio_user_driver);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv,
+ virtio_user_driver.driver,
+ cryptodev_virtio_driver_id);
+RTE_PMD_REGISTER_ALIAS(crypto_virtio_user, crypto_virtio);
+RTE_PMD_REGISTER_PARAM_STRING(crypto_virtio_user,
+ "path=<path> "
+ "queues=<int> "
+ "queue_size=<int>");
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v1 16/16] test/crypto: test virtio_crypto_user PMD
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (14 preceding siblings ...)
2024-12-24 7:37 ` [v1 15/16] crypto/virtio: add vhost backend to virtio_user Gowrishankar Muthukrishnan
@ 2024-12-24 7:37 ` Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 0/2] crypto/virtio: add RSA support Gowrishankar Muthukrishnan
` (3 subsequent siblings)
19 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2024-12-24 7:37 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Rajesh Mudimadugula, Gowrishankar Muthukrishnan
Reuse virtio_crypto tests for testing virtio_crypto_user PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
app/test/test_cryptodev.c | 7 +++++++
app/test/test_cryptodev.h | 1 +
app/test/test_cryptodev_asym.c | 15 +++++++++++++++
3 files changed, 23 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 7cddb1517c..0ba2281b87 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -19737,6 +19737,12 @@ test_cryptodev_virtio(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_VIRTIO_PMD));
}
+static int
+test_cryptodev_virtio_user(void)
+{
+ return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_VIRTIO_USER_PMD));
+}
+
static int
test_cryptodev_aesni_mb(void)
{
@@ -20074,6 +20080,7 @@ REGISTER_DRIVER_TEST(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
REGISTER_DRIVER_TEST(cryptodev_ccp_autotest, test_cryptodev_ccp);
REGISTER_DRIVER_TEST(cryptodev_uadk_autotest, test_cryptodev_uadk);
REGISTER_DRIVER_TEST(cryptodev_virtio_autotest, test_cryptodev_virtio);
+REGISTER_DRIVER_TEST(cryptodev_virtio_user_autotest, test_cryptodev_virtio_user);
REGISTER_DRIVER_TEST(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_DRIVER_TEST(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_DRIVER_TEST(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index bb54a33d62..f6c7478f19 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -64,6 +64,7 @@
#define CRYPTODEV_NAME_MVSAM_PMD crypto_mvsam
#define CRYPTODEV_NAME_CCP_PMD crypto_ccp
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
+#define CRYPTODEV_NAME_VIRTIO_USER_PMD crypto_virtio_user
#define CRYPTODEV_NAME_OCTEONTX_SYM_PMD crypto_octeontx
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index ec7ab05a2d..e3e202a87c 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -4092,7 +4092,22 @@ test_cryptodev_virtio_asym(void)
return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
}
+static int
+test_cryptodev_virtio_user_asym(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_VIRTIO_USER_PMD));
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "virtio user PMD must be loaded.\n");
+ return TEST_FAILED;
+ }
+
+ /* Reuse the virtio PMD asymmetric test suite for crypto_virtio_user */
+ return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
+}
+
REGISTER_DRIVER_TEST(cryptodev_virtio_asym_autotest, test_cryptodev_virtio_asym);
+REGISTER_DRIVER_TEST(cryptodev_virtio_user_asym_autotest, test_cryptodev_virtio_user_asym);
REGISTER_DRIVER_TEST(cryptodev_openssl_asym_autotest, test_cryptodev_openssl_asym);
REGISTER_DRIVER_TEST(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
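
Once a crypto_virtio_user vdev is present, the new suites can be exercised from
the test binary; a sketch (device path illustrative):

    ./dpdk-test --vdev="crypto_virtio_user,path=/dev/vhost-vdpa-0"
    RTE>> cryptodev_virtio_user_autotest
    RTE>> cryptodev_virtio_user_asym_autotest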
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [v1 12/16] common/virtio: common virtio log
2024-12-24 7:37 ` [v1 12/16] common/virtio: common virtio log Gowrishankar Muthukrishnan
@ 2024-12-24 8:14 ` David Marchand
2025-01-07 10:57 ` [EXTERNAL] " Gowrishankar Muthukrishnan
0 siblings, 1 reply; 58+ messages in thread
From: David Marchand @ 2024-12-24 8:14 UTC (permalink / raw)
To: Gowrishankar Muthukrishnan
Cc: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang,
Jay Zhou, Bruce Richardson, Konstantin Ananyev, jerinj, anoobj,
Rajesh Mudimadugula
Hello Gowri,
On Tue, Dec 24, 2024 at 8:39 AM Gowrishankar Muthukrishnan
<gmuthukrishn@marvell.com> wrote:
>
> Common virtio log include file.
That's really a short commitlog..
What are you trying to achieve?
The net/virtio and crypto/virtio drivers had dedicated logtypes so
far, which seems preferable.
I don't see a case when using a single logtype for both net and crypto
would help.
Some comments below.
And please run checkpatch.
>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> ---
> drivers/{net => common}/virtio/virtio_logs.h | 16 ++--------
> drivers/crypto/virtio/meson.build | 1 +
> .../{virtio_logs.h => virtio_crypto_logs.h} | 30 ++++++++-----------
> drivers/crypto/virtio/virtio_cryptodev.c | 4 +--
> drivers/crypto/virtio/virtqueue.h | 2 +-
> drivers/net/virtio/meson.build | 3 +-
> drivers/net/virtio/virtio.c | 3 +-
> drivers/net/virtio/virtio_ethdev.c | 3 +-
> drivers/net/virtio/virtio_net_logs.h | 30 +++++++++++++++++++
> drivers/net/virtio/virtio_pci.c | 3 +-
> drivers/net/virtio/virtio_pci_ethdev.c | 3 +-
> drivers/net/virtio/virtio_rxtx.c | 3 +-
> drivers/net/virtio/virtio_rxtx_packed.c | 3 +-
> drivers/net/virtio/virtio_rxtx_packed.h | 3 +-
> drivers/net/virtio/virtio_rxtx_packed_avx.h | 3 +-
> drivers/net/virtio/virtio_rxtx_simple.h | 3 +-
> .../net/virtio/virtio_user/vhost_kernel_tap.c | 3 +-
> drivers/net/virtio/virtio_user/vhost_vdpa.c | 3 +-
> drivers/net/virtio/virtio_user_ethdev.c | 3 +-
> drivers/net/virtio/virtqueue.c | 3 +-
> drivers/net/virtio/virtqueue.h | 3 +-
> 21 files changed, 77 insertions(+), 51 deletions(-)
> rename drivers/{net => common}/virtio/virtio_logs.h (61%)
> rename drivers/crypto/virtio/{virtio_logs.h => virtio_crypto_logs.h} (74%)
> create mode 100644 drivers/net/virtio/virtio_net_logs.h
>
> diff --git a/drivers/net/virtio/virtio_logs.h b/drivers/common/virtio/virtio_logs.h
> similarity index 61%
> rename from drivers/net/virtio/virtio_logs.h
> rename to drivers/common/virtio/virtio_logs.h
> index dea1a7ac11..bc115e7a36 100644
> --- a/drivers/net/virtio/virtio_logs.h
> +++ b/drivers/common/virtio/virtio_logs.h
> @@ -5,6 +5,8 @@
> #ifndef _VIRTIO_LOGS_H_
> #define _VIRTIO_LOGS_H_
>
> +#include <inttypes.h>
> +
?
Seems unrelated.
> #include <rte_log.h>
>
> extern int virtio_logtype_init;
> @@ -14,20 +16,6 @@ extern int virtio_logtype_init;
>
> #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
>
> -#ifdef RTE_LIBRTE_VIRTIO_DEBUG_RX
> -#define PMD_RX_LOG(level, ...) \
> - RTE_LOG_LINE_PREFIX(level, VIRTIO_DRIVER, "%s() rx: ", __func__, __VA_ARGS__)
> -#else
> -#define PMD_RX_LOG(...) do { } while(0)
> -#endif
> -
> -#ifdef RTE_LIBRTE_VIRTIO_DEBUG_TX
> -#define PMD_TX_LOG(level, ...) \
> - RTE_LOG_LINE_PREFIX(level, VIRTIO_DRIVER, "%s() tx: ", __func__, __VA_ARGS__)
> -#else
> -#define PMD_TX_LOG(...) do { } while(0)
> -#endif
> -
> extern int virtio_logtype_driver;
> #define RTE_LOGTYPE_VIRTIO_DRIVER virtio_logtype_driver
> #define PMD_DRV_LOG(level, ...) \
> diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
> index d2c3b3ad07..6c082a3112 100644
> --- a/drivers/crypto/virtio/meson.build
> +++ b/drivers/crypto/virtio/meson.build
> @@ -8,6 +8,7 @@ if is_windows
> endif
>
> includes += include_directories('../../../lib/vhost')
> +includes += include_directories('../../common/virtio')
There are some special cases when a driver can't rely on meson
dependencies (like order of subdirs evaluation in
drivers/meson.build).
For those special cases, include_directories are used.
But this driver does not seem concerned.
There should be dependencies on vhost and common_virtio instead.
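A sketch of the suggested alternative, assuming a common_virtio driver class is
registered in drivers/meson.build:

    # drivers/crypto/virtio/meson.build
    deps += ['bus_pci', 'common_virtio', 'vhost']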
> deps += 'bus_pci'
> sources = files(
> 'virtio_cryptodev.c',
> diff --git a/drivers/crypto/virtio/virtio_logs.h b/drivers/crypto/virtio/virtio_crypto_logs.h
> similarity index 74%
> rename from drivers/crypto/virtio/virtio_logs.h
> rename to drivers/crypto/virtio/virtio_crypto_logs.h
> index 988514919f..56caa162d4 100644
> --- a/drivers/crypto/virtio/virtio_logs.h
> +++ b/drivers/crypto/virtio/virtio_crypto_logs.h
> @@ -2,24 +2,18 @@
> * Copyright(c) 2018 HUAWEI TECHNOLOGIES CO., LTD.
> */
>
> -#ifndef _VIRTIO_LOGS_H_
> -#define _VIRTIO_LOGS_H_
> +#ifndef _VIRTIO_CRYPTO_LOGS_H_
> +#define _VIRTIO_CRYPTO_LOGS_H_
>
> #include <rte_log.h>
>
> -extern int virtio_crypto_logtype_init;
> -#define RTE_LOGTYPE_VIRTIO_CRYPTO_INIT virtio_crypto_logtype_init
> +#include "virtio_logs.h"
>
> -#define PMD_INIT_LOG(level, ...) \
> - RTE_LOG_LINE_PREFIX(level, VIRTIO_CRYPTO_INIT, "%s(): ", __func__, __VA_ARGS__)
> +extern int virtio_logtype_init;
>
> -#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
> -
> -extern int virtio_crypto_logtype_init;
> -#define RTE_LOGTYPE_VIRTIO_CRYPTO_INIT virtio_crypto_logtype_init
> -
> -#define VIRTIO_CRYPTO_INIT_LOG_IMPL(level, ...) \
> - RTE_LOG_LINE_PREFIX(level, VIRTIO_CRYPTO_INIT, "%s(): ", __func__, __VA_ARGS__)
> +#define VIRTIO_CRYPTO_INIT_LOG_IMPL(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, virtio_logtype_init, \
> + "INIT: %s(): " fmt "\n", __func__, ##args)
Don't add back macros directly calling rte_log().
Afaiu, this hunk should be only redirecting
RTE_LOGTYPE_VIRTIO_CRYPTO_INIT to virtio_logtype_init.
>
> #define VIRTIO_CRYPTO_INIT_LOG_INFO(fmt, ...) \
> VIRTIO_CRYPTO_INIT_LOG_IMPL(INFO, fmt, ## __VA_ARGS__)
> @@ -75,11 +69,11 @@ extern int virtio_crypto_logtype_tx;
> #define VIRTIO_CRYPTO_TX_LOG_ERR(fmt, ...) \
> VIRTIO_CRYPTO_TX_LOG_IMPL(ERR, fmt, ## __VA_ARGS__)
>
> -extern int virtio_crypto_logtype_driver;
> -#define RTE_LOGTYPE_VIRTIO_CRYPTO_DRIVER virtio_crypto_logtype_driver
> +extern int virtio_logtype_driver;
>
> -#define VIRTIO_CRYPTO_DRV_LOG_IMPL(level, ...) \
> - RTE_LOG_LINE_PREFIX(level, VIRTIO_CRYPTO_DRIVER, "%s(): ", __func__, __VA_ARGS__)
> +#define VIRTIO_CRYPTO_DRV_LOG_IMPL(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, virtio_logtype_driver, \
> + "DRIVER: %s(): " fmt "\n", __func__, ##args)
>
> #define VIRTIO_CRYPTO_DRV_LOG_INFO(fmt, ...) \
> VIRTIO_CRYPTO_DRV_LOG_IMPL(INFO, fmt, ## __VA_ARGS__)
> @@ -90,4 +84,4 @@ extern int virtio_crypto_logtype_driver;
> #define VIRTIO_CRYPTO_DRV_LOG_ERR(fmt, ...) \
> VIRTIO_CRYPTO_DRV_LOG_IMPL(ERR, fmt, ## __VA_ARGS__)
>
> -#endif /* _VIRTIO_LOGS_H_ */
> +#endif /* _VIRTIO_CRYPTO_LOGS_H_ */
> diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
> index d3db4f898e..b31e7ea0cf 100644
> --- a/drivers/crypto/virtio/virtio_cryptodev.c
> +++ b/drivers/crypto/virtio/virtio_cryptodev.c
> @@ -1749,8 +1749,8 @@ RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_VIRTIO_PMD, rte_virtio_crypto_driver);
> RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv,
> rte_virtio_crypto_driver.driver,
> cryptodev_virtio_driver_id);
> -RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_init, init, NOTICE);
> +RTE_LOG_REGISTER_SUFFIX(virtio_logtype_init, init, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_session, session, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_rx, rx, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_tx, tx, NOTICE);
> -RTE_LOG_REGISTER_SUFFIX(virtio_crypto_logtype_driver, driver, NOTICE);
> +RTE_LOG_REGISTER_SUFFIX(virtio_logtype_driver, driver, NOTICE);
> diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
> index b31342940e..ccf45800c0 100644
> --- a/drivers/crypto/virtio/virtqueue.h
> +++ b/drivers/crypto/virtio/virtqueue.h
> @@ -15,7 +15,7 @@
> #include "virtio_cvq.h"
> #include "virtio_pci.h"
> #include "virtio_ring.h"
> -#include "virtio_logs.h"
> +#include "virtio_crypto_logs.h"
> #include "virtio_crypto.h"
> #include "virtio_rxtx.h"
>
> diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
> index 02742da5c2..6331366712 100644
> --- a/drivers/net/virtio/meson.build
> +++ b/drivers/net/virtio/meson.build
> @@ -22,6 +22,7 @@ sources += files(
> 'virtqueue.c',
> )
> deps += ['kvargs', 'bus_pci']
> +includes += include_directories('../../common/virtio')
Idem, this is unneeded, as long as there is a meson dependency (like below).
>
> if arch_subdir == 'x86'
> if cc_has_avx512
> @@ -56,5 +57,5 @@ if is_linux
> 'virtio_user/vhost_user.c',
> 'virtio_user/vhost_vdpa.c',
> 'virtio_user/virtio_user_dev.c')
> - deps += ['bus_vdev']
> + deps += ['bus_vdev', 'common_virtio']
> endif
> diff --git a/drivers/net/virtio/virtio.c b/drivers/net/virtio/virtio.c
> index d9e642f412..21b0490fe7 100644
> --- a/drivers/net/virtio/virtio.c
> +++ b/drivers/net/virtio/virtio.c
> @@ -5,8 +5,9 @@
>
> #include <unistd.h>
>
> +#include "virtio_net_logs.h"
> +
> #include "virtio.h"
> -#include "virtio_logs.h"
>
> uint64_t
> virtio_negotiate_features(struct virtio_hw *hw, uint64_t host_features)
> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> index 70d4839def..491b75ec19 100644
> --- a/drivers/net/virtio/virtio_ethdev.c
> +++ b/drivers/net/virtio/virtio_ethdev.c
> @@ -29,9 +29,10 @@
> #include <rte_cycles.h>
> #include <rte_kvargs.h>
>
> +#include "virtio_net_logs.h"
> +
> #include "virtio_ethdev.h"
> #include "virtio.h"
> -#include "virtio_logs.h"
> #include "virtqueue.h"
> #include "virtio_cvq.h"
> #include "virtio_rxtx.h"
> diff --git a/drivers/net/virtio/virtio_net_logs.h b/drivers/net/virtio/virtio_net_logs.h
> new file mode 100644
> index 0000000000..bd5867b1fe
> --- /dev/null
> +++ b/drivers/net/virtio/virtio_net_logs.h
> @@ -0,0 +1,30 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2010-2014 Intel Corporation
> + */
> +
> +#ifndef _VIRTIO_NET_LOGS_H_
> +#define _VIRTIO_NET_LOGS_H_
> +
> +#include <inttypes.h>
> +
> +#include <rte_log.h>
> +
> +#include "virtio_logs.h"
> +
> +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
> +
> +#ifdef RTE_LIBRTE_VIRTIO_DEBUG_RX
> +#define PMD_RX_LOG(level, fmt, args...) \
> + RTE_LOG(level, VIRTIO_DRIVER, "%s() rx: " fmt "\n", __func__, ## args)
Rebase damage.
> +#else
> +#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_VIRTIO_DEBUG_TX
> +#define PMD_TX_LOG(level, fmt, args...) \
> + RTE_LOG(level, VIRTIO_DRIVER, "%s() tx: " fmt "\n", __func__, ## args)
> +#else
> +#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#endif /* _VIRTIO_NET_LOGS_H_ */
--
David Marchand
^ permalink raw reply [flat|nested] 58+ messages in thread
* RE: [EXTERNAL] Re: [v1 12/16] common/virtio: common virtio log
2024-12-24 8:14 ` David Marchand
@ 2025-01-07 10:57 ` Gowrishankar Muthukrishnan
0 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 10:57 UTC (permalink / raw)
To: David Marchand
Cc: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang,
Jay Zhou, Bruce Richardson, Konstantin Ananyev, Jerin Jacob,
Anoob Joseph, Rajesh Mudimadugula [C]
Hi David,
> Hello Gowri,
>
> On Tue, Dec 24, 2024 at 8:39 AM Gowrishankar Muthukrishnan
> <gmuthukrishn@marvell.com> wrote:
> >
> > Common virtio log include file.
>
> That's really a short commitlog..
> What are you trying to achieve?
As part of sharing the vDPA backend ops implementation between net and crypto (in patch 13/16),
I added this patch, but I take your point. I have removed it in the v2 patch series (which I am sending now).
Thanks for the time reviewing it.
Regards,
Gowrishankar
>
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v2 0/2] crypto/virtio: add RSA support
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (15 preceding siblings ...)
2024-12-24 7:37 ` [v1 16/16] test/crypto: test virtio_crypto_user PMD Gowrishankar Muthukrishnan
@ 2025-01-07 17:52 ` Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 1/2] crypto/virtio: add asymmetric " Gowrishankar Muthukrishnan
` (2 more replies)
2025-01-07 18:02 ` [v2 0/2] vhost: add RSA support Gowrishankar Muthukrishnan
` (2 subsequent siblings)
19 siblings, 3 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 17:52 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
This series adds RSA support in virtio crypto PMD.
v2:
- split from v1 series.
Gowrishankar Muthukrishnan (2):
crypto/virtio: add asymmetric RSA support
test/crypto: add asymmetric tests for virtio PMD
app/test/test_cryptodev_asym.c | 29 ++
app/test/test_cryptodev_rsa_test_vectors.h | 4 +
.../virtio/virtio_crypto_capabilities.h | 19 +
drivers/crypto/virtio/virtio_cryptodev.c | 384 +++++++++++++++---
drivers/crypto/virtio/virtio_rxtx.c | 226 ++++++++++-
lib/cryptodev/cryptodev_pmd.h | 6 +
lib/vhost/virtio_crypto.h | 80 ++++
7 files changed, 680 insertions(+), 68 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v2 1/2] crypto/virtio: add asymmetric RSA support
2025-01-07 17:52 ` [v2 0/2] crypto/virtio: add RSA support Gowrishankar Muthukrishnan
@ 2025-01-07 17:52 ` Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 2/2] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
2 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 17:52 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
Asymmetric RSA operations (SIGN, VERIFY, ENCRYPT and DECRYPT) are
supported in virtio PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
.../virtio/virtio_crypto_capabilities.h | 19 +
drivers/crypto/virtio/virtio_cryptodev.c | 384 +++++++++++++++---
drivers/crypto/virtio/virtio_rxtx.c | 226 ++++++++++-
lib/cryptodev/cryptodev_pmd.h | 6 +
lib/vhost/virtio_crypto.h | 80 ++++
5 files changed, 647 insertions(+), 68 deletions(-)
diff --git a/drivers/crypto/virtio/virtio_crypto_capabilities.h b/drivers/crypto/virtio/virtio_crypto_capabilities.h
index 03c30deefd..1b26ff6720 100644
--- a/drivers/crypto/virtio/virtio_crypto_capabilities.h
+++ b/drivers/crypto/virtio/virtio_crypto_capabilities.h
@@ -48,4 +48,23 @@
}, } \
}
+#define VIRTIO_ASYM_CAPABILITIES \
+ { /* RSA */ \
+ .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \
+ {.asym = { \
+ .xform_capa = { \
+ .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, \
+ .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \
+ (1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \
+ (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \
+ (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), \
+ {.modlen = { \
+ .min = 1, \
+ .max = 1024, \
+ .increment = 1 \
+ }, } \
+ } \
+ }, } \
+ }
+
#endif /* _VIRTIO_CRYPTO_CAPABILITIES_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index b4a6fae9e0..afeab5a816 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -41,6 +41,11 @@ static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *session);
+static void virtio_crypto_asym_clear_session(struct rte_cryptodev *dev,
+ struct rte_cryptodev_asym_session *sess);
+static int virtio_crypto_asym_configure_session(struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *session);
/*
* The set of PCI devices this driver supports
@@ -53,6 +58,7 @@ static const struct rte_pci_id pci_id_virtio_crypto_map[] = {
static const struct rte_cryptodev_capabilities virtio_capabilities[] = {
VIRTIO_SYM_CAPABILITIES,
+ VIRTIO_ASYM_CAPABILITIES,
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
@@ -88,7 +94,7 @@ virtio_crypto_send_command(struct virtqueue *vq,
return -EINVAL;
}
/* cipher only is supported, it is available if auth_key is NULL */
- if (!cipher_key) {
+ if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER && !cipher_key) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("cipher key is NULL.");
return -EINVAL;
}
@@ -104,19 +110,23 @@ virtio_crypto_send_command(struct virtqueue *vq,
/* calculate the length of cipher key */
if (cipher_key) {
- switch (ctrl->u.sym_create_session.op_type) {
- case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- len_cipher_key
- = ctrl->u.sym_create_session.u.cipher
- .para.keylen;
- break;
- case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- len_cipher_key
- = ctrl->u.sym_create_session.u.chain
- .para.cipher_param.keylen;
- break;
- default:
- VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
+ if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER) {
+ switch (ctrl->u.sym_create_session.op_type) {
+ case VIRTIO_CRYPTO_SYM_OP_CIPHER:
+ len_cipher_key = ctrl->u.sym_create_session.u.cipher.para.keylen;
+ break;
+ case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
+ len_cipher_key =
+ ctrl->u.sym_create_session.u.chain.para.cipher_param.keylen;
+ break;
+ default:
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
+ return -EINVAL;
+ }
+ } else if (session->ctrl.header.algo == VIRTIO_CRYPTO_AKCIPHER_RSA) {
+ len_cipher_key = ctrl->u.akcipher_create_session.para.keylen;
+ } else {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid crypto service for cipher key");
return -EINVAL;
}
}
@@ -513,7 +523,10 @@ static struct rte_cryptodev_ops virtio_crypto_dev_ops = {
/* Crypto related operations */
.sym_session_get_size = virtio_crypto_sym_get_session_private_size,
.sym_session_configure = virtio_crypto_sym_configure_session,
- .sym_session_clear = virtio_crypto_sym_clear_session
+ .sym_session_clear = virtio_crypto_sym_clear_session,
+ .asym_session_get_size = virtio_crypto_sym_get_session_private_size,
+ .asym_session_configure = virtio_crypto_asym_configure_session,
+ .asym_session_clear = virtio_crypto_asym_clear_session
};
static void
@@ -737,6 +750,8 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
cryptodev->dequeue_burst = virtio_crypto_pkt_rx_burst;
cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
@@ -923,32 +938,24 @@ virtio_crypto_check_sym_clear_session_paras(
#define NUM_ENTRY_SYM_CLEAR_SESSION 2
static void
-virtio_crypto_sym_clear_session(
+virtio_crypto_clear_session(
struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
+ struct virtio_crypto_op_ctrl_req *ctrl)
{
struct virtio_crypto_hw *hw;
struct virtqueue *vq;
- struct virtio_crypto_session *session;
- struct virtio_crypto_op_ctrl_req *ctrl;
struct vring_desc *desc;
uint8_t *status;
uint8_t needed = 1;
uint32_t head;
- uint8_t *malloc_virt_addr;
uint64_t malloc_phys_addr;
uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
uint32_t desc_offset = len_op_ctrl_req + len_inhdr;
-
- PMD_INIT_FUNC_TRACE();
-
- if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
- return;
+ uint64_t session_id = ctrl->u.destroy_session.session_id;
hw = dev->data->dev_private;
vq = hw->cvq;
- session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
"vq = %p", vq->vq_desc_head_idx, vq);
@@ -960,34 +967,15 @@ virtio_crypto_sym_clear_session(
return;
}
- /*
- * malloc memory to store information of ctrl request op,
- * returned status and desc vring
- */
- malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
- + NUM_ENTRY_SYM_CLEAR_SESSION
- * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
- if (malloc_virt_addr == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
- return;
- }
- malloc_phys_addr = rte_malloc_virt2iova(malloc_virt_addr);
-
- /* assign ctrl request op part */
- ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
- ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
- /* default data virtqueue is 0 */
- ctrl->header.queue_id = 0;
- ctrl->u.destroy_session.session_id = session->session_id;
+ malloc_phys_addr = rte_malloc_virt2iova(ctrl);
/* status part */
status = &(((struct virtio_crypto_inhdr *)
- ((uint8_t *)malloc_virt_addr + len_op_ctrl_req))->status);
+ ((uint8_t *)ctrl + len_op_ctrl_req))->status);
*status = VIRTIO_CRYPTO_ERR;
/* indirect desc vring part */
- desc = (struct vring_desc *)((uint8_t *)malloc_virt_addr
- + desc_offset);
+ desc = (struct vring_desc *)((uint8_t *)ctrl + desc_offset);
/* ctrl request part */
desc[0].addr = malloc_phys_addr;
@@ -1049,8 +1037,8 @@ virtio_crypto_sym_clear_session(
if (*status != VIRTIO_CRYPTO_OK) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("Close session failed "
"status=%"PRIu32", session_id=%"PRIu64"",
- *status, session->session_id);
- rte_free(malloc_virt_addr);
+ *status, session_id);
+ rte_free(ctrl);
return;
}
@@ -1058,9 +1046,86 @@ virtio_crypto_sym_clear_session(
VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx=%d", vq->vq_desc_head_idx);
VIRTIO_CRYPTO_SESSION_LOG_INFO("Close session %"PRIu64" successfully ",
- session->session_id);
+ session_id);
+
+ rte_free(ctrl);
+}
+
+static void
+virtio_crypto_sym_clear_session(
+ struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ struct virtio_crypto_op_ctrl_req *ctrl;
+ struct virtio_crypto_session *session;
+ uint8_t *malloc_virt_addr;
- rte_free(malloc_virt_addr);
+ PMD_INIT_FUNC_TRACE();
+
+ if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
+ return;
+
+ session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
+
+ /*
+ * malloc memory to store information of ctrl request op,
+ * returned status and desc vring
+ */
+ malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
+ + NUM_ENTRY_SYM_CLEAR_SESSION
+ * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+ if (malloc_virt_addr == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
+ return;
+ }
+
+ /* assign ctrl request op part */
+ ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
+ ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
+ /* default data virtqueue is 0 */
+ ctrl->header.queue_id = 0;
+ ctrl->u.destroy_session.session_id = session->session_id;
+
+ return virtio_crypto_clear_session(dev, ctrl);
+}
+
+static void
+virtio_crypto_asym_clear_session(
+ struct rte_cryptodev *dev,
+ struct rte_cryptodev_asym_session *sess)
+{
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ struct virtio_crypto_op_ctrl_req *ctrl;
+ struct virtio_crypto_session *session;
+ uint8_t *malloc_virt_addr;
+
+ PMD_INIT_FUNC_TRACE();
+
+ session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
+
+ /*
+ * malloc memory to store information of ctrl request op,
+ * returned status and desc vring
+ */
+ malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
+ + NUM_ENTRY_SYM_CLEAR_SESSION
+ * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+ if (malloc_virt_addr == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
+ return;
+ }
+
+ /* assign ctrl request op part */
+ ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
+ ctrl->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION;
+ /* default data virtqueue is 0 */
+ ctrl->header.queue_id = 0;
+ ctrl->u.destroy_session.session_id = session->session_id;
+
+ return virtio_crypto_clear_session(dev, ctrl);
}
static struct rte_crypto_cipher_xform *
@@ -1291,6 +1356,23 @@ virtio_crypto_check_sym_configure_session_paras(
return 0;
}
+static int
+virtio_crypto_check_asym_configure_session_paras(
+ struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *asym_sess)
+{
+ if (unlikely(xform == NULL) || unlikely(asym_sess == NULL)) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
+ return -1;
+ }
+
+ if (virtio_crypto_check_sym_session_paras(dev) < 0)
+ return -1;
+
+ return 0;
+}
+
static int
virtio_crypto_sym_configure_session(
struct rte_cryptodev *dev,
@@ -1382,6 +1464,204 @@ virtio_crypto_sym_configure_session(
return -1;
}
+static size_t
+tlv_encode(uint8_t **tlv, uint8_t type, uint8_t *data, size_t len)
+{
+ uint8_t *lenval = NULL;
+ size_t lenval_n = 0;
+
+ if (len > 65535) {
+ goto _exit;
+ } else if (len > 255) {
+ lenval_n = 4 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = 0x82;
+ lenval[2] = (len & 0xFF00) >> 8;
+ lenval[3] = (len & 0xFF);
+ rte_memcpy(&lenval[4], data, len);
+ } else if (len > 127) {
+ lenval_n = 3 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = 0x81;
+ lenval[2] = len;
+ rte_memcpy(&lenval[3], data, len);
+ } else {
+ lenval_n = 2 + len;
+ lenval = rte_malloc(NULL, lenval_n, 0);
+
+ lenval[0] = type;
+ lenval[1] = len;
+ rte_memcpy(&lenval[2], data, len);
+ }
+
+_exit:
+ *tlv = lenval;
+ return lenval_n;
+}
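
The three branches above are the DER definite-length forms from ITU-T X.690; a
worked example of the encodings they produce for an INTEGER (type 0x02):

    /*   3-byte value -> short form:             02 03 xx xx xx          */
    /* 200-byte value -> long form, 1 octet:     02 81 C8 <200 bytes>    */
    /* 300-byte value -> long form, 2 octets:    02 82 01 2C <300 bytes> */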
+
+static int
+virtio_crypto_asym_rsa_xform_to_der(
+ struct rte_crypto_asym_xform *xform,
+ unsigned char **der)
+{
+ size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, tlen;
+ uint8_t *n, *e, *d, *p, *q, *dp, *dq, *qinv, *t;
+ uint8_t ver[3] = {0x02, 0x01, 0x00};
+
+ if (xform->xform_type != RTE_CRYPTO_ASYM_XFORM_RSA)
+ return -EINVAL;
+
+ /* Length of sequence in bytes */
+ tlen = RTE_DIM(ver);
+ nlen = tlv_encode(&n, 0x02, xform->rsa.n.data, xform->rsa.n.length);
+ elen = tlv_encode(&e, 0x02, xform->rsa.e.data, xform->rsa.e.length);
+ tlen += (nlen + elen);
+
+ dlen = tlv_encode(&d, 0x02, xform->rsa.d.data, xform->rsa.d.length);
+ tlen += dlen;
+
+ plen = tlv_encode(&p, 0x02, xform->rsa.qt.p.data, xform->rsa.qt.p.length);
+ qlen = tlv_encode(&q, 0x02, xform->rsa.qt.q.data, xform->rsa.qt.q.length);
+ dplen = tlv_encode(&dp, 0x02, xform->rsa.qt.dP.data, xform->rsa.qt.dP.length);
+ dqlen = tlv_encode(&dq, 0x02, xform->rsa.qt.dQ.data, xform->rsa.qt.dQ.length);
+ qinvlen = tlv_encode(&qinv, 0x02, xform->rsa.qt.qInv.data, xform->rsa.qt.qInv.length);
+ tlen += (plen + qlen + dplen + dqlen + qinvlen);
+
+ t = rte_malloc(NULL, tlen, 0);
+ *der = t;
+ rte_memcpy(t, ver, RTE_DIM(ver));
+ t += RTE_DIM(ver);
+ rte_memcpy(t, n, nlen);
+ t += nlen;
+ rte_memcpy(t, e, elen);
+ t += elen;
+ rte_free(n);
+ rte_free(e);
+
+ rte_memcpy(t, d, dlen);
+ t += dlen;
+ rte_free(d);
+
+ rte_memcpy(t, p, plen);
+ t += plen;
+ rte_memcpy(t, q, qlen);
+ t += qlen;
+ rte_memcpy(t, dp, dplen);
+ t += dplen;
+ rte_memcpy(t, dq, dqlen);
+ t += dqlen;
+ rte_memcpy(t, qinv, qinvlen);
+ t += qinvlen;
+ rte_free(p);
+ rte_free(q);
+ rte_free(dp);
+ rte_free(dq);
+ rte_free(qinv);
+
+ t = *der;
+ tlen = tlv_encode(der, 0x30, t, tlen);
+ return tlen;
+}
+
+static int
+virtio_crypto_asym_configure_session(
+ struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *sess)
+{
+ struct virtio_crypto_akcipher_session_para *para;
+ struct virtio_crypto_op_ctrl_req *ctrl_req;
+ struct virtio_crypto_session *session;
+ struct virtio_crypto_hw *hw;
+ struct virtqueue *control_vq;
+ uint8_t *key = NULL;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = virtio_crypto_check_asym_configure_session_paras(dev, xform,
+ sess);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
+ return ret;
+ }
+
+ session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
+ memset(session, 0, sizeof(struct virtio_crypto_session));
+ ctrl_req = &session->ctrl;
+ ctrl_req->header.opcode = VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION;
+ /* FIXME: support multiqueue */
+ ctrl_req->header.queue_id = 0;
+ para = &ctrl_req->u.akcipher_create_session.para;
+
+ switch (xform->xform_type) {
+ case RTE_CRYPTO_ASYM_XFORM_RSA:
+ ctrl_req->header.algo = VIRTIO_CRYPTO_AKCIPHER_RSA;
+ para->algo = VIRTIO_CRYPTO_AKCIPHER_RSA;
+
+ if (xform->rsa.key_type == RTE_RSA_KEY_TYPE_EXP)
+ para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC;
+ else
+ para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE;
+
+ if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_RAW_PADDING;
+ } else if (xform->rsa.padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) {
+ para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_PKCS1_PADDING;
+ switch (xform->rsa.padding.hash) {
+ case RTE_CRYPTO_AUTH_SHA1:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_MD5:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_MD5;
+ break;
+ default:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_NO_HASH;
+ }
+ } else {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid padding type");
+ return -EINVAL;
+ }
+
+ ret = virtio_crypto_asym_rsa_xform_to_der(xform, &key);
+ if (ret <= 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid RSA primitives");
+ return ret;
+ }
+
+ ctrl_req->u.akcipher_create_session.para.keylen = ret;
+ break;
+ default:
+ para->algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ }
+
+ hw = dev->data->dev_private;
+ control_vq = hw->cvq;
+ ret = virtio_crypto_send_command(control_vq, ctrl_req,
+ key, NULL, session);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
+ goto error_out;
+ }
+
+ return 0;
+error_out:
+ return -1;
+}
+
static void
virtio_crypto_dev_info_get(struct rte_cryptodev *dev,
struct rte_cryptodev_info *info)
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index d02486661f..c456dc327e 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -343,6 +343,196 @@ virtqueue_crypto_sym_enqueue_xmit(
return 0;
}
+static int
+virtqueue_crypto_asym_pkt_header_arrange(
+ struct rte_crypto_op *cop,
+ struct virtio_crypto_op_data_req *data,
+ struct virtio_crypto_session *session)
+{
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_op_data_req *req_data = data;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+
+ req_data->header.session_id = session->session_id;
+
+ switch (ctrl->header.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ req_data->header.algo = ctrl->header.algo;
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_SIGN;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.message.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.sign.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_VERIFY;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.sign.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.message.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_ENCRYPT;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.message.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.cipher.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DECRYPT;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.cipher.length;
+ /* qemu does not accept zero size write buffer */
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.message.length;
+ } else {
+ return -EINVAL;
+ }
+
+ break;
+ default:
+ req_data->header.algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ }
+
+ return 0;
+}
+
+static int
+virtqueue_crypto_asym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t num_entry;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_desc *start_dp;
+ struct vring_desc *desc;
+ uint64_t indirect_op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ uint32_t indirect_vring_addr_offset = req_data_len +
+ sizeof(struct virtio_crypto_inhdr);
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_ASYM_SESS_PRIV(cop->asym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ indirect_op_data_req_phys_addr =
+ rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+ if (virtqueue_crypto_asym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ /* point to indirect vring entry */
+ desc = (struct vring_desc *)
+ ((uint8_t *)op_data_req + indirect_vring_addr_offset);
+ for (idx = 0; idx < (NUM_ENTRY_VIRTIO_CRYPTO_OP - 1); idx++)
+ desc[idx].next = idx + 1;
+ desc[NUM_ENTRY_VIRTIO_CRYPTO_OP - 1].next = VQ_RING_DESC_CHAIN_END;
+
+ idx = 0;
+
+ /* indirect vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = indirect_op_data_req_phys_addr;
+ desc[idx].len = req_data_len;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else {
+ VIRTIO_CRYPTO_TX_LOG_ERR("Invalid asym op");
+ return -EINVAL;
+ }
+
+ /* indirect vring: last part, status returned */
+ desc[idx].addr = indirect_op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = VRING_DESC_F_WRITE;
+
+ num_entry = idx;
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+ /* use a single buffer */
+ start_dp = txvq->vq_ring.desc;
+ start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
+ indirect_vring_addr_offset;
+ start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
+ start_dp[head_idx].flags = VRING_DESC_F_INDIRECT;
+
+ idx = start_dp[head_idx].next;
+ txvq->vq_desc_head_idx = idx;
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ vq_update_avail_ring(txvq, head_idx);
+
+ return 0;
+}
+
static int
virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
struct rte_crypto_op *cop)
@@ -353,6 +543,9 @@ virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
ret = virtqueue_crypto_sym_enqueue_xmit(txvq, cop);
break;
+ case RTE_CRYPTO_OP_TYPE_ASYMMETRIC:
+ ret = virtqueue_crypto_asym_enqueue_xmit(txvq, cop);
+ break;
default:
VIRTIO_CRYPTO_TX_LOG_ERR("invalid crypto op type %u",
cop->type);
@@ -475,27 +668,28 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
VIRTIO_CRYPTO_TX_LOG_DBG("%d packets to xmit", nb_pkts);
for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
- struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
- /* nb_segs is always 1 at virtio crypto situation */
- int need = txm->nb_segs - txvq->vq_free_cnt;
-
- /*
- * Positive value indicates it hasn't enough space in vring
- * descriptors
- */
- if (unlikely(need > 0)) {
+ if (tx_pkts[nb_tx]->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
+ /* nb_segs is always 1 at virtio crypto situation */
+ int need = txm->nb_segs - txvq->vq_free_cnt;
+
/*
- * try it again because the receive process may be
- * free some space
+ * Positive value indicates it hasn't enough space in vring
+ * descriptors
*/
- need = txm->nb_segs - txvq->vq_free_cnt;
if (unlikely(need > 0)) {
- VIRTIO_CRYPTO_TX_LOG_DBG("No free tx "
- "descriptors to transmit");
- break;
+ /*
+ * try it again because the receive process may be
+ * free some space
+ */
+ need = txm->nb_segs - txvq->vq_free_cnt;
+ if (unlikely(need > 0)) {
+ VIRTIO_CRYPTO_TX_LOG_DBG("No free tx "
+ "descriptors to transmit");
+ break;
+ }
}
}
-
txvq->packets_sent_total++;
/* Enqueue Packet buffers */
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 5c84a3b847..929c6defe9 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -715,6 +715,12 @@ struct rte_cryptodev_asym_session {
uint8_t sess_private_data[];
};
+/**
+ * Helper macro to get session private data
+ */
+#define CRYPTODEV_GET_ASYM_SESS_PRIV(s) \
+ ((void *)(((struct rte_cryptodev_asym_session *)s)->sess_private_data))
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/vhost/virtio_crypto.h b/lib/vhost/virtio_crypto.h
index 28877a5da3..d42af62f2f 100644
--- a/lib/vhost/virtio_crypto.h
+++ b/lib/vhost/virtio_crypto.h
@@ -9,6 +9,7 @@
#define VIRTIO_CRYPTO_SERVICE_HASH 1
#define VIRTIO_CRYPTO_SERVICE_MAC 2
#define VIRTIO_CRYPTO_SERVICE_AEAD 3
+#define VIRTIO_CRYPTO_SERVICE_AKCIPHER 4
#define VIRTIO_CRYPTO_OPCODE(service, op) (((service) << 8) | (op))
@@ -29,6 +30,10 @@ struct virtio_crypto_ctrl_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x02)
#define VIRTIO_CRYPTO_AEAD_DESTROY_SESSION \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x03)
+#define VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x04)
+#define VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x05)
uint32_t opcode;
uint32_t algo;
uint32_t flag;
@@ -152,6 +157,58 @@ struct virtio_crypto_aead_create_session_req {
uint8_t padding[32];
};
+struct virtio_crypto_rsa_session_para {
+#define VIRTIO_CRYPTO_RSA_RAW_PADDING 0
+#define VIRTIO_CRYPTO_RSA_PKCS1_PADDING 1
+ uint32_t padding_algo;
+
+#define VIRTIO_CRYPTO_RSA_NO_HASH 0
+#define VIRTIO_CRYPTO_RSA_MD2 1
+#define VIRTIO_CRYPTO_RSA_MD3 2
+#define VIRTIO_CRYPTO_RSA_MD4 3
+#define VIRTIO_CRYPTO_RSA_MD5 4
+#define VIRTIO_CRYPTO_RSA_SHA1 5
+#define VIRTIO_CRYPTO_RSA_SHA256 6
+#define VIRTIO_CRYPTO_RSA_SHA384 7
+#define VIRTIO_CRYPTO_RSA_SHA512 8
+#define VIRTIO_CRYPTO_RSA_SHA224 9
+ uint32_t hash_algo;
+};
+
+struct virtio_crypto_ecdsa_session_para {
+#define VIRTIO_CRYPTO_CURVE_UNKNOWN 0
+#define VIRTIO_CRYPTO_CURVE_NIST_P192 1
+#define VIRTIO_CRYPTO_CURVE_NIST_P224 2
+#define VIRTIO_CRYPTO_CURVE_NIST_P256 3
+#define VIRTIO_CRYPTO_CURVE_NIST_P384 4
+#define VIRTIO_CRYPTO_CURVE_NIST_P521 5
+ uint32_t curve_id;
+ uint32_t padding;
+};
+
+struct virtio_crypto_akcipher_session_para {
+#define VIRTIO_CRYPTO_NO_AKCIPHER 0
+#define VIRTIO_CRYPTO_AKCIPHER_RSA 1
+#define VIRTIO_CRYPTO_AKCIPHER_DSA 2
+#define VIRTIO_CRYPTO_AKCIPHER_ECDSA 3
+ uint32_t algo;
+
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC 1
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE 2
+ uint32_t keytype;
+ uint32_t keylen;
+
+ union {
+ struct virtio_crypto_rsa_session_para rsa;
+ struct virtio_crypto_ecdsa_session_para ecdsa;
+ } u;
+};
+
+struct virtio_crypto_akcipher_create_session_req {
+ struct virtio_crypto_akcipher_session_para para;
+ uint8_t padding[36];
+};
+
struct virtio_crypto_alg_chain_session_para {
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER 1
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH 2
@@ -219,6 +276,8 @@ struct virtio_crypto_op_ctrl_req {
mac_create_session;
struct virtio_crypto_aead_create_session_req
aead_create_session;
+ struct virtio_crypto_akcipher_create_session_req
+ akcipher_create_session;
struct virtio_crypto_destroy_session_req
destroy_session;
uint8_t padding[56];
@@ -238,6 +297,14 @@ struct virtio_crypto_op_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x00)
#define VIRTIO_CRYPTO_AEAD_DECRYPT \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_ENCRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x00)
+#define VIRTIO_CRYPTO_AKCIPHER_DECRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_SIGN \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x02)
+#define VIRTIO_CRYPTO_AKCIPHER_VERIFY \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x03)
uint32_t opcode;
/* algo should be service-specific algorithms */
uint32_t algo;
@@ -362,6 +429,16 @@ struct virtio_crypto_aead_data_req {
uint8_t padding[32];
};
+struct virtio_crypto_akcipher_para {
+ uint32_t src_data_len;
+ uint32_t dst_data_len;
+};
+
+struct virtio_crypto_akcipher_data_req {
+ struct virtio_crypto_akcipher_para para;
+ uint8_t padding[40];
+};
+
/* The request of the data virtqueue's packet */
struct virtio_crypto_op_data_req {
struct virtio_crypto_op_header header;
@@ -371,6 +448,7 @@ struct virtio_crypto_op_data_req {
struct virtio_crypto_hash_data_req hash_req;
struct virtio_crypto_mac_data_req mac_req;
struct virtio_crypto_aead_data_req aead_req;
+ struct virtio_crypto_akcipher_data_req akcipher_req;
uint8_t padding[48];
} u;
};
@@ -380,6 +458,8 @@ struct virtio_crypto_op_data_req {
#define VIRTIO_CRYPTO_BADMSG 2
#define VIRTIO_CRYPTO_NOTSUPP 3
#define VIRTIO_CRYPTO_INVSESS 4 /* Invalid session id */
+#define VIRTIO_CRYPTO_NOSPC 5 /* no free session ID */
+#define VIRTIO_CRYPTO_KEY_REJECTED 6 /* Signature verification failed */
/* The accelerator hardware is ready */
#define VIRTIO_CRYPTO_S_HW_READY (1 << 0)
--
2.25.1
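For readers following the enqueue path above: it consumes the standard
rte_crypto_asym_op RSA fields, so submission from an application looks
the same as for any other asymmetric cryptodev. Below is a minimal,
illustrative sketch (not part of the patch) of building and submitting
one RSA sign operation; dev_id, qp_id, op_pool and the created session
are assumed to exist, and the helper name rsa_sign_once is hypothetical:

    #include <rte_crypto.h>
    #include <rte_crypto_asym.h>
    #include <rte_cryptodev.h>

    /* Illustrative sketch: dev_id, qp_id, op_pool and sess assumed valid. */
    static int
    rsa_sign_once(uint8_t dev_id, uint16_t qp_id, struct rte_mempool *op_pool,
            struct rte_cryptodev_asym_session *sess,
            uint8_t *msg, size_t msg_len, uint8_t *sig, size_t sig_len)
    {
        struct rte_crypto_op *op, *done = NULL;

        op = rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
        if (op == NULL)
            return -1;

        rte_crypto_op_attach_asym_session(op, sess);
        op->asym->rsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
        op->asym->rsa.message.data = msg;
        op->asym->rsa.message.length = msg_len;
        /* dst buffer must be non-zero sized (see the QEMU note above) */
        op->asym->rsa.sign.data = sig;
        op->asym->rsa.sign.length = sig_len;

        if (rte_cryptodev_enqueue_burst(dev_id, qp_id, &op, 1) != 1) {
            rte_crypto_op_free(op);
            return -1;
        }
        /* busy-poll for completion; a real application would bound this */
        while (rte_cryptodev_dequeue_burst(dev_id, qp_id, &done, 1) == 0)
            ;
        return done->status == RTE_CRYPTO_OP_STATUS_SUCCESS ? 0 : -1;
    }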
* [v2 2/2] test/crypto: add asymmetric tests for virtio PMD
2025-01-07 17:52 ` [v2 0/2] crypto/virtio: add RSA support Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 1/2] crypto/virtio: add asymmetric " Gowrishankar Muthukrishnan
@ 2025-01-07 17:52 ` Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
2 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 17:52 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
Add asymmetric tests for Virtio PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
app/test/test_cryptodev_asym.c | 29 ++++++++++++++++++++++
app/test/test_cryptodev_rsa_test_vectors.h | 4 +++
2 files changed, 33 insertions(+)
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 364e81ecd9..ec7ab05a2d 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -3997,6 +3997,19 @@ static struct unit_test_suite cryptodev_octeontx_asym_testsuite = {
}
};
+static struct unit_test_suite cryptodev_virtio_asym_testsuite = {
+ .suite_name = "Crypto Device VIRTIO ASYM Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_capability),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym,
+ test_rsa_sign_verify_crt),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_rsa_enc_dec_crt),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
static int
test_cryptodev_openssl_asym(void)
{
@@ -4065,6 +4078,22 @@ test_cryptodev_cn10k_asym(void)
return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
}
+static int
+test_cryptodev_virtio_asym(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_VIRTIO_PMD));
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "virtio PMD must be loaded.\n");
+ return TEST_FAILED;
+ }
+
+ /* Use test suite registered for crypto_virtio PMD */
+ return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
+}
+
+REGISTER_DRIVER_TEST(cryptodev_virtio_asym_autotest, test_cryptodev_virtio_asym);
+
REGISTER_DRIVER_TEST(cryptodev_openssl_asym_autotest, test_cryptodev_openssl_asym);
REGISTER_DRIVER_TEST(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_DRIVER_TEST(cryptodev_octeontx_asym_autotest, test_cryptodev_octeontx_asym);
diff --git a/app/test/test_cryptodev_rsa_test_vectors.h b/app/test/test_cryptodev_rsa_test_vectors.h
index 1b7b451387..52d054c7d9 100644
--- a/app/test/test_cryptodev_rsa_test_vectors.h
+++ b/app/test/test_cryptodev_rsa_test_vectors.h
@@ -377,6 +377,10 @@ struct rte_crypto_asym_xform rsa_xform_crt = {
.length = sizeof(rsa_e)
},
.key_type = RTE_RSA_KEY_TYPE_QT,
+ .d = {
+ .data = rsa_d,
+ .length = sizeof(rsa_d)
+ },
.qt = {
.p = {
.data = rsa_p,
--
2.25.1
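As a usage note: once registered, the suite can be run like the other
asymmetric autotests, for example by launching the dpdk-test binary with
a virtio crypto device available and entering
cryptodev_virtio_asym_autotest at the RTE>> prompt (or setting
DPDK_TEST=cryptodev_virtio_asym_autotest in the environment). Device and
vdev setup are not shown here.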
* [v2 0/2] vhost: add RSA support
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (16 preceding siblings ...)
2025-01-07 17:52 ` [v2 0/2] crypto/virtio: add RSA support Gowrishankar Muthukrishnan
@ 2025-01-07 18:02 ` Gowrishankar Muthukrishnan
2025-01-07 18:02 ` [v2 1/2] vhost: add asymmetric " Gowrishankar Muthukrishnan
` (2 more replies)
2025-01-07 18:08 ` [v2 0/2] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 0/4] crypto/virtio: add vDPA backend support Gowrishankar Muthukrishnan
19 siblings, 3 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:02 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
This series adds RSA support to the vhost_crypto library.
v2:
- split from v1 series.
Depends-on: series-34291 ("crypto/virtio: add RSA support")
Gowrishankar Muthukrishnan (2):
vhost: add asymmetric RSA support
examples/vhost_crypto: add asymmetric support
examples/vhost_crypto/main.c | 54 +++-
lib/vhost/vhost_crypto.c | 504 ++++++++++++++++++++++++++++++++---
lib/vhost/vhost_user.h | 33 ++-
3 files changed, 538 insertions(+), 53 deletions(-)
--
2.25.1
* [v2 1/2] vhost: add asymmetric RSA support
2025-01-07 18:02 ` [v2 0/2] vhost: add RSA support Gowrishankar Muthukrishnan
@ 2025-01-07 18:02 ` Gowrishankar Muthukrishnan
2025-01-29 16:07 ` Maxime Coquelin
2025-01-07 18:02 ` [v2 2/2] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
2 siblings, 1 reply; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:02 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
Support asymmetric RSA crypto operations in vhost-user.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
Depends-on: series-34291 ("crypto/virtio: add RSA support")
lib/vhost/vhost_crypto.c | 504 ++++++++++++++++++++++++++++++++++++---
lib/vhost/vhost_user.h | 33 ++-
2 files changed, 498 insertions(+), 39 deletions(-)
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 7caf6d9afa..6ce06ef42b 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -54,6 +54,14 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO);
*/
#define vhost_crypto_desc vring_desc
+struct vhost_crypto_session {
+ union {
+ struct rte_cryptodev_asym_session *asym;
+ struct rte_cryptodev_sym_session *sym;
+ };
+ enum rte_crypto_op_type type;
+};
+
static int
cipher_algo_transform(uint32_t virtio_cipher_algo,
enum rte_crypto_cipher_algorithm *algo)
@@ -206,8 +214,10 @@ struct __rte_cache_aligned vhost_crypto {
uint64_t last_session_id;
- uint64_t cache_session_id;
- struct rte_cryptodev_sym_session *cache_session;
+ uint64_t cache_sym_session_id;
+ struct rte_cryptodev_sym_session *cache_sym_session;
+ uint64_t cache_asym_session_id;
+ struct rte_cryptodev_asym_session *cache_asym_session;
/** socket id for the device */
int socket_id;
@@ -237,7 +247,7 @@ struct vhost_crypto_data_req {
static int
transform_cipher_param(struct rte_crypto_sym_xform *xform,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
int ret;
@@ -273,7 +283,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform,
static int
transform_chain_param(struct rte_crypto_sym_xform *xforms,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
struct rte_crypto_sym_xform *xform_cipher, *xform_auth;
int ret;
@@ -334,17 +344,17 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms,
}
static void
-vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+vhost_crypto_create_sym_sess(struct vhost_crypto *vcrypto,
VhostUserCryptoSessionParam *sess_param)
{
struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
struct rte_cryptodev_sym_session *session;
int ret;
- switch (sess_param->op_type) {
+ switch (sess_param->u.sym_sess.op_type) {
case VIRTIO_CRYPTO_SYM_OP_NONE:
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- ret = transform_cipher_param(&xform1, sess_param);
+ ret = transform_cipher_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session msg (%i)", ret);
sess_param->session_id = ret;
@@ -352,7 +362,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
}
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- if (unlikely(sess_param->hash_mode !=
+ if (unlikely(sess_param->u.sym_sess.hash_mode !=
VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)) {
sess_param->session_id = -VIRTIO_CRYPTO_NOTSUPP;
VC_LOG_ERR("Error transform session message (%i)",
@@ -362,7 +372,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
xform1.next = &xform2;
- ret = transform_chain_param(&xform1, sess_param);
+ ret = transform_chain_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session message (%i)", ret);
sess_param->session_id = ret;
@@ -402,22 +412,264 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
vcrypto->last_session_id++;
}
+static int
+tlv_decode(uint8_t *tlv, uint8_t type, uint8_t **data, size_t *data_len)
+{
+ int tlen = -EINVAL, len;
+
+ if (tlv[0] != type)
+ return -EINVAL;
+
+ if (tlv[1] == 0x82) {
+ len = (tlv[2] << 8) | tlv[3];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[4], len);
+ tlen = len + 4;
+ } else if (tlv[1] == 0x81) {
+ len = tlv[2];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[3], len);
+ tlen = len + 3;
+ } else {
+ len = tlv[1];
+ *data = rte_malloc(NULL, len, 0);
+ rte_memcpy(*data, &tlv[2], len);
+ tlen = len + 2;
+ }
+
+ *data_len = len;
+ return tlen;
+}
+
+static int
+virtio_crypto_asym_rsa_der_to_xform(uint8_t *der, size_t der_len,
+ struct rte_crypto_asym_xform *xform)
+{
+ uint8_t *n = NULL, *e = NULL, *d = NULL, *p = NULL, *q = NULL, *dp = NULL,
+ *dq = NULL, *qinv = NULL, *v = NULL, *tlv;
+ size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, vlen;
+ int len;
+
+ RTE_SET_USED(der_len);
+
+ if (der[0] != 0x30)
+ return -EINVAL;
+
+ if (der[1] == 0x82)
+ tlv = &der[4];
+ else if (der[1] == 0x81)
+ tlv = &der[3];
+ else
+ return -EINVAL;
+
+ len = tlv_decode(tlv, 0x02, &v, &vlen);
+ if (len < 0 || v[0] != 0x0 || vlen != 1) {
+ len = -EINVAL;
+ goto _error;
+ }
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &n, &nlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &e, &elen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &d, &dlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &p, &plen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &q, &qlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dp, &dplen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dq, &dqlen);
+ if (len < 0)
+ goto _error;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &qinv, &qinvlen);
+ if (len < 0)
+ goto _error;
+
+ xform->rsa.n.data = n;
+ xform->rsa.n.length = nlen;
+ xform->rsa.e.data = e;
+ xform->rsa.e.length = elen;
+ xform->rsa.d.data = d;
+ xform->rsa.d.length = dlen;
+ xform->rsa.qt.p.data = p;
+ xform->rsa.qt.p.length = plen;
+ xform->rsa.qt.q.data = q;
+ xform->rsa.qt.q.length = qlen;
+ xform->rsa.qt.dP.data = dp;
+ xform->rsa.qt.dP.length = dplen;
+ xform->rsa.qt.dQ.data = dq;
+ xform->rsa.qt.dQ.length = dqlen;
+ xform->rsa.qt.qInv.data = qinv;
+ xform->rsa.qt.qInv.length = qinvlen;
+
+ RTE_ASSERT((tlv + len - &der[0]) == der_len);
+ return 0;
+_error:
+ rte_free(v);
+ rte_free(n);
+ rte_free(e);
+ rte_free(d);
+ rte_free(p);
+ rte_free(q);
+ rte_free(dp);
+ rte_free(dq);
+ rte_free(qinv);
+ return len;
+}
+
+static int
+transform_rsa_param(struct rte_crypto_asym_xform *xform,
+ VhostUserCryptoAsymSessionParam *param)
+{
+ int ret = -EINVAL;
+
+ ret = virtio_crypto_asym_rsa_der_to_xform(param->key_buf, param->key_len, xform);
+ if (ret < 0)
+ goto _error;
+
+ switch (param->u.rsa.padding_algo) {
+ case VIRTIO_CRYPTO_RSA_RAW_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_NONE;
+ break;
+ case VIRTIO_CRYPTO_RSA_PKCS1_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
+ break;
+ default:
+ VC_LOG_ERR("Unknown padding type");
+ goto _error;
+ }
+
+ xform->rsa.key_type = RTE_RSA_KEY_TYPE_QT;
+ xform->xform_type = RTE_CRYPTO_ASYM_XFORM_RSA;
+_error:
+ return ret;
+}
+
+static void
+vhost_crypto_create_asym_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ struct rte_cryptodev_asym_session *session = NULL;
+ struct vhost_crypto_session *vhost_session;
+ struct rte_crypto_asym_xform xform = {0};
+ int ret;
+
+ switch (sess_param->u.asym_sess.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ ret = transform_rsa_param(&xform, &sess_param->u.asym_sess);
+ if (unlikely(ret)) {
+ VC_LOG_ERR("Error transform session msg (%i)", ret);
+ sess_param->session_id = ret;
+ return;
+ }
+ break;
+ default:
+ VC_LOG_ERR("Invalid op algo");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ ret = rte_cryptodev_asym_session_create(vcrypto->cid, &xform,
+ vcrypto->sess_pool, (void *)&session);
+ if (!session) {
+ VC_LOG_ERR("Failed to create session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ /* insert session to map */
+ vhost_session = rte_malloc(NULL, sizeof(*vhost_session), 0);
+ if (vhost_session == NULL) {
+ VC_LOG_ERR("Failed to alloc session memory");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ vhost_session->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ vhost_session->asym = session;
+ if ((rte_hash_add_key_data(vcrypto->session_map,
+ &vcrypto->last_session_id, vhost_session) < 0)) {
+ VC_LOG_ERR("Failed to insert session to hash table");
+
+ if (rte_cryptodev_asym_session_free(vcrypto->cid, session) < 0)
+ VC_LOG_ERR("Failed to free session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ VC_LOG_INFO("Session %"PRIu64" created for vdev %i.",
+ vcrypto->last_session_id, vcrypto->dev->vid);
+
+ sess_param->session_id = vcrypto->last_session_id;
+ vcrypto->last_session_id++;
+}
+
+static void
+vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ if (sess_param->op_code == VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION)
+ vhost_crypto_create_asym_sess(vcrypto, sess_param);
+ else
+ vhost_crypto_create_sym_sess(vcrypto, sess_param);
+}
+
static int
vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
{
- struct rte_cryptodev_sym_session *session;
+ struct rte_cryptodev_asym_session *asym_session = NULL;
+ struct rte_cryptodev_sym_session *sym_session = NULL;
+ struct vhost_crypto_session *vhost_session = NULL;
uint64_t sess_id = session_id;
int ret;
ret = rte_hash_lookup_data(vcrypto->session_map, &sess_id,
- (void **)&session);
-
+ (void **)&vhost_session);
if (unlikely(ret < 0)) {
- VC_LOG_ERR("Failed to delete session %"PRIu64".", session_id);
+ VC_LOG_ERR("Failed to find session for id %"PRIu64".", session_id);
+ return -VIRTIO_CRYPTO_INVSESS;
+ }
+
+ if (vhost_session->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ sym_session = vhost_session->sym;
+ } else if (vhost_session->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ asym_session = vhost_session->asym;
+ } else {
+ VC_LOG_ERR("Invalid session for id %"PRIu64".", session_id);
return -VIRTIO_CRYPTO_INVSESS;
}
- if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0) {
+ if (sym_session != NULL &&
+ rte_cryptodev_sym_session_free(vcrypto->cid, sym_session) < 0) {
+ VC_LOG_DBG("Failed to free session");
+ return -VIRTIO_CRYPTO_ERR;
+ }
+
+ if (asym_session != NULL &&
+ rte_cryptodev_asym_session_free(vcrypto->cid, asym_session) < 0) {
VC_LOG_DBG("Failed to free session");
return -VIRTIO_CRYPTO_ERR;
}
@@ -430,6 +682,7 @@ vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
VC_LOG_INFO("Session %"PRIu64" deleted for vdev %i.", sess_id,
vcrypto->dev->vid);
+ rte_free(vhost_session);
return 0;
}
@@ -1123,6 +1376,118 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline uint8_t
+vhost_crypto_check_akcipher_request(struct virtio_crypto_akcipher_data_req *req)
+{
+ RTE_SET_USED(req);
+ return VIRTIO_CRYPTO_OK;
+}
+
+static __rte_always_inline uint8_t
+prepare_asym_rsa_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
+ struct vhost_crypto_data_req *vc_req,
+ struct virtio_crypto_op_data_req *req,
+ struct vhost_crypto_desc *head,
+ uint32_t max_n_descs)
+{
+ uint8_t ret = vhost_crypto_check_akcipher_request(&req->u.akcipher_req);
+ struct rte_crypto_rsa_op_param *rsa = &op->asym->rsa;
+ struct vhost_crypto_desc *desc = head;
+ uint16_t wlen = 0;
+
+ if (unlikely(ret != VIRTIO_CRYPTO_OK))
+ goto error_exit;
+
+ /* prepare */
+ switch (vcrypto->option) {
+ case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
+ vc_req->wb_pool = vcrypto->wb_pool;
+ if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_SIGN) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_SIGN;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->sign.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->sign.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->sign.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_VERIFY;
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->sign.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_ENCRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->cipher.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->cipher.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->cipher.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_DECRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->cipher.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else {
+ goto error_exit;
+ }
+ break;
+ case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE:
+ default:
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ vc_req->inhdr = get_data_ptr(vc_req, desc, VHOST_ACCESS_WO);
+ if (unlikely(vc_req->inhdr == NULL)) {
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ vc_req->inhdr->status = VIRTIO_CRYPTO_OK;
+ vc_req->len = wlen + INHDR_LEN;
+ return 0;
+error_exit:
+ if (vc_req->wb)
+ free_wb_data(vc_req->wb, vc_req->wb_pool);
+
+ vc_req->len = INHDR_LEN;
+ return ret;
+}
+
/**
* Process on descriptor
*/
@@ -1133,17 +1498,21 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
uint16_t desc_idx)
__rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? */
{
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src);
- struct rte_cryptodev_sym_session *session;
+ struct vhost_crypto_data_req *vc_req, *vc_req_out;
+ struct rte_cryptodev_asym_session *asym_session;
+ struct rte_cryptodev_sym_session *sym_session;
+ struct vhost_crypto_session *vhost_session;
+ struct vhost_crypto_desc *desc = descs;
+ uint32_t nb_descs = 0, max_n_descs, i;
+ struct vhost_crypto_data_req data_req;
struct virtio_crypto_op_data_req req;
struct virtio_crypto_inhdr *inhdr;
- struct vhost_crypto_desc *desc = descs;
struct vring_desc *src_desc;
uint64_t session_id;
uint64_t dlen;
- uint32_t nb_descs = 0, max_n_descs, i;
int err;
+ vc_req = &data_req;
vc_req->desc_idx = desc_idx;
vc_req->dev = vcrypto->dev;
vc_req->vq = vq;
@@ -1226,12 +1595,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
switch (req.header.opcode) {
case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
case VIRTIO_CRYPTO_CIPHER_DECRYPT:
+ vc_req_out = rte_mbuf_to_priv(op->sym->m_src);
+ rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
session_id = req.header.session_id;
/* one branch to avoid unnecessary table lookup */
- if (vcrypto->cache_session_id != session_id) {
+ if (vcrypto->cache_sym_session_id != session_id) {
err = rte_hash_lookup_data(vcrypto->session_map,
- &session_id, (void **)&session);
+ &session_id, (void **)&vhost_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to find session %"PRIu64,
@@ -1239,13 +1610,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
goto error_exit;
}
- vcrypto->cache_session = session;
- vcrypto->cache_session_id = session_id;
+ vcrypto->cache_sym_session = vhost_session->sym;
+ vcrypto->cache_sym_session_id = session_id;
}
- session = vcrypto->cache_session;
+ sym_session = vcrypto->cache_sym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
- err = rte_crypto_op_attach_sym_session(op, session);
+ err = rte_crypto_op_attach_sym_session(op, sym_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to attach session to op");
@@ -1257,12 +1629,12 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
err = VIRTIO_CRYPTO_NOTSUPP;
break;
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- err = prepare_sym_cipher_op(vcrypto, op, vc_req,
+ err = prepare_sym_cipher_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.cipher, desc,
max_n_descs);
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- err = prepare_sym_chain_op(vcrypto, op, vc_req,
+ err = prepare_sym_chain_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.chain, desc,
max_n_descs);
break;
@@ -1271,6 +1643,53 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
VC_LOG_ERR("Failed to process sym request");
goto error_exit;
}
+ break;
+ case VIRTIO_CRYPTO_AKCIPHER_SIGN:
+ case VIRTIO_CRYPTO_AKCIPHER_VERIFY:
+ case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
+ case VIRTIO_CRYPTO_AKCIPHER_DECRYPT:
+ session_id = req.header.session_id;
+
+ /* one branch to avoid unnecessary table lookup */
+ if (vcrypto->cache_asym_session_id != session_id) {
+ err = rte_hash_lookup_data(vcrypto->session_map,
+ &session_id, (void **)&vhost_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to find asym session %"PRIu64,
+ session_id);
+ goto error_exit;
+ }
+
+ vcrypto->cache_asym_session = vhost_session->asym;
+ vcrypto->cache_asym_session_id = session_id;
+ }
+
+ asym_session = vcrypto->cache_asym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
+ err = rte_crypto_op_attach_asym_session(op, asym_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to attach asym session to op");
+ goto error_exit;
+ }
+
+ vc_req_out = rte_cryptodev_asym_session_get_user_data(asym_session);
+ rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
+ vc_req_out->wb = NULL;
+
+ switch (req.header.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ err = prepare_asym_rsa_op(vcrypto, op, vc_req_out,
+ &req, desc, max_n_descs);
+ break;
+ }
+ if (unlikely(err != 0)) {
+ VC_LOG_ERR("Failed to process asym request");
+ goto error_exit;
+ }
+
break;
default:
err = VIRTIO_CRYPTO_ERR;
@@ -1294,12 +1713,22 @@ static __rte_always_inline struct vhost_virtqueue *
vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
struct vhost_virtqueue *old_vq)
{
- struct rte_mbuf *m_src = op->sym->m_src;
- struct rte_mbuf *m_dst = op->sym->m_dst;
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(m_src);
+ struct rte_mbuf *m_src = NULL, *m_dst = NULL;
+ struct vhost_crypto_data_req *vc_req;
struct vhost_virtqueue *vq;
uint16_t used_idx, desc_idx;
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ m_src = op->sym->m_src;
+ m_dst = op->sym->m_dst;
+ vc_req = rte_mbuf_to_priv(m_src);
+ } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ vc_req = rte_cryptodev_asym_session_get_user_data(op->asym->session);
+ } else {
+ VC_LOG_ERR("Invalid crypto op type");
+ return NULL;
+ }
+
if (unlikely(!vc_req)) {
VC_LOG_ERR("Failed to retrieve vc_req");
return NULL;
@@ -1321,10 +1750,11 @@ vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
vq->used->ring[desc_idx].id = vq->avail->ring[desc_idx];
vq->used->ring[desc_idx].len = vc_req->len;
- rte_mempool_put(m_src->pool, (void *)m_src);
-
- if (m_dst)
- rte_mempool_put(m_dst->pool, (void *)m_dst);
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ rte_mempool_put(m_src->pool, (void *)m_src);
+ if (m_dst)
+ rte_mempool_put(m_dst->pool, (void *)m_dst);
+ }
return vc_req->vq;
}
@@ -1407,8 +1837,9 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
vcrypto->sess_pool = sess_pool;
vcrypto->cid = cryptodev_id;
- vcrypto->cache_session_id = UINT64_MAX;
- vcrypto->last_session_id = 1;
+ vcrypto->cache_sym_session_id = UINT64_MAX;
+ vcrypto->cache_asym_session_id = UINT64_MAX;
+ vcrypto->last_session_id = 0;
vcrypto->dev = dev;
vcrypto->option = RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE;
@@ -1580,6 +2011,9 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
vq = dev->virtqueue[qid];
+ if (!vq || !vq->avail)
+ return 0;
+
avail_idx = *((volatile uint16_t *)&vq->avail->idx);
start_idx = vq->last_used_idx;
count = avail_idx - start_idx;
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index edf7adb3c0..3b9e3ce7c2 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -99,11 +99,10 @@ typedef struct VhostUserLog {
/* Comply with Cryptodev-Linux */
#define VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH 512
#define VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH 64
+#define VHOST_USER_CRYPTO_MAX_KEY_LENGTH 1024
/* Same structure as vhost-user backend session info */
-typedef struct VhostUserCryptoSessionParam {
- int64_t session_id;
- uint32_t op_code;
+typedef struct VhostUserCryptoSymSessionParam {
uint32_t cipher_algo;
uint32_t cipher_key_len;
uint32_t hash_algo;
@@ -114,10 +113,36 @@ typedef struct VhostUserCryptoSessionParam {
uint8_t dir;
uint8_t hash_mode;
uint8_t chaining_dir;
- uint8_t *ciphe_key;
+ uint8_t *cipher_key;
uint8_t *auth_key;
uint8_t cipher_key_buf[VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH];
uint8_t auth_key_buf[VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH];
+} VhostUserCryptoSymSessionParam;
+
+
+typedef struct VhostUserCryptoAsymRsaParam {
+ uint32_t padding_algo;
+ uint32_t hash_algo;
+} VhostUserCryptoAsymRsaParam;
+
+typedef struct VhostUserCryptoAsymSessionParam {
+ uint32_t algo;
+ uint32_t key_type;
+ uint32_t key_len;
+ uint8_t *key;
+ union {
+ VhostUserCryptoAsymRsaParam rsa;
+ } u;
+ uint8_t key_buf[VHOST_USER_CRYPTO_MAX_KEY_LENGTH];
+} VhostUserCryptoAsymSessionParam;
+
+typedef struct VhostUserCryptoSessionParam {
+ uint32_t op_code;
+ union {
+ VhostUserCryptoSymSessionParam sym_sess;
+ VhostUserCryptoAsymSessionParam asym_sess;
+ } u;
+ uint64_t session_id;
} VhostUserCryptoSessionParam;
typedef struct VhostUserVringArea {
--
2.25.1
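For reference, virtio_crypto_asym_rsa_der_to_xform() above walks a
PKCS#1 RSAPrivateKey, i.e. a DER SEQUENCE of INTEGERs in the order
version, n, e, d, p, q, dP, dQ, qInv. A self-contained sketch of the
DER header parsing that tlv_decode() relies on follows; it handles only
the short form and the 0x81/0x82 long forms, matching the patch, and
the helper name der_header is illustrative:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Parse one DER TLV header: check the tag, decode the length and
     * return the number of header bytes, or -1 on a tag mismatch.
     */
    static int
    der_header(const uint8_t *tlv, uint8_t tag, size_t *len)
    {
        if (tlv[0] != tag)
            return -1;
        if (tlv[1] == 0x82) {           /* long form, two length bytes */
            *len = ((size_t)tlv[2] << 8) | tlv[3];
            return 4;
        }
        if (tlv[1] == 0x81) {           /* long form, one length byte */
            *len = tlv[2];
            return 3;
        }
        *len = tlv[1];                  /* short form, length < 0x80 */
        return 2;
    }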
* [v2 2/2] examples/vhost_crypto: add asymmetric support
2025-01-07 18:02 ` [v2 0/2] vhost: add RSA support Gowrishankar Muthukrishnan
2025-01-07 18:02 ` [v2 1/2] vhost: add asymmetric " Gowrishankar Muthukrishnan
@ 2025-01-07 18:02 ` Gowrishankar Muthukrishnan
2025-01-29 16:13 ` Maxime Coquelin
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
2 siblings, 1 reply; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:02 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
Add asymmetric crypto support to the vhost_crypto sample application.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
examples/vhost_crypto/main.c | 54 ++++++++++++++++++++++++++----------
1 file changed, 40 insertions(+), 14 deletions(-)
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index 558c09a60f..8bdfc40c4b 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -59,6 +59,7 @@ struct vhost_crypto_options {
uint32_t nb_los;
uint32_t zero_copy;
uint32_t guest_polling;
+ bool asymmetric_crypto;
} options;
enum {
@@ -70,6 +71,8 @@ enum {
OPT_ZERO_COPY_NUM,
#define OPT_POLLING "guest-polling"
OPT_POLLING_NUM,
+#define OPT_ASYM "asymmetric-crypto"
+ OPT_ASYM_NUM,
};
#define NB_SOCKET_FIELDS (2)
@@ -202,9 +205,10 @@ vhost_crypto_usage(const char *prgname)
" --%s <lcore>,SOCKET-FILE-PATH\n"
" --%s (lcore,cdev_id,queue_id)[,(lcore,cdev_id,queue_id)]\n"
" --%s: zero copy\n"
- " --%s: guest polling\n",
+ " --%s: guest polling\n"
+ " --%s: asymmetric crypto\n",
prgname, OPT_SOCKET_FILE, OPT_CONFIG,
- OPT_ZERO_COPY, OPT_POLLING);
+ OPT_ZERO_COPY, OPT_POLLING, OPT_ASYM);
}
static int
@@ -223,6 +227,8 @@ vhost_crypto_parse_args(int argc, char **argv)
NULL, OPT_ZERO_COPY_NUM},
{OPT_POLLING, no_argument,
NULL, OPT_POLLING_NUM},
+ {OPT_ASYM, no_argument,
+ NULL, OPT_ASYM_NUM},
{NULL, 0, 0, 0}
};
@@ -262,6 +268,10 @@ vhost_crypto_parse_args(int argc, char **argv)
options.guest_polling = 1;
break;
+ case OPT_ASYM_NUM:
+ options.asymmetric_crypto = true;
+ break;
+
default:
vhost_crypto_usage(prgname);
return -EINVAL;
@@ -362,8 +372,8 @@ destroy_device(int vid)
}
static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
- .new_device = new_device,
- .destroy_device = destroy_device,
+ .new_connection = new_device,
+ .destroy_connection = destroy_device,
};
static int
@@ -376,6 +386,7 @@ vhost_crypto_worker(void *arg)
int callfds[VIRTIO_CRYPTO_MAX_NUM_BURST_VQS];
uint32_t lcore_id = rte_lcore_id();
uint32_t burst_size = MAX_PKT_BURST;
+ enum rte_crypto_op_type cop_type;
uint32_t i, j, k;
uint32_t to_fetch, fetched;
@@ -383,9 +394,13 @@ vhost_crypto_worker(void *arg)
RTE_LOG(INFO, USER1, "Processing on Core %u started\n", lcore_id);
+ cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ if (options.asymmetric_crypto)
+ cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
for (i = 0; i < NB_VIRTIO_QUEUES; i++) {
if (rte_crypto_op_bulk_alloc(info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops[i],
+ cop_type, ops[i],
burst_size) < burst_size) {
RTE_LOG(ERR, USER1, "Failed to alloc cops\n");
ret = -1;
@@ -411,12 +426,11 @@ vhost_crypto_worker(void *arg)
fetched);
if (unlikely(rte_crypto_op_bulk_alloc(
info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ cop_type,
ops[j], fetched) < fetched)) {
RTE_LOG(ERR, USER1, "Failed realloc\n");
return -1;
}
-
fetched = rte_cryptodev_dequeue_burst(
info->cid, info->qid,
ops_deq[j], RTE_MIN(burst_size,
@@ -477,6 +491,7 @@ main(int argc, char *argv[])
struct rte_cryptodev_qp_conf qp_conf;
struct rte_cryptodev_config config;
struct rte_cryptodev_info dev_info;
+ enum rte_crypto_op_type cop_type;
char name[128];
uint32_t i, j, lcore;
int ret;
@@ -539,12 +554,21 @@ main(int argc, char *argv[])
goto error_exit;
}
- snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
- info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
- SESSION_MAP_ENTRIES,
- rte_cryptodev_sym_get_private_session_size(
- info->cid), 0, 0,
- rte_lcore_to_socket_id(lo->lcore_id));
+ if (!options.asymmetric_crypto) {
+ snprintf(name, 127, "SYM_SESS_POOL_%u", lo->lcore_id);
+ info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
+ SESSION_MAP_ENTRIES,
+ rte_cryptodev_sym_get_private_session_size(
+ info->cid), 0, 0,
+ rte_lcore_to_socket_id(lo->lcore_id));
+ cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ } else {
+ snprintf(name, 127, "ASYM_SESS_POOL_%u", lo->lcore_id);
+ info->sess_pool = rte_cryptodev_asym_session_pool_create(name,
+ SESSION_MAP_ENTRIES, 0, 64,
+ rte_lcore_to_socket_id(lo->lcore_id));
+ cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ }
if (!info->sess_pool) {
RTE_LOG(ERR, USER1, "Failed to create mempool");
@@ -553,7 +577,7 @@ main(int argc, char *argv[])
snprintf(name, 127, "COPPOOL_%u", lo->lcore_id);
info->cop_pool = rte_crypto_op_pool_create(name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_MEMPOOL_OBJS,
+ cop_type, NB_MEMPOOL_OBJS,
NB_CACHE_OBJS, VHOST_CRYPTO_MAX_IV_LEN,
rte_lcore_to_socket_id(lo->lcore_id));
@@ -567,6 +591,8 @@ main(int argc, char *argv[])
qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
qp_conf.mp_session = info->sess_pool;
+ if (options.asymmetric_crypto)
+ qp_conf.mp_session = NULL;
for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
ret = rte_cryptodev_queue_pair_setup(info->cid, j,
--
2.25.1
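Usage note: with this change the sample application selects symmetric or
asymmetric session pools and op types at startup. An illustrative
invocation (binary name, cores, socket path and config tuple are
placeholders):

    ./dpdk-vhost_crypto -l 0,1 -- --config "(1,0,0)" \
        --socket-file 1,/tmp/vm0.crypto.sock --asymmetric-crypto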
* [v2 0/2] crypto/virtio: add packed ring support
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (17 preceding siblings ...)
2025-01-07 18:02 ` [v2 0/2] vhost: add RSA support Gowrishankar Muthukrishnan
@ 2025-01-07 18:08 ` Gowrishankar Muthukrishnan
2025-01-07 18:08 ` [v2 1/2] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
2025-01-07 18:08 ` [v2 2/2] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 0/4] crypto/virtio: add vDPA backend support Gowrishankar Muthukrishnan
19 siblings, 2 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:08 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
This series adds packed ring support to the virtio crypto PMD.
Depends-on: series-34291 ("crypto/virtio: add RSA support")
v2:
- split from v1 series.
Gowrishankar Muthukrishnan (2):
crypto/virtio: refactor queue operations
crypto/virtio: add packed ring support
drivers/crypto/virtio/meson.build | 1 +
drivers/crypto/virtio/virtio_crypto_algs.h | 2 +-
drivers/crypto/virtio/virtio_cryptodev.c | 698 +++++++++++----------
drivers/crypto/virtio/virtio_cryptodev.h | 13 +-
drivers/crypto/virtio/virtio_cvq.c | 228 +++++++
drivers/crypto/virtio/virtio_cvq.h | 33 +
drivers/crypto/virtio/virtio_pci.h | 31 +-
drivers/crypto/virtio/virtio_ring.h | 71 ++-
drivers/crypto/virtio/virtio_rxtx.c | 484 ++++++++++++--
drivers/crypto/virtio/virtio_rxtx.h | 13 +
drivers/crypto/virtio/virtqueue.c | 229 ++++++-
drivers/crypto/virtio/virtqueue.h | 221 ++++++-
12 files changed, 1617 insertions(+), 407 deletions(-)
create mode 100644 drivers/crypto/virtio/virtio_cvq.c
create mode 100644 drivers/crypto/virtio/virtio_cvq.h
create mode 100644 drivers/crypto/virtio/virtio_rxtx.h
--
2.25.1
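Background for the second patch: a packed virtqueue replaces the split
ring's separate descriptor, avail and used areas with a single
descriptor array written by both driver and device, tracked through
wrap counters and per-descriptor AVAIL/USED flag bits. A sketch of the
descriptor layout as specified by VIRTIO 1.1 follows; the macro names
mirror common usage, and the exact names in the PMD may differ:

    #include <stdint.h>

    /* VIRTIO 1.1 packed virtqueue descriptor (16 bytes). */
    struct vring_packed_desc {
        uint64_t addr;   /* buffer guest-physical address */
        uint32_t len;    /* buffer length in bytes */
        uint16_t id;     /* buffer id, echoed back by the device */
        uint16_t flags;  /* VRING_DESC_F_* plus the two bits below */
    };

    /*
     * A descriptor is available when its AVAIL bit matches the
     * driver's wrap counter and its USED bit does not.
     */
    #define VRING_PACKED_DESC_F_AVAIL   (1 << 7)
    #define VRING_PACKED_DESC_F_USED    (1 << 15)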
* [v2 1/2] crypto/virtio: refactor queue operations
2025-01-07 18:08 ` [v2 0/2] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
@ 2025-01-07 18:08 ` Gowrishankar Muthukrishnan
2025-01-07 18:08 ` [v2 2/2] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
1 sibling, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:08 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
Move the existing control queue operations into a common place
that can be shared with other virtio device types.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
Depends-on: series-34291 ("crypto/virtio: add RSA support")
drivers/crypto/virtio/meson.build | 1 +
drivers/crypto/virtio/virtio_crypto_algs.h | 2 +-
drivers/crypto/virtio/virtio_cryptodev.c | 573 +++++++++------------
drivers/crypto/virtio/virtio_cvq.c | 129 +++++
drivers/crypto/virtio/virtio_cvq.h | 33 ++
drivers/crypto/virtio/virtio_pci.h | 6 +-
drivers/crypto/virtio/virtio_ring.h | 12 +-
drivers/crypto/virtio/virtio_rxtx.c | 42 +-
drivers/crypto/virtio/virtio_rxtx.h | 13 +
drivers/crypto/virtio/virtqueue.c | 191 ++++++-
drivers/crypto/virtio/virtqueue.h | 89 +++-
11 files changed, 705 insertions(+), 386 deletions(-)
create mode 100644 drivers/crypto/virtio/virtio_cvq.c
create mode 100644 drivers/crypto/virtio/virtio_cvq.h
create mode 100644 drivers/crypto/virtio/virtio_rxtx.h
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index 45533c9b89..d2c3b3ad07 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -11,6 +11,7 @@ includes += include_directories('../../../lib/vhost')
deps += 'bus_pci'
sources = files(
'virtio_cryptodev.c',
+ 'virtio_cvq.c',
'virtio_pci.c',
'virtio_rxtx.c',
'virtqueue.c',
diff --git a/drivers/crypto/virtio/virtio_crypto_algs.h b/drivers/crypto/virtio/virtio_crypto_algs.h
index 4c44af3733..3824017ca5 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.h
+++ b/drivers/crypto/virtio/virtio_crypto_algs.h
@@ -22,7 +22,7 @@ struct virtio_crypto_session {
phys_addr_t phys_addr;
} aad;
- struct virtio_crypto_op_ctrl_req ctrl;
+ struct virtio_pmd_ctrl ctrl;
};
#endif /* _VIRTIO_CRYPTO_ALGS_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index afeab5a816..9a11cbe90a 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -64,213 +64,6 @@ static const struct rte_cryptodev_capabilities virtio_capabilities[] = {
uint8_t cryptodev_virtio_driver_id;
-#define NUM_ENTRY_SYM_CREATE_SESSION 4
-
-static int
-virtio_crypto_send_command(struct virtqueue *vq,
- struct virtio_crypto_op_ctrl_req *ctrl, uint8_t *cipher_key,
- uint8_t *auth_key, struct virtio_crypto_session *session)
-{
- uint8_t idx = 0;
- uint8_t needed = 1;
- uint32_t head = 0;
- uint32_t len_cipher_key = 0;
- uint32_t len_auth_key = 0;
- uint32_t len_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
- uint32_t len_session_input = sizeof(struct virtio_crypto_session_input);
- uint32_t len_total = 0;
- uint32_t input_offset = 0;
- void *virt_addr_started = NULL;
- phys_addr_t phys_addr_started;
- struct vring_desc *desc;
- uint32_t desc_offset;
- struct virtio_crypto_session_input *input;
- int ret;
-
- PMD_INIT_FUNC_TRACE();
-
- if (session == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("session is NULL.");
- return -EINVAL;
- }
- /* cipher only is supported, it is available if auth_key is NULL */
- if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER && !cipher_key) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("cipher key is NULL.");
- return -EINVAL;
- }
-
- head = vq->vq_desc_head_idx;
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx = %d, vq = %p",
- head, vq);
-
- if (vq->vq_free_cnt < needed) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Not enough entry");
- return -ENOSPC;
- }
-
- /* calculate the length of cipher key */
- if (cipher_key) {
- if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER) {
- switch (ctrl->u.sym_create_session.op_type) {
- case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- len_cipher_key = ctrl->u.sym_create_session.u.cipher.para.keylen;
- break;
- case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- len_cipher_key =
- ctrl->u.sym_create_session.u.chain.para.cipher_param.keylen;
- break;
- default:
- VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
- return -EINVAL;
- }
- } else if (session->ctrl.header.algo == VIRTIO_CRYPTO_AKCIPHER_RSA) {
- len_cipher_key = ctrl->u.akcipher_create_session.para.keylen;
- } else {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid crypto service for cipher key");
- return -EINVAL;
- }
- }
-
- /* calculate the length of auth key */
- if (auth_key) {
- len_auth_key =
- ctrl->u.sym_create_session.u.chain.para.u.mac_param
- .auth_key_len;
- }
-
- /*
- * malloc memory to store indirect vring_desc entries, including
- * ctrl request, cipher key, auth key, session input and desc vring
- */
- desc_offset = len_ctrl_req + len_cipher_key + len_auth_key
- + len_session_input;
- virt_addr_started = rte_malloc(NULL,
- desc_offset + NUM_ENTRY_SYM_CREATE_SESSION
- * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
- if (virt_addr_started == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap memory");
- return -ENOSPC;
- }
- phys_addr_started = rte_malloc_virt2iova(virt_addr_started);
-
- /* address to store indirect vring desc entries */
- desc = (struct vring_desc *)
- ((uint8_t *)virt_addr_started + desc_offset);
-
- /* ctrl req part */
- memcpy(virt_addr_started, ctrl, len_ctrl_req);
- desc[idx].addr = phys_addr_started;
- desc[idx].len = len_ctrl_req;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_ctrl_req;
- input_offset += len_ctrl_req;
-
- /* cipher key part */
- if (len_cipher_key > 0) {
- memcpy((uint8_t *)virt_addr_started + len_total,
- cipher_key, len_cipher_key);
-
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_cipher_key;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_cipher_key;
- input_offset += len_cipher_key;
- }
-
- /* auth key part */
- if (len_auth_key > 0) {
- memcpy((uint8_t *)virt_addr_started + len_total,
- auth_key, len_auth_key);
-
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_auth_key;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_auth_key;
- input_offset += len_auth_key;
- }
-
- /* input part */
- input = (struct virtio_crypto_session_input *)
- ((uint8_t *)virt_addr_started + input_offset);
- input->status = VIRTIO_CRYPTO_ERR;
- input->session_id = ~0ULL;
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_session_input;
- desc[idx].flags = VRING_DESC_F_WRITE;
- idx++;
-
- /* use a single desc entry */
- vq->vq_ring.desc[head].addr = phys_addr_started + desc_offset;
- vq->vq_ring.desc[head].len = idx * sizeof(struct vring_desc);
- vq->vq_ring.desc[head].flags = VRING_DESC_F_INDIRECT;
- vq->vq_free_cnt--;
-
- vq->vq_desc_head_idx = vq->vq_ring.desc[head].next;
-
- vq_update_avail_ring(vq, head);
- vq_update_avail_idx(vq);
-
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_queue_index = %d",
- vq->vq_queue_index);
-
- virtqueue_notify(vq);
-
- rte_rmb();
- while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
- rte_rmb();
- usleep(100);
- }
-
- while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
- uint32_t idx, desc_idx, used_idx;
- struct vring_used_elem *uep;
-
- used_idx = (uint32_t)(vq->vq_used_cons_idx
- & (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
- idx = (uint32_t) uep->id;
- desc_idx = idx;
-
- while (vq->vq_ring.desc[desc_idx].flags & VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_ring.desc[desc_idx].next;
- vq->vq_free_cnt++;
- }
-
- vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
- vq->vq_desc_head_idx = idx;
-
- vq->vq_used_cons_idx++;
- vq->vq_free_cnt++;
- }
-
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_free_cnt=%d", vq->vq_free_cnt);
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx=%d", vq->vq_desc_head_idx);
-
- /* get the result */
- if (input->status != VIRTIO_CRYPTO_OK) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
- "status=%u, session_id=%" PRIu64 "",
- input->status, input->session_id);
- rte_free(virt_addr_started);
- ret = -1;
- } else {
- session->session_id = input->session_id;
-
- VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
- "session_id=%" PRIu64 "", input->session_id);
- rte_free(virt_addr_started);
- ret = 0;
- }
-
- return ret;
-}
-
void
virtio_crypto_queue_release(struct virtqueue *vq)
{
@@ -283,6 +76,7 @@ virtio_crypto_queue_release(struct virtqueue *vq)
/* Select and deactivate the queue */
VTPCI_OPS(hw)->del_queue(hw, vq);
+ hw->vqs[vq->vq_queue_index] = NULL;
rte_memzone_free(vq->mz);
rte_mempool_free(vq->mpool);
rte_free(vq);
@@ -301,8 +95,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
{
char vq_name[VIRTQUEUE_MAX_NAME_SZ];
char mpool_name[MPOOL_MAX_NAME_SZ];
- const struct rte_memzone *mz;
- unsigned int vq_size, size;
+ unsigned int vq_size;
struct virtio_crypto_hw *hw = dev->data->dev_private;
struct virtqueue *vq = NULL;
uint32_t i = 0;
@@ -341,16 +134,26 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
"dev%d_controlqueue_mpool",
dev->data->dev_id);
}
- size = RTE_ALIGN_CEIL(sizeof(*vq) +
- vq_size * sizeof(struct vq_desc_extra),
- RTE_CACHE_LINE_SIZE);
- vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
- socket_id);
+
+ /*
+ * Using part of the vring entries is permitted, but the maximum
+ * is vq_size
+ */
+ if (nb_desc == 0 || nb_desc > vq_size)
+ nb_desc = vq_size;
+
+ if (hw->vqs[vtpci_queue_idx])
+ vq = hw->vqs[vtpci_queue_idx];
+ else
+ vq = virtcrypto_queue_alloc(hw, vtpci_queue_idx, nb_desc,
+ socket_id, vq_name);
if (vq == NULL) {
VIRTIO_CRYPTO_INIT_LOG_ERR("Can not allocate virtqueue");
return -ENOMEM;
}
+ hw->vqs[vtpci_queue_idx] = vq;
+
if (queue_type == VTCRYPTO_DATAQ) {
/* pre-allocate a mempool and use it in the data plane to
* improve performance
@@ -358,7 +161,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
vq->mpool = rte_mempool_lookup(mpool_name);
if (vq->mpool == NULL)
vq->mpool = rte_mempool_create(mpool_name,
- vq_size,
+ nb_desc,
sizeof(struct virtio_crypto_op_cookie),
RTE_CACHE_LINE_SIZE, 0,
NULL, NULL, NULL, NULL, socket_id,
@@ -368,7 +171,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
"Cannot create mempool");
goto mpool_create_err;
}
- for (i = 0; i < vq_size; i++) {
+ for (i = 0; i < nb_desc; i++) {
vq->vq_descx[i].cookie =
rte_zmalloc("crypto PMD op cookie pointer",
sizeof(struct virtio_crypto_op_cookie),
@@ -381,67 +184,10 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
}
}
- vq->hw = hw;
- vq->dev_id = dev->data->dev_id;
- vq->vq_queue_index = vtpci_queue_idx;
- vq->vq_nentries = vq_size;
-
- /*
- * Using part of the vring entries is permitted, but the maximum
- * is vq_size
- */
- if (nb_desc == 0 || nb_desc > vq_size)
- nb_desc = vq_size;
- vq->vq_free_cnt = nb_desc;
-
- /*
- * Reserve a memzone for vring elements
- */
- size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
- vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
- VIRTIO_CRYPTO_INIT_LOG_DBG("%s vring_size: %d, rounded_vring_size: %d",
- (queue_type == VTCRYPTO_DATAQ) ? "dataq" : "ctrlq",
- size, vq->vq_ring_size);
-
- mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
- socket_id, 0, VIRTIO_PCI_VRING_ALIGN);
- if (mz == NULL) {
- if (rte_errno == EEXIST)
- mz = rte_memzone_lookup(vq_name);
- if (mz == NULL) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("not enough memory");
- goto mz_reserve_err;
- }
- }
-
- /*
- * Virtio PCI device VIRTIO_PCI_QUEUE_PF register is 32bit,
- * and only accepts 32 bit page frame number.
- * Check if the allocated physical memory exceeds 16TB.
- */
- if ((mz->iova + vq->vq_ring_size - 1)
- >> (VIRTIO_PCI_QUEUE_ADDR_SHIFT + 32)) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("vring address shouldn't be "
- "above 16TB!");
- goto vring_addr_err;
- }
-
- memset(mz->addr, 0, sizeof(mz->len));
- vq->mz = mz;
- vq->vq_ring_mem = mz->iova;
- vq->vq_ring_virt_mem = mz->addr;
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_ring_mem(physical): 0x%"PRIx64,
- (uint64_t)mz->iova);
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_ring_virt_mem: 0x%"PRIx64,
- (uint64_t)(uintptr_t)mz->addr);
-
*pvq = vq;
return 0;
-vring_addr_err:
- rte_memzone_free(mz);
-mz_reserve_err:
cookie_alloc_err:
rte_mempool_free(vq->mpool);
if (i != 0) {
@@ -453,31 +199,6 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
return -ENOMEM;
}
-static int
-virtio_crypto_ctrlq_setup(struct rte_cryptodev *dev, uint16_t queue_idx)
-{
- int ret;
- struct virtqueue *vq;
- struct virtio_crypto_hw *hw = dev->data->dev_private;
-
- /* if virtio device has started, do not touch the virtqueues */
- if (dev->data->dev_started)
- return 0;
-
- PMD_INIT_FUNC_TRACE();
-
- ret = virtio_crypto_queue_setup(dev, VTCRYPTO_CTRLQ, queue_idx,
- 0, SOCKET_ID_ANY, &vq);
- if (ret < 0) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("control vq initialization failed");
- return ret;
- }
-
- hw->cvq = vq;
-
- return 0;
-}
-
static void
virtio_crypto_free_queues(struct rte_cryptodev *dev)
{
@@ -486,10 +207,6 @@ virtio_crypto_free_queues(struct rte_cryptodev *dev)
PMD_INIT_FUNC_TRACE();
- /* control queue release */
- virtio_crypto_queue_release(hw->cvq);
- hw->cvq = NULL;
-
/* data queue release */
for (i = 0; i < hw->max_dataqueues; i++) {
virtio_crypto_queue_release(dev->data->queue_pairs[i]);
@@ -500,6 +217,15 @@ virtio_crypto_free_queues(struct rte_cryptodev *dev)
static int
-virtio_crypto_dev_close(struct rte_cryptodev *dev __rte_unused)
+virtio_crypto_dev_close(struct rte_cryptodev *dev)
{
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* control queue release */
+ if (hw->cvq)
+ virtio_crypto_queue_release(virtcrypto_cq_to_vq(hw->cvq));
+
+ hw->cvq = NULL;
return 0;
}
@@ -680,6 +406,99 @@ virtio_negotiate_features(struct virtio_crypto_hw *hw, uint64_t req_features)
return 0;
}
+static void
+virtio_control_queue_notify(struct virtqueue *vq, __rte_unused void *cookie)
+{
+ virtqueue_notify(vq);
+}
+
+static int
+virtio_crypto_init_queue(struct rte_cryptodev *dev, uint16_t queue_idx)
+{
+ char vq_name[VIRTQUEUE_MAX_NAME_SZ];
+ unsigned int vq_size;
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+ struct virtqueue *vq;
+ int queue_type = virtio_get_queue_type(hw, queue_idx);
+ int ret;
+ int numa_node = dev->device->numa_node;
+
+ PMD_INIT_LOG(INFO, "setting up queue: %u on NUMA node %d",
+ queue_idx, numa_node);
+
+ /*
+ * Read the virtqueue size from the Queue Size field
+ * Always a power of 2; if 0, the virtqueue does not exist
+ */
+ vq_size = VTPCI_OPS(hw)->get_queue_num(hw, queue_idx);
+ PMD_INIT_LOG(DEBUG, "vq_size: %u", vq_size);
+ if (vq_size == 0) {
+ PMD_INIT_LOG(ERR, "virtqueue does not exist");
+ return -EINVAL;
+ }
+
+ if (!rte_is_power_of_2(vq_size)) {
+ PMD_INIT_LOG(ERR, "split virtqueue size is not power of 2");
+ return -EINVAL;
+ }
+
+ snprintf(vq_name, sizeof(vq_name), "dev%d_vq%d", dev->data->dev_id, queue_idx);
+
+ vq = virtcrypto_queue_alloc(hw, queue_idx, vq_size, numa_node, vq_name);
+ if (!vq) {
+ PMD_INIT_LOG(ERR, "virtqueue init failed");
+ return -ENOMEM;
+ }
+
+ hw->vqs[queue_idx] = vq;
+
+ if (queue_type == VTCRYPTO_CTRLQ) {
+ hw->cvq = &vq->cq;
+ vq->cq.notify_queue = &virtio_control_queue_notify;
+ }
+
+ if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+ PMD_INIT_LOG(ERR, "setup_queue failed");
+ ret = -EINVAL;
+ goto clean_vq;
+ }
+
+ return 0;
+
+clean_vq:
+ if (queue_type == VTCRYPTO_CTRLQ)
+ hw->cvq = NULL;
+ virtcrypto_queue_free(vq);
+ hw->vqs[queue_idx] = NULL;
+
+ return ret;
+}
+
+static int
+virtio_crypto_alloc_queues(struct rte_cryptodev *dev)
+{
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->max_dataqueues + 1;
+ uint16_t i;
+ int ret;
+
+ hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "failed to allocate vqs");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ ret = virtio_crypto_init_queue(dev, i);
+ if (ret < 0) {
+ virtio_crypto_free_queues(dev);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
/* reset device and renegotiate features if needed */
static int
virtio_crypto_init_device(struct rte_cryptodev *cryptodev,
@@ -805,8 +624,6 @@ static int
virtio_crypto_dev_configure(struct rte_cryptodev *cryptodev,
struct rte_cryptodev_config *config __rte_unused)
{
- struct virtio_crypto_hw *hw = cryptodev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
if (virtio_crypto_init_device(cryptodev,
@@ -817,10 +634,11 @@ virtio_crypto_dev_configure(struct rte_cryptodev *cryptodev,
* [0, 1, ... ,(config->max_dataqueues - 1)] are data queues
* config->max_dataqueues is the control queue
*/
- if (virtio_crypto_ctrlq_setup(cryptodev, hw->max_dataqueues) < 0) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("control queue setup error");
+ if (virtio_crypto_alloc_queues(cryptodev) < 0) {
+ VIRTIO_CRYPTO_DRV_LOG_ERR("failed to create virtqueues");
return -1;
}
+
virtio_crypto_ctrlq_start(cryptodev);
return 0;
@@ -955,7 +773,7 @@ virtio_crypto_clear_session(
uint64_t session_id = ctrl->u.destroy_session.session_id;
hw = dev->data->dev_private;
- vq = hw->cvq;
+ vq = virtcrypto_cq_to_vq(hw->cvq);
VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
"vq = %p", vq->vq_desc_head_idx, vq);
@@ -990,14 +808,14 @@ virtio_crypto_clear_session(
/* use only a single desc entry */
head = vq->vq_desc_head_idx;
- vq->vq_ring.desc[head].flags = VRING_DESC_F_INDIRECT;
- vq->vq_ring.desc[head].addr = malloc_phys_addr + desc_offset;
- vq->vq_ring.desc[head].len
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_INDIRECT;
+ vq->vq_split.ring.desc[head].addr = malloc_phys_addr + desc_offset;
+ vq->vq_split.ring.desc[head].len
= NUM_ENTRY_SYM_CLEAR_SESSION
* sizeof(struct vring_desc);
vq->vq_free_cnt -= needed;
- vq->vq_desc_head_idx = vq->vq_ring.desc[head].next;
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[head].next;
vq_update_avail_ring(vq, head);
vq_update_avail_idx(vq);
@@ -1008,27 +826,27 @@ virtio_crypto_clear_session(
virtqueue_notify(vq);
rte_rmb();
- while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
+ while (vq->vq_used_cons_idx == vq->vq_split.ring.used->idx) {
rte_rmb();
usleep(100);
}
- while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
+ while (vq->vq_used_cons_idx != vq->vq_split.ring.used->idx) {
uint32_t idx, desc_idx, used_idx;
struct vring_used_elem *uep;
used_idx = (uint32_t)(vq->vq_used_cons_idx
& (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
idx = (uint32_t) uep->id;
desc_idx = idx;
- while (vq->vq_ring.desc[desc_idx].flags
+ while (vq->vq_split.ring.desc[desc_idx].flags
& VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_ring.desc[desc_idx].next;
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
vq->vq_free_cnt++;
}
- vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
vq->vq_desc_head_idx = idx;
vq->vq_used_cons_idx++;
vq->vq_free_cnt++;
@@ -1382,14 +1200,23 @@ virtio_crypto_sym_configure_session(
int ret;
struct virtio_crypto_session *session;
struct virtio_crypto_op_ctrl_req *ctrl_req;
+ struct virtio_crypto_session_input *input;
enum virtio_crypto_cmd_id cmd_id;
uint8_t cipher_key_data[VIRTIO_CRYPTO_MAX_KEY_SIZE] = {0};
uint8_t auth_key_data[VIRTIO_CRYPTO_MAX_KEY_SIZE] = {0};
struct virtio_crypto_hw *hw;
- struct virtqueue *control_vq;
+ struct virtio_pmd_ctrl *ctrl;
+ struct rte_crypto_cipher_xform *cipher_xform = NULL;
+ int dlen[2], dnum;
PMD_INIT_FUNC_TRACE();
+ cipher_xform = virtio_crypto_get_cipher_xform(xform);
+ if (cipher_xform == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("No cipher xform found");
+ return -1;
+ }
+
ret = virtio_crypto_check_sym_configure_session_paras(dev, xform,
sess);
if (ret < 0) {
@@ -1398,13 +1225,23 @@ virtio_crypto_sym_configure_session(
}
session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
memset(session, 0, sizeof(struct virtio_crypto_session));
- ctrl_req = &session->ctrl;
+ ctrl = &session->ctrl;
+ ctrl_req = &ctrl->hdr;
ctrl_req->header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
/* FIXME: support multiqueue */
ctrl_req->header.queue_id = 0;
hw = dev->data->dev_private;
- control_vq = hw->cvq;
+
+ switch (cipher_xform->algo) {
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ ctrl_req->header.algo = VIRTIO_CRYPTO_CIPHER_AES_CBC;
+ break;
+ default:
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Crypto: Unsupported "
+ "Cipher alg %u", cipher_xform->algo);
+ return -1;
+ }
cmd_id = virtio_crypto_get_chain_order(xform);
if (cmd_id == VIRTIO_CRYPTO_CMD_CIPHER_HASH)
@@ -1416,7 +1253,13 @@ virtio_crypto_sym_configure_session(
switch (cmd_id) {
case VIRTIO_CRYPTO_CMD_CIPHER_HASH:
- case VIRTIO_CRYPTO_CMD_HASH_CIPHER:
+ case VIRTIO_CRYPTO_CMD_HASH_CIPHER: {
+ struct rte_crypto_auth_xform *auth_xform = NULL;
+ struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+ cipher_xform = virtio_crypto_get_cipher_xform(xform);
+ auth_xform = virtio_crypto_get_auth_xform(xform);
+
ctrl_req->u.sym_create_session.op_type
= VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING;
@@ -1427,15 +1270,19 @@ virtio_crypto_sym_configure_session(
"padding sym op ctrl req failed");
goto error_out;
}
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- cipher_key_data, auth_key_data, session);
- if (ret < 0) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR(
- "create session failed: %d", ret);
- goto error_out;
- }
+
+ dlen[0] = cipher_xform->key.length;
+ memcpy(ctrl->data, cipher_key_data, dlen[0]);
+ dlen[1] = auth_xform->key.length;
+ memcpy(ctrl->data + dlen[0], auth_key_data, dlen[1]);
+ dnum = 2;
break;
- case VIRTIO_CRYPTO_CMD_CIPHER:
+ }
+ case VIRTIO_CRYPTO_CMD_CIPHER: {
+ struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+ cipher_xform = virtio_crypto_get_cipher_xform(xform);
+
ctrl_req->u.sym_create_session.op_type
= VIRTIO_CRYPTO_SYM_OP_CIPHER;
ret = virtio_crypto_sym_pad_op_ctrl_req(ctrl_req, xform,
@@ -1445,21 +1292,42 @@ virtio_crypto_sym_configure_session(
"padding sym op ctrl req failed");
goto error_out;
}
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- cipher_key_data, NULL, session);
- if (ret < 0) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR(
- "create session failed: %d", ret);
- goto error_out;
- }
+
+ dlen[0] = cipher_xform->key.length;
+ memcpy(ctrl->data, cipher_key_data, dlen[0]);
+ dnum = 1;
break;
+ }
default:
VIRTIO_CRYPTO_SESSION_LOG_ERR(
"Unsupported operation chain order parameter");
goto error_out;
}
- return 0;
+ input = &ctrl->input;
+ input->status = VIRTIO_CRYPTO_ERR;
+ input->session_id = ~0ULL;
+
+ ret = virtio_crypto_send_command(hw->cvq, ctrl, dlen, dnum);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
+ goto error_out;
+ }
+
+ ctrl = hw->cvq->hdr_mz->addr;
+ input = &ctrl->input;
+ if (input->status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
+ "status=%u, session_id=%" PRIu64 "",
+ input->status, input->session_id);
+ goto error_out;
+ } else {
+ session->session_id = input->session_id;
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
+ "session_id=%" PRIu64 "", input->session_id);
+ }
+
+ return 0;
error_out:
return -1;
}
@@ -1575,10 +1443,12 @@ virtio_crypto_asym_configure_session(
{
struct virtio_crypto_akcipher_session_para *para;
struct virtio_crypto_op_ctrl_req *ctrl_req;
+ struct virtio_crypto_session_input *input;
struct virtio_crypto_session *session;
struct virtio_crypto_hw *hw;
- struct virtqueue *control_vq;
+ struct virtio_pmd_ctrl *ctrl;
uint8_t *key = NULL;
+ int dlen[1];
int ret;
PMD_INIT_FUNC_TRACE();
@@ -1592,7 +1462,8 @@ virtio_crypto_asym_configure_session(
session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
memset(session, 0, sizeof(struct virtio_crypto_session));
- ctrl_req = &session->ctrl;
+ ctrl = &session->ctrl;
+ ctrl_req = &ctrl->hdr;
ctrl_req->header.opcode = VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION;
/* FIXME: support multiqueue */
ctrl_req->header.queue_id = 0;
@@ -1648,15 +1519,33 @@ virtio_crypto_asym_configure_session(
para->algo = VIRTIO_CRYPTO_NO_AKCIPHER;
}
+ dlen[0] = ret;
+ memcpy(ctrl->data, key, dlen[0]);
+
+ input = &ctrl->input;
+ input->status = VIRTIO_CRYPTO_ERR;
+ input->session_id = ~0ULL;
+
hw = dev->data->dev_private;
- control_vq = hw->cvq;
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- key, NULL, session);
+ ret = virtio_crypto_send_command(hw->cvq, ctrl, dlen, 1);
if (ret < 0) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
goto error_out;
}
+ ctrl = hw->cvq->hdr_mz->addr;
+ input = &ctrl->input;
+ if (input->status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
+ "status=%u, session_id=%" PRIu64 "",
+ input->status, input->session_id);
+ goto error_out;
+ } else {
+ session->session_id = input->session_id;
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
+ "session_id=%" PRIu64 "", input->session_id);
+ }
+
return 0;
error_out:
return -1;
diff --git a/drivers/crypto/virtio/virtio_cvq.c b/drivers/crypto/virtio/virtio_cvq.c
new file mode 100644
index 0000000000..91c6b5a9f2
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_cvq.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_errno.h>
+
+#include "virtio_cvq.h"
+#include "virtqueue.h"
+
+static struct virtio_pmd_ctrl *
+virtio_send_command(struct virtcrypto_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq = virtcrypto_cq_to_vq(cvq);
+ uint32_t head, i;
+ int k, sum = 0;
+
+ head = vq->vq_desc_head_idx;
+
+ /*
+ * Format is enforced in qemu code:
+ * One TX packet for header;
+ * At least one TX packet per argument;
+ * One RX packet for ACK.
+ */
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[head].addr = cvq->hdr_mem;
+ vq->vq_split.ring.desc[head].len = sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_free_cnt--;
+ i = vq->vq_split.ring.desc[head].next;
+
+ for (k = 0; k < dnum; k++) {
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[i].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req)
+ + sizeof(ctrl->input) + sizeof(uint8_t) * sum;
+ vq->vq_split.ring.desc[i].len = dlen[k];
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ i = vq->vq_split.ring.desc[i].next;
+ }
+
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE;
+ vq->vq_split.ring.desc[i].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_split.ring.desc[i].len = sizeof(ctrl->input);
+ vq->vq_free_cnt--;
+
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[i].next;
+
+ vq_update_avail_ring(vq, head);
+ vq_update_avail_idx(vq);
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
+
+ cvq->notify_queue(vq, cvq->notify_cookie);
+
+ while (virtqueue_nused(vq) == 0)
+ usleep(100);
+
+ while (virtqueue_nused(vq)) {
+ uint32_t idx, desc_idx, used_idx;
+ struct vring_used_elem *uep;
+
+ used_idx = (uint32_t)(vq->vq_used_cons_idx
+ & (vq->vq_nentries - 1));
+ uep = &vq->vq_split.ring.used->ring[used_idx];
+ idx = (uint32_t)uep->id;
+ desc_idx = idx;
+
+ while (vq->vq_split.ring.desc[desc_idx].flags &
+ VRING_DESC_F_NEXT) {
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
+ vq->vq_free_cnt++;
+ }
+
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_desc_head_idx = idx;
+
+ vq->vq_used_cons_idx++;
+ vq->vq_free_cnt++;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d vq->vq_desc_head_idx=%d",
+ vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+ result = cvq->hdr_mz->addr;
+ return result;
+}
+
+int
+virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ uint8_t status = ~0;
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq;
+
+ ctrl->input.status = status;
+
+ if (!cvq) {
+ PMD_INIT_LOG(ERR, "Control queue is not supported.");
+ return -1;
+ }
+
+ rte_spinlock_lock(&cvq->lock);
+ vq = virtcrypto_cq_to_vq(cvq);
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, "
+ "vq->hw->cvq = %p vq = %p",
+ vq->vq_desc_head_idx, status, vq->hw->cvq, vq);
+
+ if (vq->vq_free_cnt < dnum + 2 || dnum < 1) {
+ rte_spinlock_unlock(&cvq->lock);
+ return -1;
+ }
+
+ memcpy(cvq->hdr_mz->addr, ctrl, sizeof(struct virtio_pmd_ctrl));
+ result = virtio_send_command(cvq, ctrl, dlen, dnum);
+
+ rte_spinlock_unlock(&cvq->lock);
+ return result->input.status;
+}
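
For readers tracing the descriptor address arithmetic above: the control-queue
header memzone (cvq->hdr_mem) is laid out to match struct virtio_pmd_ctrl, so
each of the three descriptor regions sits at a fixed offset. A sketch of that
layout, derived from virtio_send_command() above (annotation only, not part of
the patch):

/*
 * cvq->hdr_mem layout, mirroring struct virtio_pmd_ctrl:
 *
 *   +0                             hdr:   virtio_crypto_op_ctrl_req   (device-readable)
 *   +sizeof(hdr)                   input: virtio_crypto_session_input (device-writable ACK)
 *   +sizeof(hdr) + sizeof(input)   data:  dnum chunks of dlen[k] bytes (keys, parameters)
 *
 * The split ring chains one descriptor per region: the header first,
 * then dnum data descriptors, then the write-only input/ACK descriptor.
 */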
diff --git a/drivers/crypto/virtio/virtio_cvq.h b/drivers/crypto/virtio/virtio_cvq.h
new file mode 100644
index 0000000000..a8824a65de
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_cvq.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#ifndef _VIRTIO_CVQ_H_
+#define _VIRTIO_CVQ_H_
+
+#include <rte_spinlock.h>
+#include <virtio_crypto.h>
+
+struct virtqueue;
+
+struct virtcrypto_ctl {
+ const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
+ rte_iova_t hdr_mem; /**< IOVA of the hdr memzone. */
+ rte_spinlock_t lock; /**< spinlock for control queue. */
+ void (*notify_queue)(struct virtqueue *vq, void *cookie); /**< notify ops. */
+ void *notify_cookie; /**< cookie for notify ops */
+};
+
+#define VIRTIO_MAX_CTRL_DATA 2048
+
+struct virtio_pmd_ctrl {
+ struct virtio_crypto_op_ctrl_req hdr;
+ struct virtio_crypto_session_input input;
+ uint8_t data[VIRTIO_MAX_CTRL_DATA];
+};
+
+int
+virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num);
+
+#endif /* _VIRTIO_CVQ_H_ */
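
A minimal caller-side sketch of the new control-queue API, assuming the
definitions above are in scope; example_create_session() and the key buffer
are illustrative names, and error handling is elided:

/* Illustrative only: send one control request carrying a single key
 * blob and return the device-written status (VIRTIO_CRYPTO_OK == 0).
 */
static int
example_create_session(struct virtcrypto_ctl *cvq, const uint8_t *key, int key_len)
{
	struct virtio_pmd_ctrl ctrl;
	int dlen[1];

	memset(&ctrl, 0, sizeof(ctrl));
	ctrl.hdr.header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;

	dlen[0] = key_len;		/* must fit in VIRTIO_MAX_CTRL_DATA */
	memcpy(ctrl.data, key, dlen[0]);

	return virtio_crypto_send_command(cvq, &ctrl, dlen, 1);
}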
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 41949c3d13..7e94c6a3c5 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -176,8 +176,7 @@ struct virtio_pci_ops {
};
struct virtio_crypto_hw {
- /* control queue */
- struct virtqueue *cvq;
+ struct virtqueue **vqs;
uint16_t dev_id;
uint16_t max_dataqueues;
uint64_t req_guest_features;
@@ -190,6 +189,9 @@ struct virtio_crypto_hw {
struct virtio_pci_common_cfg *common_cfg;
struct virtio_crypto_config *dev_cfg;
const struct rte_cryptodev_capabilities *virtio_dev_capabilities;
+ uint8_t weak_barriers;
+ struct virtcrypto_ctl *cvq;
+ bool use_va;
};
/*
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index 55839279fd..e5b0ad74d2 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -59,6 +59,7 @@ struct vring_used {
struct vring {
unsigned int num;
+ rte_iova_t desc_iova;
struct vring_desc *desc;
struct vring_avail *avail;
struct vring_used *used;
@@ -111,17 +112,24 @@ vring_size(unsigned int num, unsigned long align)
}
static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
- unsigned long align)
+vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
{
vr->num = num;
vr->desc = (struct vring_desc *) p;
+ vr->desc_iova = iova;
vr->avail = (struct vring_avail *) (p +
num * sizeof(struct vring_desc));
vr->used = (void *)
RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
}
+static inline void
+vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
+{
+ vring_init_split(vr, p, 0, align, num);
+}
+
/*
* The following is used with VIRTIO_RING_F_EVENT_IDX.
* Assuming a given event_idx value from the other size, if we have
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index c456dc327e..0e8a716917 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -14,13 +14,13 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
struct vq_desc_extra *dxp;
uint16_t desc_idx_last = desc_idx;
- dp = &vq->vq_ring.desc[desc_idx];
+ dp = &vq->vq_split.ring.desc[desc_idx];
dxp = &vq->vq_descx[desc_idx];
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt + dxp->ndescs);
if ((dp->flags & VRING_DESC_F_INDIRECT) == 0) {
while (dp->flags & VRING_DESC_F_NEXT) {
desc_idx_last = dp->next;
- dp = &vq->vq_ring.desc[dp->next];
+ dp = &vq->vq_split.ring.desc[dp->next];
}
}
dxp->ndescs = 0;
@@ -33,7 +33,7 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END) {
vq->vq_desc_head_idx = desc_idx;
} else {
- dp_tail = &vq->vq_ring.desc[vq->vq_desc_tail_idx];
+ dp_tail = &vq->vq_split.ring.desc[vq->vq_desc_tail_idx];
dp_tail->next = desc_idx;
}
@@ -56,7 +56,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq,
for (i = 0; i < num ; i++) {
used_idx = (uint16_t)(vq->vq_used_cons_idx
& (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
desc_idx = (uint16_t)uep->id;
cop = (struct rte_crypto_op *)
vq->vq_descx[desc_idx].crypto_op;
@@ -115,7 +115,7 @@ virtqueue_crypto_sym_pkt_header_arrange(
{
struct rte_crypto_sym_op *sym_op = cop->sym;
struct virtio_crypto_op_data_req *req_data = data;
- struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl.hdr;
struct virtio_crypto_sym_create_session_req *sym_sess_req =
&ctrl->u.sym_create_session;
struct virtio_crypto_alg_chain_session_para *chain_para =
@@ -304,7 +304,7 @@ virtqueue_crypto_sym_enqueue_xmit(
desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
/* indirect vring: digest result */
- para = &(session->ctrl.u.sym_create_session.u.chain.para);
+ para = &(session->ctrl.hdr.u.sym_create_session.u.chain.para);
if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN)
hash_result_len = para->u.hash_param.hash_result_len;
if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)
@@ -327,7 +327,7 @@ virtqueue_crypto_sym_enqueue_xmit(
dxp->ndescs = needed;
/* use a single buffer */
- start_dp = txvq->vq_ring.desc;
+ start_dp = txvq->vq_split.ring.desc;
start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
indirect_vring_addr_offset;
start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
@@ -351,7 +351,7 @@ virtqueue_crypto_asym_pkt_header_arrange(
{
struct rte_crypto_asym_op *asym_op = cop->asym;
struct virtio_crypto_op_data_req *req_data = data;
- struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl.hdr;
req_data->header.session_id = session->session_id;
@@ -517,7 +517,7 @@ virtqueue_crypto_asym_enqueue_xmit(
dxp->ndescs = needed;
/* use a single buffer */
- start_dp = txvq->vq_ring.desc;
+ start_dp = txvq->vq_split.ring.desc;
start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
indirect_vring_addr_offset;
start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
@@ -560,25 +560,14 @@ static int
virtio_crypto_vring_start(struct virtqueue *vq)
{
struct virtio_crypto_hw *hw = vq->hw;
- int i, size = vq->vq_nentries;
- struct vring *vr = &vq->vq_ring;
uint8_t *ring_mem = vq->vq_ring_virt_mem;
PMD_INIT_FUNC_TRACE();
- vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
- vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
- vq->vq_free_cnt = vq->vq_nentries;
-
- /* Chain all the descriptors in the ring with an END */
- for (i = 0; i < size - 1; i++)
- vr->desc[i].next = (uint16_t)(i + 1);
- vr->desc[i].next = VQ_RING_DESC_CHAIN_END;
-
- /*
- * Disable device(host) interrupting guest
- */
- virtqueue_disable_intr(vq);
+ if (ring_mem == NULL) {
+ VIRTIO_CRYPTO_INIT_LOG_ERR("virtqueue ring memory is NULL");
+ return -EINVAL;
+ }
/*
* Set guest physical address of the virtqueue
@@ -599,8 +588,9 @@ virtio_crypto_ctrlq_start(struct rte_cryptodev *dev)
struct virtio_crypto_hw *hw = dev->data->dev_private;
if (hw->cvq) {
- virtio_crypto_vring_start(hw->cvq);
- VIRTQUEUE_DUMP((struct virtqueue *)hw->cvq);
+ rte_spinlock_init(&hw->cvq->lock);
+ virtio_crypto_vring_start(virtcrypto_cq_to_vq(hw->cvq));
+ VIRTQUEUE_DUMP(virtcrypto_cq_to_vq(hw->cvq));
}
}
diff --git a/drivers/crypto/virtio/virtio_rxtx.h b/drivers/crypto/virtio/virtio_rxtx.h
new file mode 100644
index 0000000000..2771062e44
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_rxtx.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell.
+ */
+
+#ifndef _VIRTIO_RXTX_H_
+#define _VIRTIO_RXTX_H_
+
+struct virtcrypto_data {
+ const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
+ rte_iova_t hdr_mem; /**< IOVA of the hdr memzone. */
+};
+
+#endif /* _VIRTIO_RXTX_H_ */
diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c
index 3e2db1ebd2..3a9ec98b18 100644
--- a/drivers/crypto/virtio/virtqueue.c
+++ b/drivers/crypto/virtio/virtqueue.c
@@ -7,7 +7,9 @@
#include <rte_mbuf.h>
#include <rte_crypto.h>
#include <rte_malloc.h>
+#include <rte_errno.h>
+#include "virtio_cryptodev.h"
#include "virtqueue.h"
void
@@ -18,7 +20,7 @@ virtqueue_disable_intr(struct virtqueue *vq)
* not to interrupt when it consumes packets
* Note: this is only considered a hint to the host
*/
- vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+ vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}
void
@@ -32,10 +34,193 @@ virtqueue_detatch_unused(struct virtqueue *vq)
for (idx = 0; idx < vq->vq_nentries; idx++) {
cop = vq->vq_descx[idx].crypto_op;
if (cop) {
- rte_pktmbuf_free(cop->sym->m_src);
- rte_pktmbuf_free(cop->sym->m_dst);
+ if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ rte_pktmbuf_free(cop->sym->m_src);
+ rte_pktmbuf_free(cop->sym->m_dst);
+ }
+
rte_crypto_op_free(cop);
vq->vq_descx[idx].crypto_op = NULL;
}
}
}
+
+static void
+virtio_init_vring(struct virtqueue *vq)
+{
+ int size = vq->vq_nentries;
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+ struct vring *vr = &vq->vq_split.ring;
+
+ PMD_INIT_FUNC_TRACE();
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+
+ vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_split(vr->desc, size);
+
+ /*
+ * Disable device(host) interrupting guest
+ */
+ virtqueue_disable_intr(vq);
+}
+
+static int
+virtio_alloc_queue_headers(struct virtqueue *vq, int numa_node, const char *name)
+{
+ char hdr_name[VIRTQUEUE_MAX_NAME_SZ];
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ ssize_t size;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTCRYPTO_DATAQ:
+ /*
+ * Op cookie for every ring element. This memory can be optimized
+ * based on descriptor requirements. For example, if a descriptor
+ * is indirect, then the cookie can be shared among all the
+ * descriptors in the chain.
+ */
+ size = vq->vq_nentries * sizeof(struct virtio_crypto_op_cookie);
+ hdr_mz = &vq->dq.hdr_mz;
+ hdr_mem = &vq->dq.hdr_mem;
+ break;
+ case VTCRYPTO_CTRLQ:
+ /* One control operation at a time in control queue */
+ size = sizeof(struct virtio_pmd_ctrl);
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ default:
+ return 0;
+ }
+
+ snprintf(hdr_name, sizeof(hdr_name), "%s_hdr", name);
+ *hdr_mz = rte_memzone_reserve_aligned(hdr_name, size, numa_node,
+ RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE);
+ if (*hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ *hdr_mz = rte_memzone_lookup(hdr_name);
+ if (*hdr_mz == NULL)
+ return -ENOMEM;
+ }
+
+ memset((*hdr_mz)->addr, 0, size);
+
+ if (vq->hw->use_va)
+ *hdr_mem = (uintptr_t)(*hdr_mz)->addr;
+ else
+ *hdr_mem = (uintptr_t)(*hdr_mz)->iova;
+
+ return 0;
+}
+
+static void
+virtio_free_queue_headers(struct virtqueue *vq)
+{
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTCRYPTO_DATAQ:
+ hdr_mz = &vq->dq.hdr_mz;
+ hdr_mem = &vq->dq.hdr_mem;
+ break;
+ case VTCRYPTO_CTRLQ:
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ default:
+ return;
+ }
+
+ rte_memzone_free(*hdr_mz);
+ *hdr_mz = NULL;
+ *hdr_mem = 0;
+}
+
+struct virtqueue *
+virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index, uint16_t num,
+ int node, const char *name)
+{
+ struct virtqueue *vq;
+ const struct rte_memzone *mz;
+ unsigned int size;
+
+ size = sizeof(*vq) + num * sizeof(struct vq_desc_extra);
+ size = RTE_ALIGN_CEIL(size, RTE_CACHE_LINE_SIZE);
+
+ vq = rte_zmalloc_socket(name, size, RTE_CACHE_LINE_SIZE, node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return NULL;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq: %p", vq);
+ vq->hw = hw;
+ vq->vq_queue_index = index;
+ vq->vq_nentries = num;
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(num, VIRTIO_PCI_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(name, vq->vq_ring_size, node,
+ RTE_MEMZONE_IOVA_CONTIG, VIRTIO_PCI_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(name);
+ if (mz == NULL)
+ goto free_vq;
+ }
+
+ memset(mz->addr, 0, mz->len);
+ vq->mz = mz;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ if (hw->use_va)
+ vq->vq_ring_mem = (uintptr_t)mz->addr;
+ else
+ vq->vq_ring_mem = mz->iova;
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64, vq->vq_ring_mem);
+ PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: %p", vq->vq_ring_virt_mem);
+
+ virtio_init_vring(vq);
+
+ if (virtio_alloc_queue_headers(vq, node, name)) {
+ PMD_INIT_LOG(ERR, "Failed to alloc queue headers");
+ goto free_mz;
+ }
+
+ return vq;
+
+free_mz:
+ rte_memzone_free(mz);
+free_vq:
+ rte_free(vq);
+
+ return NULL;
+}
+
+void
+virtcrypto_queue_free(struct virtqueue *vq)
+{
+ virtio_free_queue_headers(vq);
+ rte_memzone_free(vq->mz);
+ rte_free(vq);
+}
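
The alloc/free pair above gives a single owner to both the vring memzone and
the per-queue header memzone, keeping setup and teardown symmetric. A
hypothetical setup path, sketched only to show the intended pairing (the queue
index, size and name are illustrative):

/* Illustrative sketch: bring up one data queue, then tear it down. */
struct virtqueue *vq;

vq = virtcrypto_queue_alloc(hw, /*index=*/0, /*num=*/256,
		SOCKET_ID_ANY, "dev0_dataq0");
if (vq == NULL)
	return -ENOMEM;

if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
	virtcrypto_queue_free(vq);	/* frees headers, vring memzone and vq */
	return -EINVAL;
}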
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
index cb08bea94f..eb6580ff52 100644
--- a/drivers/crypto/virtio/virtqueue.h
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -12,10 +12,12 @@
#include <rte_memzone.h>
#include <rte_mempool.h>
+#include "virtio_cvq.h"
#include "virtio_pci.h"
#include "virtio_ring.h"
#include "virtio_logs.h"
#include "virtio_crypto.h"
+#include "virtio_rxtx.h"
struct rte_mbuf;
@@ -46,11 +48,26 @@ struct vq_desc_extra {
void *crypto_op;
void *cookie;
uint16_t ndescs;
+ uint16_t next;
};
+#define virtcrypto_dq_to_vq(dvq) container_of(dvq, struct virtqueue, dq)
+#define virtcrypto_cq_to_vq(cvq) container_of(cvq, struct virtqueue, cq)
+
struct virtqueue {
/**< virtio_crypto_hw structure pointer. */
struct virtio_crypto_hw *hw;
+ union {
+ struct {
+ /**< vring keeping desc, used and avail */
+ struct vring ring;
+ } vq_split;
+ };
+ union {
+ struct virtcrypto_data dq;
+ struct virtcrypto_ctl cq;
+ };
+
/**< mem zone to populate RX ring. */
const struct rte_memzone *mz;
/**< memzone to populate hdr and request. */
@@ -62,7 +79,6 @@ struct virtqueue {
unsigned int vq_ring_size;
phys_addr_t vq_ring_mem; /**< physical address of vring */
- struct vring vq_ring; /**< vring keeping desc, used and avail */
uint16_t vq_free_cnt; /**< num of desc available */
uint16_t vq_nentries; /**< vring desc numbers */
@@ -101,6 +117,11 @@ void virtqueue_disable_intr(struct virtqueue *vq);
*/
void virtqueue_detatch_unused(struct virtqueue *vq);
+struct virtqueue *virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index,
+ uint16_t num, int node, const char *name);
+
+void virtcrypto_queue_free(struct virtqueue *vq);
+
static inline int
virtqueue_full(const struct virtqueue *vq)
{
@@ -108,13 +129,13 @@ virtqueue_full(const struct virtqueue *vq)
}
#define VIRTQUEUE_NUSED(vq) \
- ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
+ ((uint16_t)((vq)->vq_split.ring.used->idx - (vq)->vq_used_cons_idx))
static inline void
vq_update_avail_idx(struct virtqueue *vq)
{
virtio_wmb();
- vq->vq_ring.avail->idx = vq->vq_avail_idx;
+ vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
}
static inline void
@@ -129,15 +150,15 @@ vq_update_avail_ring(struct virtqueue *vq, uint16_t desc_idx)
* descriptor.
*/
avail_idx = (uint16_t)(vq->vq_avail_idx & (vq->vq_nentries - 1));
- if (unlikely(vq->vq_ring.avail->ring[avail_idx] != desc_idx))
- vq->vq_ring.avail->ring[avail_idx] = desc_idx;
+ if (unlikely(vq->vq_split.ring.avail->ring[avail_idx] != desc_idx))
+ vq->vq_split.ring.avail->ring[avail_idx] = desc_idx;
vq->vq_avail_idx++;
}
static inline int
virtqueue_kick_prepare(struct virtqueue *vq)
{
- return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
+ return !(vq->vq_split.ring.used->flags & VRING_USED_F_NO_NOTIFY);
}
static inline void
@@ -151,21 +172,69 @@ virtqueue_notify(struct virtqueue *vq)
VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq);
}
+/* Chain all the descriptors in the ring with an END */
+static inline void
+vring_desc_init_split(struct vring_desc *dp, uint16_t n)
+{
+ uint16_t i;
+
+ for (i = 0; i < n - 1; i++)
+ dp[i].next = (uint16_t)(i + 1);
+ dp[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
+static inline int
+virtio_get_queue_type(struct virtio_crypto_hw *hw, uint16_t vq_idx)
+{
+ if (vq_idx == hw->max_dataqueues)
+ return VTCRYPTO_CTRLQ;
+ else
+ return VTCRYPTO_DATAQ;
+}
+
+/* virtqueue_nused has a load-acquire or rte_io_rmb inside */
+static inline uint16_t
+virtqueue_nused(const struct virtqueue *vq)
+{
+ uint16_t idx;
+
+ if (vq->hw->weak_barriers) {
+ /**
+ * x86 prefers using rte_smp_rmb over rte_atomic_load_explicit as it
+ * reports a slightly better perf, which comes from the saved
+ * branch by the compiler.
+ * The if and else branches are identical with the smp and io
+ * barriers both defined as compiler barriers on x86.
+ */
+#ifdef RTE_ARCH_X86_64
+ idx = vq->vq_split.ring.used->idx;
+ virtio_rmb(0);
+#else
+ idx = rte_atomic_load_explicit(&(vq)->vq_split.ring.used->idx,
+ rte_memory_order_acquire);
+#endif
+ } else {
+ idx = vq->vq_split.ring.used->idx;
+ rte_io_rmb();
+ }
+ return idx - vq->vq_used_cons_idx;
+}
+
/**
* Dump virtqueue internal structures, for debug purpose only.
*/
#define VIRTQUEUE_DUMP(vq) do { \
uint16_t used_idx, nused; \
- used_idx = (vq)->vq_ring.used->idx; \
+ used_idx = (vq)->vq_split.ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
VIRTIO_CRYPTO_INIT_LOG_DBG(\
"VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
" avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
" avail.flags=0x%x; used.flags=0x%x", \
(vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
- (vq)->vq_desc_head_idx, (vq)->vq_ring.avail->idx, \
- (vq)->vq_used_cons_idx, (vq)->vq_ring.used->idx, \
- (vq)->vq_ring.avail->flags, (vq)->vq_ring.used->flags); \
+ (vq)->vq_desc_head_idx, (vq)->vq_split.ring.avail->idx, \
+ (vq)->vq_used_cons_idx, (vq)->vq_split.ring.used->idx, \
+ (vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
} while (0)
#endif /* _VIRTQUEUE_H_ */
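
virtqueue_nused() above is the barrier-correct way to poll for completions,
and the control path in virtio_cvq.c uses it exactly this way. A minimal
polling sketch for reference:

/* Busy-wait until the device marks at least one descriptor as used,
 * then read how many entries may safely be consumed.
 */
while (virtqueue_nused(vq) == 0)
	usleep(100);

uint16_t n = virtqueue_nused(vq);	/* up to n used entries are ready */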
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v2 2/2] crypto/virtio: add packed ring support
2025-01-07 18:08 ` [v2 0/2] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
2025-01-07 18:08 ` [v2 1/2] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
@ 2025-01-07 18:08 ` Gowrishankar Muthukrishnan
1 sibling, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:08 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, Gowrishankar Muthukrishnan
Add packed virtqueue (VIRTIO_F_RING_PACKED) support to the virtio crypto PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/virtio/virtio_cryptodev.c | 125 +++++++
drivers/crypto/virtio/virtio_cryptodev.h | 13 +-
drivers/crypto/virtio/virtio_cvq.c | 103 +++++-
drivers/crypto/virtio/virtio_pci.h | 25 ++
drivers/crypto/virtio/virtio_ring.h | 59 ++-
drivers/crypto/virtio/virtio_rxtx.c | 442 ++++++++++++++++++++++-
drivers/crypto/virtio/virtqueue.c | 50 ++-
drivers/crypto/virtio/virtqueue.h | 132 ++++++-
8 files changed, 920 insertions(+), 29 deletions(-)
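As background for the diff below: a packed virtqueue replaces the split
avail/used rings with a single descriptor array, where ownership is signalled
per descriptor through the AVAIL/USED flag bits and a per-queue wrap counter.
The desc_is_used() helper referenced throughout this patch conventionally
reduces to a flag comparison against that counter; a sketch under that
assumption (the helper body itself is not shown in this diff):

/* Sketched shape of desc_is_used(): a descriptor is "used" when its
 * AVAIL and USED bits agree with each other and with the ring's
 * used_wrap_counter. A real implementation loads desc->flags with
 * acquire semantics (or issues rte_io_rmb) before testing.
 */
static inline int
desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
{
	uint16_t flags = desc->flags;
	int avail = !!(flags & VRING_PACKED_DESC_F_AVAIL);
	int used = !!(flags & VRING_PACKED_DESC_F_USED);

	return avail == used && used == vq->vq_packed.used_wrap_counter;
}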
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 9a11cbe90a..d3db4f898e 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -869,6 +869,125 @@ virtio_crypto_clear_session(
rte_free(ctrl);
}
+static void
+virtio_crypto_clear_session_packed(
+ struct rte_cryptodev *dev,
+ struct virtio_crypto_op_ctrl_req *ctrl)
+{
+ struct virtio_crypto_hw *hw;
+ struct virtqueue *vq;
+ struct vring_packed_desc *desc;
+ uint8_t *status;
+ uint8_t needed = 1;
+ uint32_t head;
+ uint64_t malloc_phys_addr;
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ uint64_t session_id = ctrl->u.destroy_session.session_id;
+ uint16_t flags;
+ uint8_t nb_descs = 0;
+
+ hw = dev->data->dev_private;
+ vq = virtcrypto_cq_to_vq(hw->cvq);
+ head = vq->vq_avail_idx;
+ flags = vq->vq_packed.cached_flags;
+
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
+ "vq = %p", vq->vq_desc_head_idx, vq);
+
+ if (vq->vq_free_cnt < needed) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR(
+ "vq->vq_free_cnt = %d is less than %d, "
+ "not enough", vq->vq_free_cnt, needed);
+ return;
+ }
+
+ malloc_phys_addr = rte_malloc_virt2iova(ctrl);
+
+ /* status part */
+ status = &(((struct virtio_crypto_inhdr *)
+ ((uint8_t *)ctrl + len_op_ctrl_req))->status);
+ *status = VIRTIO_CRYPTO_ERR;
+
+ /* indirect desc vring part */
+ desc = vq->vq_packed.ring.desc;
+
+ /* ctrl request part */
+ desc[head].addr = malloc_phys_addr;
+ desc[head].len = len_op_ctrl_req;
+ desc[head].flags = VRING_DESC_F_NEXT | vq->vq_packed.cached_flags;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ /* status part */
+ desc[vq->vq_avail_idx].addr = malloc_phys_addr + len_op_ctrl_req;
+ desc[vq->vq_avail_idx].len = len_inhdr;
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ virtqueue_store_flags_packed(&desc[head], VRING_DESC_F_NEXT | flags,
+ vq->hw->weak_barriers);
+
+ virtio_wmb(vq->hw->weak_barriers);
+ virtqueue_notify(vq);
+
+ /* wait for used desc in virtqueue
+ * desc_is_used has a load-acquire or rte_io_rmb inside
+ */
+ rte_rmb();
+ while (!desc_is_used(&desc[head], vq)) {
+ rte_rmb();
+ usleep(100);
+ }
+
+ /* now get used descriptors */
+ vq->vq_free_cnt += nb_descs;
+ vq->vq_used_cons_idx += nb_descs;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d "
+ "vq->vq_queue_idx=%d "
+ "vq->vq_avail_idx=%d "
+ "vq->vq_used_cons_idx=%d "
+ "vq->vq_packed.cached_flags=0x%x "
+ "vq->vq_packed.used_wrap_counter=%d",
+ vq->vq_free_cnt,
+ vq->vq_queue_index,
+ vq->vq_avail_idx,
+ vq->vq_used_cons_idx,
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
+
+ if (*status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Close session failed "
+ "status=%"PRIu32", session_id=%"PRIu64"",
+ *status, session_id);
+ rte_free(ctrl);
+ return;
+ }
+
+ VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_free_cnt=%d "
+ "vq->vq_desc_head_idx=%d",
+ vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Close session %"PRIu64" successfully ",
+ session_id);
+
+ rte_free(ctrl);
+}
+
static void
virtio_crypto_sym_clear_session(
struct rte_cryptodev *dev,
@@ -906,6 +1025,9 @@ virtio_crypto_sym_clear_session(
ctrl->header.queue_id = 0;
ctrl->u.destroy_session.session_id = session->session_id;
+ if (vtpci_with_packed_queue(dev->data->dev_private))
+ return virtio_crypto_clear_session_packed(dev, ctrl);
+
return virtio_crypto_clear_session(dev, ctrl);
}
@@ -943,6 +1065,9 @@ virtio_crypto_asym_clear_session(
ctrl->header.queue_id = 0;
ctrl->u.destroy_session.session_id = session->session_id;
+ if (vtpci_with_packed_queue(dev->data->dev_private))
+ return virtio_crypto_clear_session_packed(dev, ctrl);
+
return virtio_crypto_clear_session(dev, ctrl);
}
diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
index 215bce7863..b4bdd9800b 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.h
+++ b/drivers/crypto/virtio/virtio_cryptodev.h
@@ -10,13 +10,21 @@
#include "virtio_ring.h"
/* Features desired/implemented by this driver. */
-#define VIRTIO_CRYPTO_PMD_GUEST_FEATURES (1ULL << VIRTIO_F_VERSION_1)
+#define VIRTIO_CRYPTO_PMD_GUEST_FEATURES (1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_RING_F_INDIRECT_DESC | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
#define NUM_ENTRY_VIRTIO_CRYPTO_OP 7
#define VIRTIO_CRYPTO_MAX_IV_SIZE 16
+#define VIRTIO_CRYPTO_MAX_MSG_SIZE 512
+#define VIRTIO_CRYPTO_MAX_SIGN_SIZE 512
+#define VIRTIO_CRYPTO_MAX_CIPHER_SIZE 1024
#define VIRTIO_CRYPTO_MAX_KEY_SIZE 256
@@ -34,6 +42,9 @@ struct virtio_crypto_op_cookie {
struct virtio_crypto_inhdr inhdr;
struct vring_desc desc[NUM_ENTRY_VIRTIO_CRYPTO_OP];
uint8_t iv[VIRTIO_CRYPTO_MAX_IV_SIZE];
+ uint8_t message[VIRTIO_CRYPTO_MAX_MSG_SIZE];
+ uint8_t sign[VIRTIO_CRYPTO_MAX_SIGN_SIZE];
+ uint8_t cipher[VIRTIO_CRYPTO_MAX_CIPHER_SIZE];
};
/*
diff --git a/drivers/crypto/virtio/virtio_cvq.c b/drivers/crypto/virtio/virtio_cvq.c
index 91c6b5a9f2..18d8e54848 100644
--- a/drivers/crypto/virtio/virtio_cvq.c
+++ b/drivers/crypto/virtio/virtio_cvq.c
@@ -12,7 +12,102 @@
#include "virtqueue.h"
static struct virtio_pmd_ctrl *
-virtio_send_command(struct virtcrypto_ctl *cvq,
+virtio_send_command_packed(struct virtcrypto_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ struct virtqueue *vq = virtcrypto_cq_to_vq(cvq);
+ int head;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
+ struct virtio_pmd_ctrl *result;
+ uint16_t flags;
+ int sum = 0;
+ int nb_descs = 0;
+ int k;
+
+ /*
+ * Format is enforced in qemu code:
+ * One TX packet for header;
+ * At least one TX packet per argument;
+ * One RX packet for ACK.
+ */
+ head = vq->vq_avail_idx;
+ flags = vq->vq_packed.cached_flags;
+ desc[head].addr = cvq->hdr_mem;
+ desc[head].len = sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ for (k = 0; k < dnum; k++) {
+ desc[vq->vq_avail_idx].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req)
+ + sizeof(ctrl->input) + sizeof(uint8_t) * sum;
+ desc[vq->vq_avail_idx].len = dlen[k];
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
+ vq->vq_packed.cached_flags;
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^=
+ VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+ }
+
+ desc[vq->vq_avail_idx].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req);
+ desc[vq->vq_avail_idx].len = sizeof(ctrl->input);
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
+ vq->vq_packed.cached_flags;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ virtqueue_store_flags_packed(&desc[head], VRING_DESC_F_NEXT | flags,
+ vq->hw->weak_barriers);
+
+ virtio_wmb(vq->hw->weak_barriers);
+ cvq->notify_queue(vq, cvq->notify_cookie);
+
+ /* wait for used desc in virtqueue
+ * desc_is_used has a load-acquire or rte_io_rmb inside
+ */
+ while (!desc_is_used(&desc[head], vq))
+ usleep(100);
+
+ /* now get used descriptors */
+ vq->vq_free_cnt += nb_descs;
+ vq->vq_used_cons_idx += nb_descs;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d "
+ "vq->vq_avail_idx=%d "
+ "vq->vq_used_cons_idx=%d "
+ "vq->vq_packed.cached_flags=0x%x "
+ "vq->vq_packed.used_wrap_counter=%d",
+ vq->vq_free_cnt,
+ vq->vq_avail_idx,
+ vq->vq_used_cons_idx,
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
+
+ result = cvq->hdr_mz->addr;
+ return result;
+}
+
+static struct virtio_pmd_ctrl *
+virtio_send_command_split(struct virtcrypto_ctl *cvq,
struct virtio_pmd_ctrl *ctrl,
int *dlen, int dnum)
{
@@ -122,7 +217,11 @@ virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *c
}
memcpy(cvq->hdr_mz->addr, ctrl, sizeof(struct virtio_pmd_ctrl));
- result = virtio_send_command(cvq, ctrl, dlen, dnum);
+
+ if (vtpci_with_packed_queue(vq->hw))
+ result = virtio_send_command_packed(cvq, ctrl, dlen, dnum);
+ else
+ result = virtio_send_command_split(cvq, ctrl, dlen, dnum);
rte_spinlock_unlock(&cvq->lock);
return result->input.status;
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 7e94c6a3c5..79945cb88e 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -83,6 +83,25 @@ struct virtqueue;
#define VIRTIO_F_VERSION_1 32
#define VIRTIO_F_IOMMU_PLATFORM 33
+#define VIRTIO_F_RING_PACKED 34
+
+/*
+ * Inorder feature indicates that all buffers are used by the device
+ * in the same order in which they have been made available.
+ */
+#define VIRTIO_F_IN_ORDER 35
+
+/*
+ * This feature indicates that memory accesses by the driver and the device
+ * are ordered in a way described by the platform.
+ */
+#define VIRTIO_F_ORDER_PLATFORM 36
+
+/*
+ * This feature indicates that the driver passes extra data (besides
+ * identifying the virtqueue) in its device notifications.
+ */
+#define VIRTIO_F_NOTIFICATION_DATA 38
/* The Guest publishes the used index for which it expects an interrupt
* at the end of the avail ring. Host should ignore the avail->flags field.
@@ -230,6 +249,12 @@ vtpci_with_feature(struct virtio_crypto_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int
+vtpci_with_packed_queue(struct virtio_crypto_hw *hw)
+{
+ return vtpci_with_feature(hw, VIRTIO_F_RING_PACKED);
+}
+
/*
* Function declaration from virtio_pci.c
*/
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index e5b0ad74d2..c74d1172b7 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -16,6 +16,15 @@
/* This means the buffer contains a list of buffer descriptors. */
#define VRING_DESC_F_INDIRECT 4
+/* This flag means the descriptor was made available by the driver */
+#define VRING_PACKED_DESC_F_AVAIL (1 << 7)
+/* This flag means the descriptor was used by the device */
+#define VRING_PACKED_DESC_F_USED (1 << 15)
+
+/* Frequently used combinations */
+#define VRING_PACKED_DESC_F_AVAIL_USED (VRING_PACKED_DESC_F_AVAIL | \
+ VRING_PACKED_DESC_F_USED)
+
/* The Host uses this in used->flags to advise the Guest: don't kick me
* when you add a buffer. It's unreliable, so it's simply an
* optimization. Guest will still kick if it's out of buffers.
@@ -57,6 +66,32 @@ struct vring_used {
struct vring_used_elem ring[];
};
+/* For support of packed virtqueues in Virtio 1.1, the format of descriptors
+ * looks like this.
+ */
+struct vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+};
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+ unsigned int num;
+ rte_iova_t desc_iova;
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
+};
+
struct vring {
unsigned int num;
rte_iova_t desc_iova;
@@ -99,10 +134,18 @@ struct vring {
#define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
static inline size_t
-vring_size(unsigned int num, unsigned long align)
+vring_size(struct virtio_crypto_hw *hw, unsigned int num, unsigned long align)
{
size_t size;
+ if (vtpci_with_packed_queue(hw)) {
+ size = num * sizeof(struct vring_packed_desc);
+ size += sizeof(struct vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_packed_desc_event);
+ return size;
+ }
+
size = num * sizeof(struct vring_desc);
size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
size = RTE_ALIGN_CEIL(size, align);
@@ -124,6 +167,20 @@ vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
}
+static inline void
+vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
+{
+ vr->num = num;
+ vr->desc = (struct vring_packed_desc *)p;
+ vr->desc_iova = iova;
+ vr->driver = (struct vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct vring_packed_desc));
+ vr->device = (struct vring_packed_desc_event *)
+ RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
+ sizeof(struct vring_packed_desc_event)), align);
+}
+
static inline void
vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
{
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index 0e8a716917..8d6ff98fa5 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -4,6 +4,7 @@
#include <cryptodev_pmd.h>
#include "virtqueue.h"
+#include "virtio_ring.h"
#include "virtio_cryptodev.h"
#include "virtio_crypto_algs.h"
@@ -107,6 +108,91 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq,
return i;
}
+static uint16_t
+virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
+ struct rte_crypto_op **rx_pkts, uint16_t num)
+{
+ struct rte_crypto_op *cop;
+ uint16_t used_idx;
+ uint16_t i;
+ struct virtio_crypto_inhdr *inhdr;
+ struct virtio_crypto_op_cookie *op_cookie;
+ struct vring_packed_desc *desc;
+
+ desc = vq->vq_packed.ring.desc;
+
+ /* Caller does the check */
+ for (i = 0; i < num ; i++) {
+ used_idx = vq->vq_used_cons_idx;
+ if (!desc_is_used(&desc[used_idx], vq))
+ break;
+
+ cop = (struct rte_crypto_op *)
+ vq->vq_descx[used_idx].crypto_op;
+ if (unlikely(cop == NULL)) {
+ VIRTIO_CRYPTO_RX_LOG_DBG("vring descriptor with no "
+ "mbuf cookie at %u",
+ vq->vq_used_cons_idx);
+ break;
+ }
+
+ op_cookie = (struct virtio_crypto_op_cookie *)
+ vq->vq_descx[used_idx].cookie;
+ inhdr = &(op_cookie->inhdr);
+ switch (inhdr->status) {
+ case VIRTIO_CRYPTO_OK:
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ break;
+ case VIRTIO_CRYPTO_ERR:
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_BADMSG:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_NOTSUPP:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_INVSESS:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+ vq->packets_received_failed++;
+ break;
+ default:
+ break;
+ }
+
+ vq->packets_received_total++;
+
+ if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN)
+ memcpy(cop->asym->rsa.sign.data, op_cookie->sign,
+ cop->asym->rsa.sign.length);
+ else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY)
+ memcpy(cop->asym->rsa.message.data, op_cookie->message,
+ cop->asym->rsa.message.length);
+ else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT)
+ memcpy(cop->asym->rsa.cipher.data, op_cookie->cipher,
+ cop->asym->rsa.cipher.length);
+ else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT)
+ memcpy(cop->asym->rsa.message.data, op_cookie->message,
+ cop->asym->rsa.message.length);
+
+ rx_pkts[i] = cop;
+ rte_mempool_put(vq->mpool, op_cookie);
+
+ vq->vq_free_cnt += 4;
+ vq->vq_used_cons_idx += 4;
+ vq->vq_descx[used_idx].crypto_op = NULL;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+ }
+
+ return i;
+}
+
static int
virtqueue_crypto_sym_pkt_header_arrange(
struct rte_crypto_op *cop,
@@ -188,7 +274,7 @@ virtqueue_crypto_sym_pkt_header_arrange(
}
static int
-virtqueue_crypto_sym_enqueue_xmit(
+virtqueue_crypto_sym_enqueue_xmit_split(
struct virtqueue *txvq,
struct rte_crypto_op *cop)
{
@@ -343,6 +429,160 @@ virtqueue_crypto_sym_enqueue_xmit(
return 0;
}
+static int
+virtqueue_crypto_sym_enqueue_xmit_packed(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_packed_desc *start_dp;
+ struct vring_packed_desc *desc;
+ uint64_t op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ uint32_t iv_addr_offset =
+ offsetof(struct virtio_crypto_op_cookie, iv);
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_SYM_SESS_PRIV(cop->sym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ uint32_t hash_result_len = 0;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+ struct virtio_crypto_alg_chain_session_para *para;
+ uint16_t flags = VRING_DESC_F_NEXT;
+
+ if (unlikely(sym_op->m_src->nb_segs != 1))
+ return -EMSGSIZE;
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+ if (unlikely(session == NULL))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ op_data_req_phys_addr = rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+
+ if (virtqueue_crypto_sym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ desc = &txvq->vq_packed.ring.desc[txvq->vq_desc_head_idx];
+ needed = 4;
+ flags |= txvq->vq_packed.cached_flags;
+
+ start_dp = desc;
+ idx = 0;
+
+ /* packed vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = op_data_req_phys_addr;
+ desc[idx].len = req_data_len;
+ desc[idx++].flags = flags;
+
+ /* packed vring: iv of cipher */
+ if (session->iv.length) {
+ if (cop->phys_addr)
+ desc[idx].addr = cop->phys_addr + session->iv.offset;
+ else {
+ if (session->iv.length > VIRTIO_CRYPTO_MAX_IV_SIZE)
+ return -ENOMEM;
+
+ rte_memcpy(crypto_op_cookie->iv,
+ rte_crypto_op_ctod_offset(cop,
+ uint8_t *, session->iv.offset),
+ session->iv.length);
+ desc[idx].addr = op_data_req_phys_addr + iv_addr_offset;
+ }
+
+ desc[idx].len = session->iv.length;
+ desc[idx++].flags = flags;
+ }
+
+ /* packed vring: additional auth data */
+ if (session->aad.length) {
+ desc[idx].addr = session->aad.phys_addr;
+ desc[idx].len = session->aad.length;
+ desc[idx++].flags = flags;
+ }
+
+ /* packed vring: src data */
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_src, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (sym_op->m_dst) {
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_dst, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ } else {
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_src, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ }
+ desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
+
+ /* packed vring: digest result */
+ para = &(session->ctrl.hdr.u.sym_create_session.u.chain.para);
+ if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN)
+ hash_result_len = para->u.hash_param.hash_result_len;
+ if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)
+ hash_result_len = para->u.mac_param.hash_result_len;
+ if (hash_result_len > 0) {
+ desc[idx].addr = sym_op->auth.digest.phys_addr;
+ desc[idx].len = hash_result_len;
+ desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
+ }
+
+ /* packed vring: last part, status returned */
+ desc[idx].addr = op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+	txvq->vq_desc_head_idx = (txvq->vq_desc_head_idx + idx) & (txvq->vq_nentries - 1);
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ virtqueue_store_flags_packed(&start_dp[0],
+ start_dp[0].flags | flags,
+ txvq->hw->weak_barriers);
+ virtio_wmb(txvq->hw->weak_barriers);
+
+ return 0;
+}
+
+static int
+virtqueue_crypto_sym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ if (vtpci_with_packed_queue(txvq->hw))
+ return virtqueue_crypto_sym_enqueue_xmit_packed(txvq, cop);
+ else
+ return virtqueue_crypto_sym_enqueue_xmit_split(txvq, cop);
+}
+
static int
virtqueue_crypto_asym_pkt_header_arrange(
struct rte_crypto_op *cop,
@@ -399,7 +639,7 @@ virtqueue_crypto_asym_pkt_header_arrange(
}
static int
-virtqueue_crypto_asym_enqueue_xmit(
+virtqueue_crypto_asym_enqueue_xmit_split(
struct virtqueue *txvq,
struct rte_crypto_op *cop)
{
@@ -533,6 +773,179 @@ virtqueue_crypto_asym_enqueue_xmit(
return 0;
}
+static int
+virtqueue_crypto_asym_enqueue_xmit_packed(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t num_entry;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_packed_desc *start_dp;
+ struct vring_packed_desc *desc;
+ uint64_t op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_ASYM_SESS_PRIV(cop->asym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+ uint16_t flags = VRING_DESC_F_NEXT;
+
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+		VIRTIO_CRYPTO_TX_LOG_ERR("cannot get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ op_data_req_phys_addr = rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+ if (virtqueue_crypto_asym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ desc = &txvq->vq_packed.ring.desc[txvq->vq_desc_head_idx];
+ needed = 4;
+ flags |= txvq->vq_packed.cached_flags;
+
+ start_dp = desc;
+ idx = 0;
+
+ /* packed vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = op_data_req_phys_addr;
+ desc[idx].len = sizeof(struct virtio_crypto_op_data_req);
+ desc[idx++].flags = flags;
+
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ /* packed vring: src data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->message, asym_op->rsa.message.data,
+ asym_op->rsa.message.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.sign.length > VIRTIO_CRYPTO_MAX_SIGN_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, sign);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ /* packed vring: src data */
+ if (asym_op->rsa.sign.length > VIRTIO_CRYPTO_MAX_SIGN_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->sign, asym_op->rsa.sign.data,
+ asym_op->rsa.sign.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, sign);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ /* packed vring: src data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->message, asym_op->rsa.message.data,
+ asym_op->rsa.message.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.cipher.length > VIRTIO_CRYPTO_MAX_CIPHER_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, cipher);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ /* packed vring: src data */
+ if (asym_op->rsa.cipher.length > VIRTIO_CRYPTO_MAX_CIPHER_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->cipher, asym_op->rsa.cipher.data,
+ asym_op->rsa.cipher.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, cipher);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else {
+ VIRTIO_CRYPTO_TX_LOG_ERR("Invalid asym op");
+ return -EINVAL;
+ }
+
+ /* packed vring: last part, status returned */
+ desc[idx].addr = op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = txvq->vq_packed.cached_flags | VRING_DESC_F_WRITE;
+
+ num_entry = idx;
+ txvq->vq_avail_idx += num_entry;
+ if (txvq->vq_avail_idx >= txvq->vq_nentries) {
+ txvq->vq_avail_idx -= txvq->vq_nentries;
+ txvq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+ txvq->vq_desc_head_idx = (txvq->vq_desc_head_idx + idx) & (txvq->vq_nentries - 1);
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ virtqueue_store_flags_packed(&start_dp[0],
+ start_dp[0].flags | flags,
+ txvq->hw->weak_barriers);
+ virtio_wmb(txvq->hw->weak_barriers);
+ return 0;
+}
+
+static int
+virtqueue_crypto_asym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ if (vtpci_with_packed_queue(txvq->hw))
+ return virtqueue_crypto_asym_enqueue_xmit_packed(txvq, cop);
+ else
+ return virtqueue_crypto_asym_enqueue_xmit_split(txvq, cop);
+}
+
static int
virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
struct rte_crypto_op *cop)
@@ -620,21 +1033,20 @@ virtio_crypto_pkt_rx_burst(void *tx_queue, struct rte_crypto_op **rx_pkts,
uint16_t nb_pkts)
{
struct virtqueue *txvq = tx_queue;
- uint16_t nb_used, num, nb_rx;
-
- nb_used = VIRTQUEUE_NUSED(txvq);
+ uint16_t num, nb_rx;
- virtio_rmb();
-
- num = (uint16_t)(likely(nb_used <= nb_pkts) ? nb_used : nb_pkts);
- num = (uint16_t)(likely(num <= VIRTIO_MBUF_BURST_SZ)
- ? num : VIRTIO_MBUF_BURST_SZ);
+ virtio_rmb(0);
+ num = RTE_MIN(VIRTIO_MBUF_BURST_SZ, nb_pkts);
if (num == 0)
return 0;
- nb_rx = virtqueue_dequeue_burst_rx(txvq, rx_pkts, num);
- VIRTIO_CRYPTO_RX_LOG_DBG("used:%d dequeue:%d", nb_used, num);
+ if (likely(vtpci_with_packed_queue(txvq->hw)))
+ nb_rx = virtqueue_dequeue_burst_rx_packed(txvq, rx_pkts, num);
+ else
+ nb_rx = virtqueue_dequeue_burst_rx(txvq, rx_pkts, num);
+
+	VIRTIO_CRYPTO_RX_LOG_DBG("dequeued:%d max:%d", nb_rx, num);
return nb_rx;
}
@@ -700,6 +1112,12 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
}
if (likely(nb_tx)) {
+ if (vtpci_with_packed_queue(txvq->hw)) {
+ virtqueue_notify(txvq);
+ VIRTIO_CRYPTO_TX_LOG_DBG("Notified backend after xmit");
+ return nb_tx;
+ }
+
vq_update_avail_idx(txvq);
if (unlikely(virtqueue_kick_prepare(txvq))) {
diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c
index 3a9ec98b18..a6b47d4466 100644
--- a/drivers/crypto/virtio/virtqueue.c
+++ b/drivers/crypto/virtio/virtqueue.c
@@ -12,8 +12,23 @@
#include "virtio_cryptodev.h"
#include "virtqueue.h"
-void
-virtqueue_disable_intr(struct virtqueue *vq)
+static inline void
+virtqueue_disable_intr_packed(struct virtqueue *vq)
+{
+	/*
+	 * Set RING_EVENT_FLAGS_DISABLE to hint the host not to interrupt
+	 * when it consumes packets. Note: this is only a hint to the host.
+	 */
+ if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver->desc_event_flags =
+ vq->vq_packed.event_flags_shadow;
+ }
+}
+
+static inline void
+virtqueue_disable_intr_split(struct virtqueue *vq)
{
/*
* Set VRING_AVAIL_F_NO_INTERRUPT to hint host
@@ -23,6 +38,15 @@ virtqueue_disable_intr(struct virtqueue *vq)
vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}
+void
+virtqueue_disable_intr(struct virtqueue *vq)
+{
+ if (vtpci_with_packed_queue(vq->hw))
+ virtqueue_disable_intr_packed(vq);
+ else
+ virtqueue_disable_intr_split(vq);
+}
+
void
virtqueue_detatch_unused(struct virtqueue *vq)
{
@@ -50,7 +74,6 @@ virtio_init_vring(struct virtqueue *vq)
{
int size = vq->vq_nentries;
uint8_t *ring_mem = vq->vq_ring_virt_mem;
- struct vring *vr = &vq->vq_split.ring;
PMD_INIT_FUNC_TRACE();
@@ -62,10 +85,16 @@ virtio_init_vring(struct virtqueue *vq)
vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
vq->vq_free_cnt = vq->vq_nentries;
memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
-
- vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
- vring_desc_init_split(vr->desc, size);
-
+ if (vtpci_with_packed_queue(vq->hw)) {
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, vq->vq_ring_mem,
+ VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ } else {
+ struct vring *vr = &vq->vq_split.ring;
+
+ vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_split(vr->desc, size);
+ }
/*
* Disable device(host) interrupting guest
*/
@@ -171,11 +200,16 @@ virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index, uint16_t num
vq->hw = hw;
vq->vq_queue_index = index;
vq->vq_nentries = num;
+ if (vtpci_with_packed_queue(hw)) {
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ }
/*
* Reserve a memzone for vring elements
*/
- size = vring_size(num, VIRTIO_PCI_VRING_ALIGN);
+ size = vring_size(hw, num, VIRTIO_PCI_VRING_ALIGN);
vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
index eb6580ff52..97a3ace48c 100644
--- a/drivers/crypto/virtio/virtqueue.h
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -28,9 +28,78 @@ struct rte_mbuf;
* sufficient.
*
*/
-#define virtio_mb() rte_smp_mb()
-#define virtio_rmb() rte_smp_rmb()
-#define virtio_wmb() rte_smp_wmb()
+static inline void
+virtio_mb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+ else
+ rte_mb();
+}
+
+static inline void
+virtio_rmb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_acquire);
+ else
+ rte_io_rmb();
+}
+
+static inline void
+virtio_wmb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_release);
+ else
+ rte_io_wmb();
+}
+
+static inline uint16_t
+virtqueue_fetch_flags_packed(struct vring_packed_desc *dp,
+ uint8_t weak_barriers)
+{
+ uint16_t flags;
+
+ if (weak_barriers) {
+/* x86 prefers rte_io_rmb over rte_atomic_load_explicit as it reports better
+ * performance (~1.5%), which comes from the branch the compiler saves.
+ * The if and else branches are identical on all platforms except Arm.
+ */
+#ifdef RTE_ARCH_ARM
+ flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire);
+#else
+ flags = dp->flags;
+ rte_io_rmb();
+#endif
+ } else {
+ flags = dp->flags;
+ rte_io_rmb();
+ }
+
+ return flags;
+}
+
+static inline void
+virtqueue_store_flags_packed(struct vring_packed_desc *dp,
+ uint16_t flags, uint8_t weak_barriers)
+{
+ if (weak_barriers) {
+/* x86 prefers rte_io_wmb over rte_atomic_store_explicit as it reports better
+ * performance (~1.5%), which comes from the branch the compiler saves.
+ * The if and else branches are identical on all platforms except Arm.
+ */
+#ifdef RTE_ARCH_ARM
+ rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release);
+#else
+ rte_io_wmb();
+ dp->flags = flags;
+#endif
+ } else {
+ rte_io_wmb();
+ dp->flags = flags;
+ }
+}
#define VIRTQUEUE_MAX_NAME_SZ 32
@@ -62,7 +131,16 @@ struct virtqueue {
/**< vring keeping desc, used and avail */
struct vring ring;
} vq_split;
+
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ bool used_wrap_counter;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ } vq_packed;
};
+
union {
struct virtcrypto_data dq;
struct virtcrypto_ctl cq;
@@ -134,7 +212,7 @@ virtqueue_full(const struct virtqueue *vq)
static inline void
vq_update_avail_idx(struct virtqueue *vq)
{
- virtio_wmb();
+ virtio_wmb(0);
vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
}
@@ -172,6 +250,30 @@ virtqueue_notify(struct virtqueue *vq)
VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq);
}
+static inline int
+desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
+{
+ uint16_t used, avail, flags;
+
+ flags = virtqueue_fetch_flags_packed(desc, vq->hw->weak_barriers);
+ used = !!(flags & VRING_PACKED_DESC_F_USED);
+ avail = !!(flags & VRING_PACKED_DESC_F_AVAIL);
+
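+	/* A packed descriptor is used once its AVAIL and USED bits are
+	 * equal and match the driver's current used wrap counter.
+	 */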
+ return avail == used && used == vq->vq_packed.used_wrap_counter;
+}
+
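+/* Assign ids to the packed descriptors and chain the shadow vq_descx
+ * entries, terminating the chain with VQ_RING_DESC_CHAIN_END.
+ */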
+static inline void
+vring_desc_init_packed(struct virtqueue *vq, int n)
+{
+ int i;
+ for (i = 0; i < n - 1; i++) {
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
/* Chain all the descriptors in the ring with an END */
static inline void
vring_desc_init_split(struct vring_desc *dp, uint16_t n)
@@ -223,7 +325,7 @@ virtqueue_nused(const struct virtqueue *vq)
/**
* Dump virtqueue internal structures, for debug purpose only.
*/
-#define VIRTQUEUE_DUMP(vq) do { \
+#define VIRTQUEUE_SPLIT_DUMP(vq) do { \
uint16_t used_idx, nused; \
used_idx = (vq)->vq_split.ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
@@ -237,4 +339,24 @@ virtqueue_nused(const struct virtqueue *vq)
(vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
} while (0)
+#define VIRTQUEUE_PACKED_DUMP(vq) do { \
+ uint16_t nused; \
+ nused = (vq)->vq_nentries - (vq)->vq_free_cnt; \
+ VIRTIO_CRYPTO_INIT_LOG_DBG(\
+ "VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
+ " avail_idx=%d; used_cons_idx=%d;" \
+ " avail.flags=0x%x; wrap_counter=%d", \
+ (vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
+ (vq)->vq_desc_head_idx, (vq)->vq_avail_idx, \
+ (vq)->vq_used_cons_idx, (vq)->vq_packed.cached_flags, \
+ (vq)->vq_packed.used_wrap_counter); \
+} while (0)
+
+#define VIRTQUEUE_DUMP(vq) do { \
+ if (vtpci_with_packed_queue((vq)->hw)) \
+ VIRTQUEUE_PACKED_DUMP(vq); \
+ else \
+ VIRTQUEUE_SPLIT_DUMP(vq); \
+} while (0)
+
#endif /* _VIRTQUEUE_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v2 0/4] crypto/virtio: add vDPA backend support
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
` (18 preceding siblings ...)
2025-01-07 18:08 ` [v2 0/2] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
@ 2025-01-07 18:44 ` Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 1/4] common/virtio: move vDPA to common directory Gowrishankar Muthukrishnan
` (3 more replies)
19 siblings, 4 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:44 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, David Marchand, Gowrishankar Muthukrishnan
This series adds vDPA backend support to the virtio crypto PMD.
Depends-on: patch-149672 ("vhost: include AKCIPHER algorithms in crypto_config")
Depends-on: patch-148913 ("crypto/virtio: remove redundant crypto queue free")
Depends-on: series-34293 ("crypto/virtio: add packed ring support")
Depends-on: series-34291 ("crypto/virtio: add RSA support")
v2:
- split from v1 series.
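For reference, below is a minimal sketch of how an application could
probe the resulting PMD on top of a vhost-vdpa device. The vdev name
"crypto_virtio_user0" and its "path" devarg are assumptions drawn from
the files added in this series, not something this cover letter
guarantees:

#include <stdio.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_cryptodev.h>

int main(int argc, char **argv)
{
	/* Equivalent to passing --vdev on the command line; the vdev
	 * name and "path" devarg below are assumed, not authoritative.
	 */
	char *eal_argv[] = {
		argv[0],
		"--vdev", "crypto_virtio_user0,path=/dev/vhost-vdpa-0",
	};
	(void)argc;

	if (rte_eal_init((int)RTE_DIM(eal_argv), eal_argv) < 0)
		return -1;

	printf("%u crypto device(s) probed\n",
		(unsigned int)rte_cryptodev_count());
	return 0;
}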
Gowrishankar Muthukrishnan (4):
common/virtio: move vDPA to common directory
common/virtio: support cryptodev in vdev setup
crypto/virtio: add vhost backend to virtio_user
test/crypto: test virtio_crypto_user PMD
app/test/test_cryptodev.c | 7 +
app/test/test_cryptodev.h | 1 +
app/test/test_cryptodev_asym.c | 15 +
drivers/common/virtio/meson.build | 13 +
drivers/common/virtio/version.map | 8 +
.../virtio/virtio_user/vhost.h | 4 -
.../common/virtio/virtio_user/vhost_logs.h | 15 +
.../virtio/virtio_user/vhost_vdpa.c | 31 +-
drivers/crypto/virtio/meson.build | 9 +-
drivers/crypto/virtio/virtio_cryptodev.c | 57 +-
drivers/crypto/virtio/virtio_cryptodev.h | 3 +
drivers/crypto/virtio/virtio_pci.h | 7 +
drivers/crypto/virtio/virtio_ring.h | 6 -
.../crypto/virtio/virtio_user/vhost_vdpa.c | 312 +++++++
.../virtio/virtio_user/virtio_user_dev.c | 776 ++++++++++++++++++
.../virtio/virtio_user/virtio_user_dev.h | 88 ++
drivers/crypto/virtio/virtio_user_cryptodev.c | 587 +++++++++++++
drivers/meson.build | 1 +
drivers/net/virtio/meson.build | 3 +-
drivers/net/virtio/virtio_user/vhost_kernel.c | 3 +-
drivers/net/virtio/virtio_user/vhost_user.c | 3 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 5 +-
.../net/virtio/virtio_user/virtio_user_dev.h | 24 +-
23 files changed, 1927 insertions(+), 51 deletions(-)
create mode 100644 drivers/common/virtio/meson.build
create mode 100644 drivers/common/virtio/version.map
rename drivers/{net => common}/virtio/virtio_user/vhost.h (97%)
create mode 100644 drivers/common/virtio/virtio_user/vhost_logs.h
rename drivers/{net => common}/virtio/virtio_user/vhost_vdpa.c (96%)
create mode 100644 drivers/crypto/virtio/virtio_user/vhost_vdpa.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.h
create mode 100644 drivers/crypto/virtio/virtio_user_cryptodev.c
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v2 1/4] common/virtio: move vDPA to common directory
2025-01-07 18:44 ` [v2 0/4] crypto/virtio: add vDPA backend support Gowrishankar Muthukrishnan
@ 2025-01-07 18:44 ` Gowrishankar Muthukrishnan
2025-02-06 9:40 ` Maxime Coquelin
2025-01-07 18:44 ` [v2 2/4] common/virtio: support cryptodev in vdev setup Gowrishankar Muthukrishnan
` (2 subsequent siblings)
3 siblings, 1 reply; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:44 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, David Marchand, Gowrishankar Muthukrishnan
Move vhost-vdpa backend implementation into common folder.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
Depends-on: patch-149672 ("vhost: include AKCIPHER algorithms in crypto_config")
Depends-on: patch-148913 ("crypto/virtio: remove redundant crypto queue free")
Depends-on: series-34293 ("crypto/virtio: add packed ring support")
Depends-on: series-34291 ("crypto/virtio: add RSA support")
drivers/common/virtio/meson.build | 13 +++++++++
drivers/common/virtio/version.map | 8 ++++++
.../virtio/virtio_user/vhost.h | 4 ---
.../common/virtio/virtio_user/vhost_logs.h | 15 ++++++++++
.../virtio/virtio_user/vhost_vdpa.c | 28 ++++++++++++++++++-
drivers/crypto/virtio/meson.build | 2 +-
drivers/meson.build | 1 +
drivers/net/virtio/meson.build | 3 +-
drivers/net/virtio/virtio_user/vhost_kernel.c | 3 +-
drivers/net/virtio/virtio_user/vhost_user.c | 3 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 5 ++--
.../net/virtio/virtio_user/virtio_user_dev.h | 24 +++++++++-------
12 files changed, 87 insertions(+), 22 deletions(-)
create mode 100644 drivers/common/virtio/meson.build
create mode 100644 drivers/common/virtio/version.map
rename drivers/{net => common}/virtio/virtio_user/vhost.h (97%)
create mode 100644 drivers/common/virtio/virtio_user/vhost_logs.h
rename drivers/{net => common}/virtio/virtio_user/vhost_vdpa.c (97%)
diff --git a/drivers/common/virtio/meson.build b/drivers/common/virtio/meson.build
new file mode 100644
index 0000000000..a19db9e088
--- /dev/null
+++ b/drivers/common/virtio/meson.build
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Marvell
+
+if is_windows
+ build = false
+ reason = 'not supported on Windows'
+ subdir_done()
+endif
+
+if is_linux
+ sources += files('virtio_user/vhost_vdpa.c')
+ deps += ['bus_vdev']
+endif
diff --git a/drivers/common/virtio/version.map b/drivers/common/virtio/version.map
new file mode 100644
index 0000000000..fb98a0ab2e
--- /dev/null
+++ b/drivers/common/virtio/version.map
@@ -0,0 +1,8 @@
+INTERNAL {
+ global:
+
+ virtio_ops_vdpa;
+ vhost_logtype_driver;
+
+ local: *;
+};
diff --git a/drivers/net/virtio/virtio_user/vhost.h b/drivers/common/virtio/virtio_user/vhost.h
similarity index 97%
rename from drivers/net/virtio/virtio_user/vhost.h
rename to drivers/common/virtio/virtio_user/vhost.h
index eee3a4bc47..adf6551681 100644
--- a/drivers/net/virtio/virtio_user/vhost.h
+++ b/drivers/common/virtio/virtio_user/vhost.h
@@ -11,10 +11,6 @@
#include <rte_errno.h>
-#include "../virtio.h"
-#include "../virtio_logs.h"
-#include "../virtqueue.h"
-
struct vhost_vring_state {
unsigned int index;
unsigned int num;
diff --git a/drivers/common/virtio/virtio_user/vhost_logs.h b/drivers/common/virtio/virtio_user/vhost_logs.h
new file mode 100644
index 0000000000..653d4d0b5e
--- /dev/null
+++ b/drivers/common/virtio/virtio_user/vhost_logs.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2025 Marvell
+ */
+
+#ifndef _VHOST_LOGS_H_
+#define _VHOST_LOGS_H_
+
+#include <rte_log.h>
+
+extern int vhost_logtype_driver;
+#define RTE_LOGTYPE_VHOST_DRIVER vhost_logtype_driver
+#define PMD_DRV_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, VHOST_DRIVER, "%s(): ", __func__, __VA_ARGS__)
+
+#endif /* _VHOST_LOGS_H_ */
diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/common/virtio/virtio_user/vhost_vdpa.c
similarity index 97%
rename from drivers/net/virtio/virtio_user/vhost_vdpa.c
rename to drivers/common/virtio/virtio_user/vhost_vdpa.c
index bc3e2a9af5..af5c4cbf33 100644
--- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
+++ b/drivers/common/virtio/virtio_user/vhost_vdpa.c
@@ -9,11 +9,12 @@
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
+#include <inttypes.h>
#include <rte_memory.h>
#include "vhost.h"
-#include "virtio_user_dev.h"
+#include "vhost_logs.h"
struct vhost_vdpa_data {
int vhostfd;
@@ -100,6 +101,29 @@ vhost_vdpa_ioctl(int fd, uint64_t request, void *arg)
return 0;
}
+struct virtio_hw {
+ struct virtqueue **vqs;
+};
+
+struct virtio_user_dev {
+ union {
+ struct virtio_hw hw;
+ uint8_t dummy[256];
+ };
+
+ void *backend_data;
+ uint16_t **notify_area;
+ char path[PATH_MAX];
+ bool hw_cvq;
+ uint16_t max_queue_pairs;
+ uint64_t device_features;
+ bool *qp_enabled;
+};
+
+#define VIRTIO_NET_F_CTRL_VQ 17
+#define VIRTIO_F_IOMMU_PLATFORM 33
+#define VIRTIO_ID_NETWORK 0x01
+
static int
vhost_vdpa_set_owner(struct virtio_user_dev *dev)
{
@@ -715,3 +739,5 @@ struct virtio_user_backend_ops virtio_ops_vdpa = {
.map_notification_area = vhost_vdpa_map_notification_area,
.unmap_notification_area = vhost_vdpa_unmap_notification_area,
};
+
+RTE_LOG_REGISTER_SUFFIX(vhost_logtype_driver, driver, NOTICE);
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index d2c3b3ad07..8181c8296f 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -8,7 +8,7 @@ if is_windows
endif
includes += include_directories('../../../lib/vhost')
-deps += 'bus_pci'
+deps += ['bus_pci', 'common_virtio']
sources = files(
'virtio_cryptodev.c',
'virtio_cvq.c',
diff --git a/drivers/meson.build b/drivers/meson.build
index 495e21b54a..2f0d312479 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -17,6 +17,7 @@ subdirs = [
'common/nitrox', # depends on bus.
'common/qat', # depends on bus.
'common/sfc_efx', # depends on bus.
+ 'common/virtio', # depends on bus.
'mempool', # depends on common and bus.
'dma', # depends on common and bus.
'net', # depends on common, bus, mempool
diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
index 02742da5c2..bbd73741f0 100644
--- a/drivers/net/virtio/meson.build
+++ b/drivers/net/virtio/meson.build
@@ -54,7 +54,6 @@ if is_linux
'virtio_user/vhost_kernel.c',
'virtio_user/vhost_kernel_tap.c',
'virtio_user/vhost_user.c',
- 'virtio_user/vhost_vdpa.c',
'virtio_user/virtio_user_dev.c')
- deps += ['bus_vdev']
+ deps += ['bus_vdev', 'common_virtio']
endif
diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c b/drivers/net/virtio/virtio_user/vhost_kernel.c
index e42bb35935..3a95ce34d6 100644
--- a/drivers/net/virtio/virtio_user/vhost_kernel.c
+++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
@@ -11,9 +11,10 @@
#include <rte_memory.h>
-#include "vhost.h"
+#include "virtio_user/vhost.h"
#include "virtio_user_dev.h"
#include "vhost_kernel_tap.h"
+#include "../virtqueue.h"
struct vhost_kernel_data {
int *vhostfds;
diff --git a/drivers/net/virtio/virtio_user/vhost_user.c b/drivers/net/virtio/virtio_user/vhost_user.c
index c10252506b..2a158aff7e 100644
--- a/drivers/net/virtio/virtio_user/vhost_user.c
+++ b/drivers/net/virtio/virtio_user/vhost_user.c
@@ -16,7 +16,8 @@
#include <rte_string_fns.h>
#include <rte_fbarray.h>
-#include "vhost.h"
+#include "virtio_user/vhost_logs.h"
+#include "virtio_user/vhost.h"
#include "virtio_user_dev.h"
struct vhost_user_data {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 2997d2bd26..7105c54b43 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -20,10 +20,11 @@
#include <rte_malloc.h>
#include <rte_io.h>
-#include "vhost.h"
-#include "virtio.h"
+#include "virtio_user/vhost.h"
#include "virtio_user_dev.h"
+#include "../virtqueue.h"
#include "../virtio_ethdev.h"
+#include "../virtio_logs.h"
#define VIRTIO_USER_MEM_EVENT_CLB_NAME "virtio_user_mem_event_clb"
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index 66400b3b62..70604d6956 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -25,26 +25,36 @@ struct virtio_user_queue {
};
struct virtio_user_dev {
- struct virtio_hw hw;
+ union {
+ struct virtio_hw hw;
+ uint8_t dummy[256];
+ };
+
+ void *backend_data;
+ uint16_t **notify_area;
+ char path[PATH_MAX];
+ bool hw_cvq;
+ uint16_t max_queue_pairs;
+ uint64_t device_features; /* supported features by device */
+ bool *qp_enabled;
+
enum virtio_user_backend_type backend_type;
bool is_server; /* server or client mode */
int *callfds;
int *kickfds;
int mac_specified;
- uint16_t max_queue_pairs;
+
uint16_t queue_pairs;
uint32_t queue_size;
uint64_t features; /* the negotiated features with driver,
* and will be sync with device
*/
- uint64_t device_features; /* supported features by device */
uint64_t frontend_features; /* enabled frontend features */
uint64_t unsupported_features; /* unsupported features mask */
uint8_t status;
uint16_t net_status;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
- char path[PATH_MAX];
char *ifname;
union {
@@ -54,18 +64,12 @@ struct virtio_user_dev {
} vrings;
struct virtio_user_queue *packed_queues;
- bool *qp_enabled;
struct virtio_user_backend_ops *ops;
pthread_mutex_t mutex;
bool started;
- bool hw_cvq;
struct virtqueue *scvq;
-
- void *backend_data;
-
- uint16_t **notify_area;
};
int virtio_user_dev_set_features(struct virtio_user_dev *dev);
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v2 2/4] common/virtio: support cryptodev in vdev setup
2025-01-07 18:44 ` [v2 0/4] crypto/virtio: add vDPA backend support Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 1/4] common/virtio: move vDPA to common directory Gowrishankar Muthukrishnan
@ 2025-01-07 18:44 ` Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 3/4] crypto/virtio: add vhost backend to virtio_user Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 4/4] test/crypto: test virtio_crypto_user PMD Gowrishankar Muthukrishnan
3 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:44 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, David Marchand, Gowrishankar Muthukrishnan
Support cryptodev in vdev setup.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/common/virtio/virtio_user/vhost_vdpa.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/common/virtio/virtio_user/vhost_vdpa.c b/drivers/common/virtio/virtio_user/vhost_vdpa.c
index af5c4cbf33..d143967aaa 100644
--- a/drivers/common/virtio/virtio_user/vhost_vdpa.c
+++ b/drivers/common/virtio/virtio_user/vhost_vdpa.c
@@ -123,6 +123,7 @@ struct virtio_user_dev {
#define VIRTIO_NET_F_CTRL_VQ 17
#define VIRTIO_F_IOMMU_PLATFORM 33
#define VIRTIO_ID_NETWORK 0x01
+#define VIRTIO_ID_CRYPTO 20
static int
vhost_vdpa_set_owner(struct virtio_user_dev *dev)
@@ -561,7 +562,7 @@ vhost_vdpa_setup(struct virtio_user_dev *dev)
}
if (ioctl(data->vhostfd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 ||
- did != VIRTIO_ID_NETWORK) {
+ (did != VIRTIO_ID_NETWORK && did != VIRTIO_ID_CRYPTO)) {
PMD_DRV_LOG(ERR, "Invalid vdpa device ID: %u", did);
close(data->vhostfd);
free(data);
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v2 3/4] crypto/virtio: add vhost backend to virtio_user
2025-01-07 18:44 ` [v2 0/4] crypto/virtio: add vDPA backend support Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 1/4] common/virtio: move vDPA to common directory Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 2/4] common/virtio: support cryptodev in vdev setup Gowrishankar Muthukrishnan
@ 2025-01-07 18:44 ` Gowrishankar Muthukrishnan
2025-02-06 13:14 ` Maxime Coquelin
2025-01-07 18:44 ` [v2 4/4] test/crypto: test virtio_crypto_user PMD Gowrishankar Muthukrishnan
3 siblings, 1 reply; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:44 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, David Marchand, Gowrishankar Muthukrishnan
Add vhost backend to virtio_user crypto.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/virtio/meson.build | 7 +
drivers/crypto/virtio/virtio_cryptodev.c | 57 +-
drivers/crypto/virtio/virtio_cryptodev.h | 3 +
drivers/crypto/virtio/virtio_pci.h | 7 +
drivers/crypto/virtio/virtio_ring.h | 6 -
.../crypto/virtio/virtio_user/vhost_vdpa.c | 312 +++++++
.../virtio/virtio_user/virtio_user_dev.c | 776 ++++++++++++++++++
.../virtio/virtio_user/virtio_user_dev.h | 88 ++
drivers/crypto/virtio/virtio_user_cryptodev.c | 587 +++++++++++++
9 files changed, 1815 insertions(+), 28 deletions(-)
create mode 100644 drivers/crypto/virtio/virtio_user/vhost_vdpa.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.h
create mode 100644 drivers/crypto/virtio/virtio_user_cryptodev.c
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index 8181c8296f..e5bce54cca 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -16,3 +16,10 @@ sources = files(
'virtio_rxtx.c',
'virtqueue.c',
)
+
+if is_linux
+ sources += files('virtio_user_cryptodev.c',
+ 'virtio_user/vhost_vdpa.c',
+ 'virtio_user/virtio_user_dev.c')
+ deps += ['bus_vdev', 'common_virtio']
+endif
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index d3db4f898e..c9f20cb338 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -544,24 +544,12 @@ virtio_crypto_init_device(struct rte_cryptodev *cryptodev,
return 0;
}
-/*
- * This function is based on probe() function
- * It returns 0 on success.
- */
-static int
-crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
- struct rte_cryptodev_pmd_init_params *init_params)
+int
+crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
+ struct rte_pci_device *pci_dev)
{
- struct rte_cryptodev *cryptodev;
struct virtio_crypto_hw *hw;
- PMD_INIT_FUNC_TRACE();
-
- cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
- init_params);
- if (cryptodev == NULL)
- return -ENODEV;
-
cryptodev->driver_id = cryptodev_virtio_driver_id;
cryptodev->dev_ops = &virtio_crypto_dev_ops;
@@ -578,16 +566,41 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
hw->dev_id = cryptodev->data->dev_id;
hw->virtio_dev_capabilities = virtio_capabilities;
- VIRTIO_CRYPTO_INIT_LOG_DBG("dev %d vendorID=0x%x deviceID=0x%x",
- cryptodev->data->dev_id, pci_dev->id.vendor_id,
- pci_dev->id.device_id);
+ if (pci_dev) {
+ /* pci device init */
+ VIRTIO_CRYPTO_INIT_LOG_DBG("dev %d vendorID=0x%x deviceID=0x%x",
+ cryptodev->data->dev_id, pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
- /* pci device init */
- if (vtpci_cryptodev_init(pci_dev, hw))
+ if (vtpci_cryptodev_init(pci_dev, hw))
+ return -1;
+ }
+
+ if (virtio_crypto_init_device(cryptodev, features) < 0)
return -1;
- if (virtio_crypto_init_device(cryptodev,
- VIRTIO_CRYPTO_PMD_GUEST_FEATURES) < 0)
+ return 0;
+}
+
+/*
+ * This function is based on probe() function
+ * It returns 0 on success.
+ */
+static int
+crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
+ struct rte_cryptodev_pmd_init_params *init_params)
+{
+ struct rte_cryptodev *cryptodev;
+
+ PMD_INIT_FUNC_TRACE();
+
+ cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
+ init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ if (crypto_virtio_dev_init(cryptodev, VIRTIO_CRYPTO_PMD_GUEST_FEATURES,
+ pci_dev) < 0)
return -1;
rte_cryptodev_pmd_probing_finish(cryptodev);
diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
index b4bdd9800b..95a1e09dca 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.h
+++ b/drivers/crypto/virtio/virtio_cryptodev.h
@@ -74,4 +74,7 @@ uint16_t virtio_crypto_pkt_rx_burst(void *tx_queue,
struct rte_crypto_op **tx_pkts,
uint16_t nb_pkts);
+int crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
+ struct rte_pci_device *pci_dev);
+
#endif /* _VIRTIO_CRYPTODEV_H_ */
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 79945cb88e..c75777e005 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -20,6 +20,9 @@ struct virtqueue;
#define VIRTIO_CRYPTO_PCI_VENDORID 0x1AF4
#define VIRTIO_CRYPTO_PCI_DEVICEID 0x1054
+/* VirtIO device IDs. */
+#define VIRTIO_ID_CRYPTO 20
+
/* VirtIO ABI version, this must match exactly. */
#define VIRTIO_PCI_ABI_VERSION 0
@@ -56,8 +59,12 @@ struct virtqueue;
#define VIRTIO_CONFIG_STATUS_DRIVER 0x02
#define VIRTIO_CONFIG_STATUS_DRIVER_OK 0x04
#define VIRTIO_CONFIG_STATUS_FEATURES_OK 0x08
+#define VIRTIO_CONFIG_STATUS_DEV_NEED_RESET 0x40
#define VIRTIO_CONFIG_STATUS_FAILED 0x80
+/* The alignment to use between consumer and producer parts of vring. */
+#define VIRTIO_VRING_ALIGN 4096
+
/*
* Each virtqueue indirect descriptor list must be physically contiguous.
* To allow us to malloc(9) each list individually, limit the number
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index c74d1172b7..4b418f6e60 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -181,12 +181,6 @@ vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
sizeof(struct vring_packed_desc_event)), align);
}
-static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
-{
- vring_init_split(vr, p, 0, align, num);
-}
-
/*
* The following is used with VIRTIO_RING_F_EVENT_IDX.
* Assuming a given event_idx value from the other size, if we have
diff --git a/drivers/crypto/virtio/virtio_user/vhost_vdpa.c b/drivers/crypto/virtio/virtio_user/vhost_vdpa.c
new file mode 100644
index 0000000000..41696c4095
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/vhost_vdpa.c
@@ -0,0 +1,312 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#include <sys/ioctl.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#include <rte_memory.h>
+
+#include "virtio_user/vhost.h"
+#include "virtio_user/vhost_logs.h"
+
+#include "virtio_user_dev.h"
+#include "../virtio_pci.h"
+
+struct vhost_vdpa_data {
+ int vhostfd;
+ uint64_t protocol_features;
+};
+
+#define VHOST_VDPA_SUPPORTED_BACKEND_FEATURES \
+ (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 | \
+ 1ULL << VHOST_BACKEND_F_IOTLB_BATCH)
+
+/* vhost kernel & vdpa ioctls */
+#define VHOST_VIRTIO 0xAF
+#define VHOST_GET_FEATURES _IOR(VHOST_VIRTIO, 0x00, __u64)
+#define VHOST_SET_FEATURES _IOW(VHOST_VIRTIO, 0x00, __u64)
+#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)
+#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)
+#define VHOST_SET_LOG_BASE _IOW(VHOST_VIRTIO, 0x04, __u64)
+#define VHOST_SET_LOG_FD _IOW(VHOST_VIRTIO, 0x07, int)
+#define VHOST_SET_VRING_NUM _IOW(VHOST_VIRTIO, 0x10, struct vhost_vring_state)
+#define VHOST_SET_VRING_ADDR _IOW(VHOST_VIRTIO, 0x11, struct vhost_vring_addr)
+#define VHOST_SET_VRING_BASE _IOW(VHOST_VIRTIO, 0x12, struct vhost_vring_state)
+#define VHOST_GET_VRING_BASE _IOWR(VHOST_VIRTIO, 0x12, struct vhost_vring_state)
+#define VHOST_SET_VRING_KICK _IOW(VHOST_VIRTIO, 0x20, struct vhost_vring_file)
+#define VHOST_SET_VRING_CALL _IOW(VHOST_VIRTIO, 0x21, struct vhost_vring_file)
+#define VHOST_SET_VRING_ERR _IOW(VHOST_VIRTIO, 0x22, struct vhost_vring_file)
+#define VHOST_NET_SET_BACKEND _IOW(VHOST_VIRTIO, 0x30, struct vhost_vring_file)
+#define VHOST_VDPA_GET_DEVICE_ID _IOR(VHOST_VIRTIO, 0x70, __u32)
+#define VHOST_VDPA_GET_STATUS _IOR(VHOST_VIRTIO, 0x71, __u8)
+#define VHOST_VDPA_SET_STATUS _IOW(VHOST_VIRTIO, 0x72, __u8)
+#define VHOST_VDPA_GET_CONFIG _IOR(VHOST_VIRTIO, 0x73, struct vhost_vdpa_config)
+#define VHOST_VDPA_SET_CONFIG _IOW(VHOST_VIRTIO, 0x74, struct vhost_vdpa_config)
+#define VHOST_VDPA_SET_VRING_ENABLE _IOW(VHOST_VIRTIO, 0x75, struct vhost_vring_state)
+#define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
+#define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
+
+/* no alignment requirement */
+struct vhost_iotlb_msg {
+ uint64_t iova;
+ uint64_t size;
+ uint64_t uaddr;
+#define VHOST_ACCESS_RO 0x1
+#define VHOST_ACCESS_WO 0x2
+#define VHOST_ACCESS_RW 0x3
+ uint8_t perm;
+#define VHOST_IOTLB_MISS 1
+#define VHOST_IOTLB_UPDATE 2
+#define VHOST_IOTLB_INVALIDATE 3
+#define VHOST_IOTLB_ACCESS_FAIL 4
+#define VHOST_IOTLB_BATCH_BEGIN 5
+#define VHOST_IOTLB_BATCH_END 6
+ uint8_t type;
+};
+
+#define VHOST_IOTLB_MSG_V2 0x2
+
+struct vhost_vdpa_config {
+ uint32_t off;
+ uint32_t len;
+ uint8_t buf[];
+};
+
+struct vhost_msg {
+ uint32_t type;
+ uint32_t reserved;
+ union {
+ struct vhost_iotlb_msg iotlb;
+ uint8_t padding[64];
+ };
+};
+
+static int
+vhost_vdpa_ioctl(int fd, uint64_t request, void *arg)
+{
+ int ret;
+
+ ret = ioctl(fd, request, arg);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Vhost-vDPA ioctl %"PRIu64" failed (%s)",
+ request, strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_get_protocol_features(struct virtio_user_dev *dev, uint64_t *features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_BACKEND_FEATURES, features);
+}
+
+static int
+vhost_vdpa_set_protocol_features(struct virtio_user_dev *dev, uint64_t features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_BACKEND_FEATURES, &features);
+}
+
+static int
+vhost_vdpa_get_features(struct virtio_user_dev *dev, uint64_t *features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ int ret;
+
+ ret = vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_FEATURES, features);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to get features");
+ return -1;
+ }
+
+ /* Negotiated vDPA backend features */
+ ret = vhost_vdpa_get_protocol_features(dev, &data->protocol_features);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to get backend features");
+ return -1;
+ }
+
+ data->protocol_features &= VHOST_VDPA_SUPPORTED_BACKEND_FEATURES;
+
+ ret = vhost_vdpa_set_protocol_features(dev, data->protocol_features);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to set backend features");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_set_vring_enable(struct virtio_user_dev *dev, struct vhost_vring_state *state)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_VDPA_SET_VRING_ENABLE, state);
+}
+
+/**
+ * Set up environment to talk with a vhost vdpa backend.
+ *
+ * @return
+ * - (-1) if fail to set up;
+ * - (>=0) if successful.
+ */
+static int
+vhost_vdpa_setup(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data;
+ uint32_t did = (uint32_t)-1;
+
+ data = malloc(sizeof(*data));
+ if (!data) {
+		PMD_DRV_LOG(ERR, "(%s) Failed to allocate backend data", dev->path);
+ return -1;
+ }
+
+ data->vhostfd = open(dev->path, O_RDWR);
+ if (data->vhostfd < 0) {
+ PMD_DRV_LOG(ERR, "Failed to open %s: %s",
+ dev->path, strerror(errno));
+ free(data);
+ return -1;
+ }
+
+ if (ioctl(data->vhostfd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 ||
+ did != VIRTIO_ID_CRYPTO) {
+ PMD_DRV_LOG(ERR, "Invalid vdpa device ID: %u", did);
+ close(data->vhostfd);
+ free(data);
+ return -1;
+ }
+
+ dev->backend_data = data;
+
+ return 0;
+}
+
+static int
+vhost_vdpa_cvq_enable(struct virtio_user_dev *dev, int enable)
+{
+ struct vhost_vring_state state = {
+ .index = dev->max_queue_pairs,
+ .num = enable,
+ };
+
+ return vhost_vdpa_set_vring_enable(dev, &state);
+}
+
+static int
+vhost_vdpa_enable_queue_pair(struct virtio_user_dev *dev,
+ uint16_t pair_idx,
+ int enable)
+{
+ struct vhost_vring_state state = {
+ .index = pair_idx,
+ .num = enable,
+ };
+
+ if (dev->qp_enabled[pair_idx] == enable)
+ return 0;
+
+ if (vhost_vdpa_set_vring_enable(dev, &state))
+ return -1;
+
+ dev->qp_enabled[pair_idx] = enable;
+ return 0;
+}
+
+static int
+vhost_vdpa_update_link_state(struct virtio_user_dev *dev)
+{
+	/* TODO: workaround until there is a cleaner way to query crypto device status */
+ dev->crypto_status = VIRTIO_CRYPTO_S_HW_READY;
+ return 0;
+}
+
+static int
+vhost_vdpa_get_nr_vrings(struct virtio_user_dev *dev)
+{
+ int nr_vrings = dev->max_queue_pairs;
+
+ return nr_vrings;
+}
+
+static int
+vhost_vdpa_unmap_notification_area(struct virtio_user_dev *dev)
+{
+ int i, nr_vrings;
+
+ nr_vrings = vhost_vdpa_get_nr_vrings(dev);
+
+ for (i = 0; i < nr_vrings; i++) {
+ if (dev->notify_area[i])
+ munmap(dev->notify_area[i], getpagesize());
+ }
+ free(dev->notify_area);
+ dev->notify_area = NULL;
+
+ return 0;
+}
+
+static int
+vhost_vdpa_map_notification_area(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ int nr_vrings, i, page_size = getpagesize();
+ uint16_t **notify_area;
+
+ nr_vrings = vhost_vdpa_get_nr_vrings(dev);
+
+ /* CQ is another vring */
+ nr_vrings++;
+
+ notify_area = malloc(nr_vrings * sizeof(*notify_area));
+ if (!notify_area) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to allocate notify area array", dev->path);
+ return -1;
+ }
+
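+	/* vhost-vdpa exposes one doorbell page per virtqueue; the mmap
+	 * offset selects the queue index being mapped.
+	 */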
+ for (i = 0; i < nr_vrings; i++) {
+ notify_area[i] = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED | MAP_FILE,
+ data->vhostfd, i * page_size);
+ if (notify_area[i] == MAP_FAILED) {
+ PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d",
+ dev->path, i);
+ i--;
+ goto map_err;
+ }
+ }
+ dev->notify_area = notify_area;
+
+ return 0;
+
+map_err:
+ for (; i >= 0; i--)
+ munmap(notify_area[i], page_size);
+ free(notify_area);
+
+ return -1;
+}
+
+struct virtio_user_backend_ops virtio_crypto_ops_vdpa = {
+ .setup = vhost_vdpa_setup,
+ .get_features = vhost_vdpa_get_features,
+ .cvq_enable = vhost_vdpa_cvq_enable,
+ .enable_qp = vhost_vdpa_enable_queue_pair,
+ .update_link_state = vhost_vdpa_update_link_state,
+ .map_notification_area = vhost_vdpa_map_notification_area,
+ .unmap_notification_area = vhost_vdpa_unmap_notification_area,
+};
diff --git a/drivers/crypto/virtio/virtio_user/virtio_user_dev.c b/drivers/crypto/virtio/virtio_user/virtio_user_dev.c
new file mode 100644
index 0000000000..ac53ca78d4
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/virtio_user_dev.c
@@ -0,0 +1,776 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <string.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <sys/eventfd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <pthread.h>
+
+#include <rte_alarm.h>
+#include <rte_string_fns.h>
+#include <rte_eal_memconfig.h>
+#include <rte_malloc.h>
+#include <rte_io.h>
+
+#include "virtio_user/vhost.h"
+#include "virtio_user/vhost_logs.h"
+#include "virtio_logs.h"
+
+#include "cryptodev_pmd.h"
+#include "virtio_crypto.h"
+#include "virtio_cvq.h"
+#include "virtio_user_dev.h"
+#include "virtqueue.h"
+
+#define VIRTIO_USER_MEM_EVENT_CLB_NAME "virtio_user_mem_event_clb"
+
+const char * const crypto_virtio_user_backend_strings[] = {
+ [VIRTIO_USER_BACKEND_UNKNOWN] = "VIRTIO_USER_BACKEND_UNKNOWN",
+ [VIRTIO_USER_BACKEND_VHOST_VDPA] = "VHOST_VDPA",
+};
+
+static int
+virtio_user_uninit_notify_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ if (dev->kickfds[queue_sel] >= 0) {
+ close(dev->kickfds[queue_sel]);
+ dev->kickfds[queue_sel] = -1;
+ }
+
+ if (dev->callfds[queue_sel] >= 0) {
+ close(dev->callfds[queue_sel]);
+ dev->callfds[queue_sel] = -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_init_notify_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+	/* An invalid flag would do, but some backends use the kickfd and
+	 * callfd as criteria to judge if the device is alive, so use real
+	 * eventfds.
+	 */
+ dev->callfds[queue_sel] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
+ if (dev->callfds[queue_sel] < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to setup callfd for queue %u: %s",
+ dev->path, queue_sel, strerror(errno));
+ return -1;
+ }
+ dev->kickfds[queue_sel] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
+ if (dev->kickfds[queue_sel] < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to setup kickfd for queue %u: %s",
+ dev->path, queue_sel, strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_destroy_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ struct vhost_vring_state state;
+ int ret;
+
+ state.index = queue_sel;
+ ret = dev->ops->get_vring_base(dev, &state);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to destroy queue %u", dev->path, queue_sel);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_create_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+	/* Of all per-virtqueue messages, make sure VHOST_SET_VRING_CALL comes
+	 * first, because vhost depends on this message to allocate the
+	 * virtqueue pair.
+	 */
+ struct vhost_vring_file file;
+ int ret;
+
+ file.index = queue_sel;
+ file.fd = dev->callfds[queue_sel];
+ ret = dev->ops->set_vring_call(dev, &file);
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to create queue %u", dev->path, queue_sel);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ int ret;
+ struct vhost_vring_file file;
+ struct vhost_vring_state state;
+ struct vring *vring = &dev->vrings.split[queue_sel];
+ struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
+ uint64_t desc_addr, avail_addr, used_addr;
+ struct vhost_vring_addr addr = {
+ .index = queue_sel,
+ .log_guest_addr = 0,
+ .flags = 0, /* disable log */
+ };
+
+ if (queue_sel == dev->max_queue_pairs) {
+ if (!dev->scvq) {
+ PMD_INIT_LOG(ERR, "(%s) Shadow control queue expected but missing",
+ dev->path);
+ goto err;
+ }
+
+ /* Use shadow control queue information */
+ vring = &dev->scvq->vq_split.ring;
+ pq_vring = &dev->scvq->vq_packed.ring;
+ }
+
+ if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
+ desc_addr = pq_vring->desc_iova;
+ avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
+ } else {
+ desc_addr = vring->desc_iova;
+ avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
+ used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
+ }
+
+ state.index = queue_sel;
+ state.num = vring->num;
+ ret = dev->ops->set_vring_num(dev, &state);
+ if (ret < 0)
+ goto err;
+
+ state.index = queue_sel;
+ state.num = 0; /* no reservation */
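+	/* For packed rings, bit 15 of the vring base encodes the initial
+	 * avail wrap counter (set to 1), per the vhost VRING_BASE layout.
+	 */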
+ if (dev->features & (1ULL << VIRTIO_F_RING_PACKED))
+ state.num |= (1 << 15);
+ ret = dev->ops->set_vring_base(dev, &state);
+ if (ret < 0)
+ goto err;
+
+ ret = dev->ops->set_vring_addr(dev, &addr);
+ if (ret < 0)
+ goto err;
+
+	/* Of all per-virtqueue messages, make sure VHOST_USER_SET_VRING_KICK
+	 * comes last, because vhost depends on this message to judge if
+	 * virtio is ready.
+	 */
+ file.index = queue_sel;
+ file.fd = dev->kickfds[queue_sel];
+ ret = dev->ops->set_vring_kick(dev, &file);
+ if (ret < 0)
+ goto err;
+
+ return 0;
+err:
+ PMD_INIT_LOG(ERR, "(%s) Failed to kick queue %u", dev->path, queue_sel);
+
+ return -1;
+}
+
+static int
+virtio_user_foreach_queue(struct virtio_user_dev *dev,
+ int (*fn)(struct virtio_user_dev *, uint32_t))
+{
+ uint32_t i, nr_vq;
+
+ nr_vq = dev->max_queue_pairs;
+
+ for (i = 0; i < nr_vq; i++)
+ if (fn(dev, i) < 0)
+ return -1;
+
+ return 0;
+}
+
+int
+crypto_virtio_user_dev_set_features(struct virtio_user_dev *dev)
+{
+ uint64_t features;
+ int ret = -1;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ /* Step 0: tell vhost to create queues */
+ if (virtio_user_foreach_queue(dev, virtio_user_create_queue) < 0)
+ goto error;
+
+ features = dev->features;
+
+ ret = dev->ops->set_features(dev, features);
+ if (ret < 0)
+ goto error;
+ PMD_DRV_LOG(INFO, "(%s) set features: 0x%" PRIx64, dev->path, features);
+error:
+ pthread_mutex_unlock(&dev->mutex);
+
+ return ret;
+}
+
+int
+crypto_virtio_user_start_device(struct virtio_user_dev *dev)
+{
+ int ret;
+
+ /*
+ * XXX workaround!
+ *
+ * We need to make sure that the locks will be
+ * taken in the correct order to avoid deadlocks.
+ *
+ * Before releasing this lock, this thread should
+ * not trigger any memory hotplug events.
+ *
+ * This is a temporary workaround, and should be
+ * replaced when we get proper supports from the
+ * memory subsystem in the future.
+ */
+ rte_mcfg_mem_read_lock();
+ pthread_mutex_lock(&dev->mutex);
+
+ /* Step 2: share memory regions */
+ ret = dev->ops->set_memory_table(dev);
+ if (ret < 0)
+ goto error;
+
+ /* Step 3: kick queues */
+ ret = virtio_user_foreach_queue(dev, virtio_user_kick_queue);
+ if (ret < 0)
+ goto error;
+
+ ret = virtio_user_kick_queue(dev, dev->max_queue_pairs);
+ if (ret < 0)
+ goto error;
+
+ /* Step 4: enable queues */
+ for (int i = 0; i < dev->max_queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 1);
+ if (ret < 0)
+ goto error;
+ }
+
+ dev->started = true;
+
+ pthread_mutex_unlock(&dev->mutex);
+ rte_mcfg_mem_read_unlock();
+
+ return 0;
+error:
+ pthread_mutex_unlock(&dev->mutex);
+ rte_mcfg_mem_read_unlock();
+
+ PMD_INIT_LOG(ERR, "(%s) Failed to start device", dev->path);
+
+ /* TODO: free resource here or caller to check */
+ return -1;
+}
+
+int crypto_virtio_user_stop_device(struct virtio_user_dev *dev)
+{
+ uint32_t i;
+ int ret;
+
+ pthread_mutex_lock(&dev->mutex);
+ if (!dev->started)
+ goto out;
+
+ for (i = 0; i < dev->max_queue_pairs; ++i) {
+ ret = dev->ops->enable_qp(dev, i, 0);
+ if (ret < 0)
+ goto err;
+ }
+
+ if (dev->scvq) {
+ ret = dev->ops->cvq_enable(dev, 0);
+ if (ret < 0)
+ goto err;
+ }
+
+ /* Stop the backend. */
+ if (virtio_user_foreach_queue(dev, virtio_user_destroy_queue) < 0)
+ goto err;
+
+ dev->started = false;
+
+out:
+ pthread_mutex_unlock(&dev->mutex);
+
+ return 0;
+err:
+ pthread_mutex_unlock(&dev->mutex);
+
+ PMD_INIT_LOG(ERR, "(%s) Failed to stop device", dev->path);
+
+ return -1;
+}
+
+static int
+virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_max_qp)
+{
+ int ret;
+
+ if (!dev->ops->get_config) {
+ dev->max_queue_pairs = user_max_qp;
+ return 0;
+ }
+
+ ret = dev->ops->get_config(dev, (uint8_t *)&dev->max_queue_pairs,
+ offsetof(struct virtio_crypto_config, max_dataqueues),
+ sizeof(uint16_t));
+ if (ret) {
+ /*
+ * We need to know the max queue pair from the device so that
+ * the control queue gets the right index.
+ */
+ dev->max_queue_pairs = 1;
+ PMD_DRV_LOG(ERR, "(%s) Failed to get max queue pairs from device", dev->path);
+
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_dev_init_cipher_services(struct virtio_user_dev *dev)
+{
+ struct virtio_crypto_config config;
+ int ret;
+
+ dev->crypto_services = RTE_BIT32(VIRTIO_CRYPTO_SERVICE_CIPHER);
+ dev->cipher_algo = 0;
+ dev->auth_algo = 0;
+ dev->akcipher_algo = 0;
+
+ if (!dev->ops->get_config)
+ return 0;
+
+ ret = dev->ops->get_config(dev, (uint8_t *)&config, 0, sizeof(config));
+ if (ret) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to get crypto config from device", dev->path);
+ return ret;
+ }
+
+ dev->crypto_services = config.crypto_services;
+ dev->cipher_algo = ((uint64_t)config.cipher_algo_h << 32) |
+ config.cipher_algo_l;
+ dev->hash_algo = config.hash_algo;
+ dev->auth_algo = ((uint64_t)config.mac_algo_h << 32) |
+ config.mac_algo_l;
+ dev->aead_algo = config.aead_algo;
+ dev->akcipher_algo = config.akcipher_algo;
+ return 0;
+}
+
+static int
+virtio_user_dev_init_notify(struct virtio_user_dev *dev)
+{
+ if (virtio_user_foreach_queue(dev, virtio_user_init_notify_queue) < 0)
+ goto err;
+
+ if (dev->device_features & (1ULL << VIRTIO_F_NOTIFICATION_DATA))
+ if (dev->ops->map_notification_area &&
+ dev->ops->map_notification_area(dev))
+ goto err;
+
+ return 0;
+err:
+ virtio_user_foreach_queue(dev, virtio_user_uninit_notify_queue);
+
+ return -1;
+}
+
+static void
+virtio_user_dev_uninit_notify(struct virtio_user_dev *dev)
+{
+ virtio_user_foreach_queue(dev, virtio_user_uninit_notify_queue);
+
+ if (dev->ops->unmap_notification_area && dev->notify_area)
+ dev->ops->unmap_notification_area(dev);
+}
+
+static void
+virtio_user_mem_event_cb(enum rte_mem_event type __rte_unused,
+ const void *addr,
+ size_t len __rte_unused,
+ void *arg)
+{
+ struct virtio_user_dev *dev = arg;
+ struct rte_memseg_list *msl;
+ uint16_t i;
+ int ret = 0;
+
+ /* ignore externally allocated memory */
+ msl = rte_mem_virt2memseg_list(addr);
+ if (msl->external)
+ return;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ if (dev->started == false)
+ goto exit;
+
+ /* Step 1: pause the active queues */
+ for (i = 0; i < dev->queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 0);
+ if (ret < 0)
+ goto exit;
+ }
+
+ /* Step 2: update memory regions */
+ ret = dev->ops->set_memory_table(dev);
+ if (ret < 0)
+ goto exit;
+
+ /* Step 3: resume the active queues */
+ for (i = 0; i < dev->queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 1);
+ if (ret < 0)
+ goto exit;
+ }
+
+exit:
+ pthread_mutex_unlock(&dev->mutex);
+
+ if (ret < 0)
+ PMD_DRV_LOG(ERR, "(%s) Failed to update memory table", dev->path);
+}
+
+static int
+virtio_user_dev_setup(struct virtio_user_dev *dev)
+{
+ if (dev->is_server) {
+ if (dev->backend_type != VIRTIO_USER_BACKEND_VHOST_USER) {
+ PMD_DRV_LOG(ERR, "Server mode only supports vhost-user!");
+ return -1;
+ }
+ }
+
+ switch (dev->backend_type) {
+ case VIRTIO_USER_BACKEND_VHOST_VDPA:
+ dev->ops = &virtio_ops_vdpa;
+ dev->ops->setup = virtio_crypto_ops_vdpa.setup;
+ dev->ops->get_features = virtio_crypto_ops_vdpa.get_features;
+ dev->ops->cvq_enable = virtio_crypto_ops_vdpa.cvq_enable;
+ dev->ops->enable_qp = virtio_crypto_ops_vdpa.enable_qp;
+ dev->ops->update_link_state = virtio_crypto_ops_vdpa.update_link_state;
+ dev->ops->map_notification_area = virtio_crypto_ops_vdpa.map_notification_area;
+ dev->ops->unmap_notification_area = virtio_crypto_ops_vdpa.unmap_notification_area;
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "(%s) Unknown backend type", dev->path);
+ return -1;
+ }
+
+ if (dev->ops->setup(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to setup backend", dev->path);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_alloc_vrings(struct virtio_user_dev *dev)
+{
+ int i, size, nr_vrings;
+ bool packed_ring = !!(dev->device_features & (1ull << VIRTIO_F_RING_PACKED));
+
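+ /* One vring per data queue, plus one for the control queue. */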
+ nr_vrings = dev->max_queue_pairs + 1;
+
+ dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
+ if (!dev->callfds) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
+ return -1;
+ }
+
+ dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
+ if (!dev->kickfds) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
+ goto free_callfds;
+ }
+
+ for (i = 0; i < nr_vrings; i++) {
+ dev->callfds[i] = -1;
+ dev->kickfds[i] = -1;
+ }
+
+ if (packed_ring)
+ size = sizeof(*dev->vrings.packed);
+ else
+ size = sizeof(*dev->vrings.split);
+ dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
+ if (!dev->vrings.ptr) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
+ goto free_kickfds;
+ }
+
+ if (packed_ring) {
+ dev->packed_queues = rte_zmalloc("virtio_user_dev",
+ nr_vrings * sizeof(*dev->packed_queues), 0);
+ if (!dev->packed_queues) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc packed queues metadata",
+ dev->path);
+ goto free_vrings;
+ }
+ }
+
+ dev->qp_enabled = rte_zmalloc("virtio_user_dev",
+ nr_vrings * sizeof(*dev->qp_enabled), 0);
+ if (!dev->qp_enabled) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc QP enable states", dev->path);
+ goto free_packed_queues;
+ }
+
+ return 0;
+
+free_packed_queues:
+ rte_free(dev->packed_queues);
+ dev->packed_queues = NULL;
+free_vrings:
+ rte_free(dev->vrings.ptr);
+ dev->vrings.ptr = NULL;
+free_kickfds:
+ rte_free(dev->kickfds);
+ dev->kickfds = NULL;
+free_callfds:
+ rte_free(dev->callfds);
+ dev->callfds = NULL;
+
+ return -1;
+}
+
+static void
+virtio_user_free_vrings(struct virtio_user_dev *dev)
+{
+ rte_free(dev->qp_enabled);
+ dev->qp_enabled = NULL;
+ rte_free(dev->packed_queues);
+ dev->packed_queues = NULL;
+ rte_free(dev->vrings.ptr);
+ dev->vrings.ptr = NULL;
+ rte_free(dev->kickfds);
+ dev->kickfds = NULL;
+ rte_free(dev->callfds);
+ dev->callfds = NULL;
+}
+
+#define VIRTIO_USER_SUPPORTED_FEATURES \
+ (1ULL << VIRTIO_CRYPTO_SERVICE_CIPHER | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_HASH | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_AKCIPHER | \
+ 1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
+
+int
+crypto_virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
+ int queue_size, int server)
+{
+ uint64_t backend_features;
+
+ pthread_mutex_init(&dev->mutex, NULL);
+ strlcpy(dev->path, path, PATH_MAX);
+
+ dev->started = 0;
+ dev->queue_pairs = 1; /* mq disabled by default */
+ dev->max_queue_pairs = queues; /* initialize to user requested value for kernel backend */
+ dev->queue_size = queue_size;
+ dev->is_server = server;
+ dev->frontend_features = 0;
+ dev->unsupported_features = 0;
+ dev->backend_type = VIRTIO_USER_BACKEND_VHOST_VDPA;
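+ /* Only the vhost-vDPA backend is wired up for crypto so far. */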
+ dev->hw.modern = 1;
+
+ if (virtio_user_dev_setup(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) backend set up fails", dev->path);
+ return -1;
+ }
+
+ if (dev->ops->set_owner(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to set backend owner", dev->path);
+ goto destroy;
+ }
+
+ if (dev->ops->get_backend_features(&backend_features) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get backend features", dev->path);
+ goto destroy;
+ }
+
+ dev->unsupported_features = ~(VIRTIO_USER_SUPPORTED_FEATURES | backend_features);
+
+ if (dev->ops->get_features(dev, &dev->device_features) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get device features", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_max_queue_pairs(dev, queues)) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get max queue pairs", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_cipher_services(dev)) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get cipher services", dev->path);
+ goto destroy;
+ }
+
+ dev->frontend_features &= ~dev->unsupported_features;
+ dev->device_features &= ~dev->unsupported_features;
+
+ if (virtio_user_alloc_vrings(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to allocate vring metadata", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_notify(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
+ goto free_vrings;
+ }
+
+ if (rte_mem_event_callback_register(VIRTIO_USER_MEM_EVENT_CLB_NAME,
+ virtio_user_mem_event_cb, dev)) {
+ if (rte_errno != ENOTSUP) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to register mem event callback",
+ dev->path);
+ goto notify_uninit;
+ }
+ }
+
+ return 0;
+
+notify_uninit:
+ virtio_user_dev_uninit_notify(dev);
+free_vrings:
+ virtio_user_free_vrings(dev);
+destroy:
+ dev->ops->destroy(dev);
+
+ return -1;
+}
+
+void
+crypto_virtio_user_dev_uninit(struct virtio_user_dev *dev)
+{
+ crypto_virtio_user_stop_device(dev);
+
+ rte_mem_event_callback_unregister(VIRTIO_USER_MEM_EVENT_CLB_NAME, dev);
+
+ virtio_user_dev_uninit_notify(dev);
+
+ virtio_user_free_vrings(dev);
+
+ if (dev->is_server)
+ unlink(dev->path);
+
+ dev->ops->destroy(dev);
+}
+
+#define CVQ_MAX_DATA_DESCS 32
+
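+/* Translate an IOVA to a host virtual address: in IOVA-as-VA mode the two
+ * are identical, otherwise look the address up in the memseg lists.
+ */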
+static inline void *
+virtio_user_iova2virt(struct virtio_user_dev *dev __rte_unused, rte_iova_t iova)
+{
+ if (rte_eal_iova_mode() == RTE_IOVA_VA)
+ return (void *)(uintptr_t)iova;
+ else
+ return rte_mem_iova2virt(iova);
+}
+
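+/* A packed ring descriptor is available when its AVAIL flag matches the
+ * driver's wrap counter and its USED flag does not (VIRTIO 1.1 packed
+ * virtqueues).
+ */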
+static inline int
+desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
+{
+ uint16_t flags = rte_atomic_load_explicit(&desc->flags, rte_memory_order_acquire);
+
+ return wrap_counter == !!(flags & VRING_PACKED_DESC_F_AVAIL) &&
+ wrap_counter != !!(flags & VRING_PACKED_DESC_F_USED);
+}
+
+int
+crypto_virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status)
+{
+ int ret;
+
+ pthread_mutex_lock(&dev->mutex);
+ dev->status = status;
+ ret = dev->ops->set_status(dev, status);
+ if (ret && ret != -ENOTSUP)
+ PMD_INIT_LOG(ERR, "(%s) Failed to set backend status", dev->path);
+
+ pthread_mutex_unlock(&dev->mutex);
+ return ret;
+}
+
+int
+crypto_virtio_user_dev_update_status(struct virtio_user_dev *dev)
+{
+ int ret;
+ uint8_t status;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ ret = dev->ops->get_status(dev, &status);
+ if (!ret) {
+ dev->status = status;
+ PMD_INIT_LOG(DEBUG, "Updated Device Status(0x%08x):"
+ "\t-RESET: %u "
+ "\t-ACKNOWLEDGE: %u "
+ "\t-DRIVER: %u "
+ "\t-DRIVER_OK: %u "
+ "\t-FEATURES_OK: %u "
+ "\t-DEVICE_NEED_RESET: %u "
+ "\t-FAILED: %u",
+ dev->status,
+ (dev->status == VIRTIO_CONFIG_STATUS_RESET),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_ACK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DRIVER),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DRIVER_OK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_FEATURES_OK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DEV_NEED_RESET),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_FAILED));
+ } else if (ret != -ENOTSUP) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get backend status", dev->path);
+ }
+
+ pthread_mutex_unlock(&dev->mutex);
+ return ret;
+}
+
+int
+crypto_virtio_user_dev_update_link_state(struct virtio_user_dev *dev)
+{
+ if (dev->ops->update_link_state)
+ return dev->ops->update_link_state(dev);
+
+ return 0;
+}
diff --git a/drivers/crypto/virtio/virtio_user/virtio_user_dev.h b/drivers/crypto/virtio/virtio_user/virtio_user_dev.h
new file mode 100644
index 0000000000..ef648fd14b
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/virtio_user_dev.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell.
+ */
+
+#ifndef _VIRTIO_USER_DEV_H
+#define _VIRTIO_USER_DEV_H
+
+#include <limits.h>
+#include <stdbool.h>
+
+#include "../virtio_pci.h"
+#include "../virtio_ring.h"
+
+extern struct virtio_user_backend_ops virtio_crypto_ops_vdpa;
+
+enum virtio_user_backend_type {
+ VIRTIO_USER_BACKEND_UNKNOWN,
+ VIRTIO_USER_BACKEND_VHOST_USER,
+ VIRTIO_USER_BACKEND_VHOST_VDPA,
+};
+
+struct virtio_user_queue {
+ uint16_t used_idx;
+ bool avail_wrap_counter;
+ bool used_wrap_counter;
+};
+
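+/* The leading members below (hw padded to 256 bytes, backend_data,
+ * notify_area, path, ...) must stay at fixed offsets: the common
+ * vhost-vDPA code relies on a mirror of this layout.
+ */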
+struct virtio_user_dev {
+ union {
+ struct virtio_crypto_hw hw;
+ uint8_t dummy[256];
+ };
+
+ void *backend_data;
+ uint16_t **notify_area;
+ char path[PATH_MAX];
+ bool hw_cvq;
+ uint16_t max_queue_pairs;
+ uint64_t device_features; /* supported features by device */
+ bool *qp_enabled;
+
+ enum virtio_user_backend_type backend_type;
+ bool is_server; /* server or client mode */
+
+ int *callfds;
+ int *kickfds;
+ uint16_t queue_pairs;
+ uint32_t queue_size;
+ uint64_t features; /* features negotiated with the driver,
+ * to be synced with the device
+ */
+ uint64_t frontend_features; /* enabled frontend features */
+ uint64_t unsupported_features; /* unsupported features mask */
+ uint8_t status;
+ uint32_t crypto_status;
+ uint32_t crypto_services;
+ uint64_t cipher_algo;
+ uint32_t hash_algo;
+ uint64_t auth_algo;
+ uint32_t aead_algo;
+ uint32_t akcipher_algo;
+
+ union {
+ void *ptr;
+ struct vring *split;
+ struct vring_packed *packed;
+ } vrings;
+
+ struct virtio_user_queue *packed_queues;
+
+ struct virtio_user_backend_ops *ops;
+ pthread_mutex_t mutex;
+ bool started;
+
+ struct virtqueue *scvq;
+};
+
+int crypto_virtio_user_dev_set_features(struct virtio_user_dev *dev);
+int crypto_virtio_user_start_device(struct virtio_user_dev *dev);
+int crypto_virtio_user_stop_device(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
+ int queue_size, int server);
+void crypto_virtio_user_dev_uninit(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status);
+int crypto_virtio_user_dev_update_status(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_update_link_state(struct virtio_user_dev *dev);
+extern const char * const crypto_virtio_user_backend_strings[];
+#endif
diff --git a/drivers/crypto/virtio/virtio_user_cryptodev.c b/drivers/crypto/virtio/virtio_user_cryptodev.c
new file mode 100644
index 0000000000..606639b872
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user_cryptodev.c
@@ -0,0 +1,587 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <unistd.h>
+#include <fcntl.h>
+
+#include <rte_malloc.h>
+#include <rte_kvargs.h>
+#include <bus_vdev_driver.h>
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include <rte_alarm.h>
+#include <rte_cycles.h>
+#include <rte_io.h>
+
+#include "virtio_user/virtio_user_dev.h"
+#include "virtio_user/vhost.h"
+#include "virtio_user/vhost_logs.h"
+#include "virtio_cryptodev.h"
+#include "virtio_logs.h"
+#include "virtio_pci.h"
+#include "virtqueue.h"
+
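+/* hw is the first member of struct virtio_user_dev, so container_of()
+ * recovers the device from the hw pointer the PMD passes around.
+ */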
+#define virtio_user_get_dev(hwp) container_of(hwp, struct virtio_user_dev, hw)
+
+static void
+virtio_user_read_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+ void *dst, int length __rte_unused)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (offset == offsetof(struct virtio_crypto_config, status)) {
+ crypto_virtio_user_dev_update_link_state(dev);
+ *(uint32_t *)dst = dev->crypto_status;
+ } else if (offset == offsetof(struct virtio_crypto_config, max_dataqueues))
+ *(uint16_t *)dst = dev->max_queue_pairs;
+ else if (offset == offsetof(struct virtio_crypto_config, crypto_services))
+ *(uint32_t *)dst = dev->crypto_services;
+ else if (offset == offsetof(struct virtio_crypto_config, cipher_algo_l))
+ *(uint32_t *)dst = dev->cipher_algo & 0xFFFF;
+ else if (offset == offsetof(struct virtio_crypto_config, cipher_algo_h))
+ *(uint32_t *)dst = dev->cipher_algo >> 32;
+ else if (offset == offsetof(struct virtio_crypto_config, hash_algo))
+ *(uint32_t *)dst = dev->hash_algo;
+ else if (offset == offsetof(struct virtio_crypto_config, mac_algo_l))
+ *(uint32_t *)dst = dev->auth_algo & 0xFFFF;
+ else if (offset == offsetof(struct virtio_crypto_config, mac_algo_h))
+ *(uint32_t *)dst = dev->auth_algo >> 32;
+ else if (offset == offsetof(struct virtio_crypto_config, aead_algo))
+ *(uint32_t *)dst = dev->aead_algo;
+ else if (offset == offsetof(struct virtio_crypto_config, akcipher_algo))
+ *(uint32_t *)dst = dev->akcipher_algo;
+}
+
+static void
+virtio_user_write_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+ const void *src, int length)
+{
+ RTE_SET_USED(hw);
+ RTE_SET_USED(src);
+
+ PMD_DRV_LOG(ERR, "not supported offset=%zu, len=%d",
+ offset, length);
+}
+
+static void
+virtio_user_reset(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (dev->status & VIRTIO_CONFIG_STATUS_DRIVER_OK)
+ crypto_virtio_user_stop_device(dev);
+}
+
+static void
+virtio_user_set_status(struct virtio_crypto_hw *hw, uint8_t status)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+ uint8_t old_status = dev->status;
+
+ if (status & VIRTIO_CONFIG_STATUS_FEATURES_OK &&
+ ~old_status & VIRTIO_CONFIG_STATUS_FEATURES_OK) {
+ crypto_virtio_user_dev_set_features(dev);
+ /* Feature negotiation should only be done once, at probe time,
+ * so skip any further requests here.
+ */
+ dev->status |= VIRTIO_CONFIG_STATUS_FEATURES_OK;
+ }
+
+ if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK) {
+ if (crypto_virtio_user_start_device(dev)) {
+ crypto_virtio_user_dev_update_status(dev);
+ return;
+ }
+ } else if (status == VIRTIO_CONFIG_STATUS_RESET) {
+ virtio_user_reset(hw);
+ }
+
+ crypto_virtio_user_dev_set_status(dev, status);
+ if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK && dev->scvq) {
+ if (dev->ops->cvq_enable(dev, 1) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to start ctrlq", dev->path);
+ crypto_virtio_user_dev_update_status(dev);
+ return;
+ }
+ }
+}
+
+static uint8_t
+virtio_user_get_status(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ crypto_virtio_user_dev_update_status(dev);
+
+ return dev->status;
+}
+
+#define VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES \
+ (1ULL << VIRTIO_CRYPTO_SERVICE_CIPHER | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_AKCIPHER | \
+ 1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_RING_F_INDIRECT_DESC | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
+
+static uint64_t
+virtio_user_get_features(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ /* unmask feature bits defined in vhost user protocol */
+ return (dev->device_features | dev->frontend_features) &
+ VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES;
+}
+
+static void
+virtio_user_set_features(struct virtio_crypto_hw *hw, uint64_t features)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ dev->features = features & (dev->device_features | dev->frontend_features);
+}
+
+static uint8_t
+virtio_user_get_isr(struct virtio_crypto_hw *hw __rte_unused)
+{
+ /* Queue interrupts and the config interrupt are separated in
+ * virtio-user; here we only report config change.
+ */
+ return VIRTIO_PCI_CAP_ISR_CFG;
+}
+
+static uint16_t
+virtio_user_set_config_irq(struct virtio_crypto_hw *hw __rte_unused,
+ uint16_t vec __rte_unused)
+{
+ return 0;
+}
+
+static uint16_t
+virtio_user_set_queue_irq(struct virtio_crypto_hw *hw __rte_unused,
+ struct virtqueue *vq __rte_unused,
+ uint16_t vec)
+{
+ /* pretend we have done that */
+ return vec;
+}
+
+/* This function returns the queue size, i.e. the number of descriptors, of a
+ * specified queue. It differs from VHOST_USER_GET_QUEUE_NUM, which returns
+ * the max number of supported queues.
+ */
+static uint16_t
+virtio_user_get_queue_num(struct virtio_crypto_hw *hw, uint16_t queue_id __rte_unused)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ /* Currently, each queue has the same queue size */
+ return dev->queue_size;
+}
+
+static void
+virtio_user_setup_queue_packed(struct virtqueue *vq,
+ struct virtio_user_dev *dev)
+{
+ uint16_t queue_idx = vq->vq_queue_index;
+ struct vring_packed *vring;
+ uint64_t desc_addr;
+ uint64_t avail_addr;
+ uint64_t used_addr;
+ uint16_t i;
+
+ vring = &dev->vrings.packed[queue_idx];
+ desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
+ avail_addr = desc_addr + vq->vq_nentries *
+ sizeof(struct vring_packed_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr +
+ sizeof(struct vring_packed_desc_event),
+ VIRTIO_VRING_ALIGN);
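+ /* driver/device point at the event suppression areas that follow
+ * the descriptor ring in a packed virtqueue.
+ */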
+ vring->num = vq->vq_nentries;
+ vring->desc_iova = vq->vq_ring_mem;
+ vring->desc = (void *)(uintptr_t)desc_addr;
+ vring->driver = (void *)(uintptr_t)avail_addr;
+ vring->device = (void *)(uintptr_t)used_addr;
+ dev->packed_queues[queue_idx].avail_wrap_counter = true;
+ dev->packed_queues[queue_idx].used_wrap_counter = true;
+ dev->packed_queues[queue_idx].used_idx = 0;
+
+ for (i = 0; i < vring->num; i++)
+ vring->desc[i].flags = 0;
+}
+
+static void
+virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
+{
+ uint16_t queue_idx = vq->vq_queue_index;
+ uint64_t desc_addr, avail_addr, used_addr;
+
+ desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+ ring[vq->vq_nentries]),
+ VIRTIO_VRING_ALIGN);
+
+ dev->vrings.split[queue_idx].num = vq->vq_nentries;
+ dev->vrings.split[queue_idx].desc_iova = vq->vq_ring_mem;
+ dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
+ dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
+ dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
+}
+
+static int
+virtio_user_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (vtpci_with_packed_queue(hw))
+ virtio_user_setup_queue_packed(vq, dev);
+ else
+ virtio_user_setup_queue_split(vq, dev);
+
+ if (dev->notify_area)
+ vq->notify_addr = dev->notify_area[vq->vq_queue_index];
+
+ if (virtcrypto_cq_to_vq(hw->cvq) == vq)
+ dev->scvq = virtcrypto_cq_to_vq(hw->cvq);
+
+ return 0;
+}
+
+static void
+virtio_user_del_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ /* For legacy devices, writing 0 to the VIRTIO_PCI_QUEUE_PFN port makes
+ * QEMU stop the ioeventfds and reset the device status.
+ * For modern devices, queue desc, avail and used in the PCI BAR are
+ * set to 0 and no further action is taken by QEMU.
+ *
+ * Here we only care about what information to deliver to the
+ * backend, so this is a no-op for now.
+ */
+
+ RTE_SET_USED(hw);
+ RTE_SET_USED(vq);
+}
+
+static void
+virtio_user_notify_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+ uint64_t notify_data = 1;
+
+ if (!dev->notify_area) {
+ if (write(dev->kickfds[vq->vq_queue_index], &notify_data,
+ sizeof(notify_data)) < 0)
+ PMD_DRV_LOG(ERR, "failed to kick backend: %s",
+ strerror(errno));
+ return;
+ } else if (!vtpci_with_feature(hw, VIRTIO_F_NOTIFICATION_DATA)) {
+ rte_write16(vq->vq_queue_index, vq->notify_addr);
+ return;
+ }
+
+ if (vtpci_with_packed_queue(hw)) {
+ /* Bit[0:15]: vq queue index
+ * Bit[16:30]: avail index
+ * Bit[31]: avail wrap counter
+ */
+ notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags &
+ VRING_PACKED_DESC_F_AVAIL)) << 31) |
+ ((uint32_t)vq->vq_avail_idx << 16) |
+ vq->vq_queue_index;
+ } else {
+ /* Bit[0:15]: vq queue index
+ * Bit[16:31]: avail index
+ */
+ notify_data = ((uint32_t)vq->vq_avail_idx << 16) |
+ vq->vq_queue_index;
+ }
+ rte_write32(notify_data, vq->notify_addr);
+}
+
+const struct virtio_pci_ops crypto_virtio_user_ops = {
+ .read_dev_cfg = virtio_user_read_dev_config,
+ .write_dev_cfg = virtio_user_write_dev_config,
+ .reset = virtio_user_reset,
+ .get_status = virtio_user_get_status,
+ .set_status = virtio_user_set_status,
+ .get_features = virtio_user_get_features,
+ .set_features = virtio_user_set_features,
+ .get_isr = virtio_user_get_isr,
+ .set_config_irq = virtio_user_set_config_irq,
+ .set_queue_irq = virtio_user_set_queue_irq,
+ .get_queue_num = virtio_user_get_queue_num,
+ .setup_queue = virtio_user_setup_queue,
+ .del_queue = virtio_user_del_queue,
+ .notify_queue = virtio_user_notify_queue,
+};
+
+static const char * const valid_args[] = {
+#define VIRTIO_USER_ARG_QUEUES_NUM "queues"
+ VIRTIO_USER_ARG_QUEUES_NUM,
+#define VIRTIO_USER_ARG_QUEUE_SIZE "queue_size"
+ VIRTIO_USER_ARG_QUEUE_SIZE,
+#define VIRTIO_USER_ARG_PATH "path"
+ VIRTIO_USER_ARG_PATH,
+#define VIRTIO_USER_ARG_SERVER_MODE "server"
+ VIRTIO_USER_ARG_SERVER_MODE,
+ NULL
+};
+
+#define VIRTIO_USER_DEF_Q_NUM 1
+#define VIRTIO_USER_DEF_Q_SZ 256
+#define VIRTIO_USER_DEF_SERVER_MODE 0
+
+static int
+get_string_arg(const char *key __rte_unused,
+ const char *value, void *extra_args)
+{
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ *(char **)extra_args = strdup(value);
+
+ if (!*(char **)extra_args)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int
+get_integer_arg(const char *key __rte_unused,
+ const char *value, void *extra_args)
+{
+ uint64_t integer = 0;
+ if (!value || !extra_args)
+ return -EINVAL;
+ errno = 0;
+ integer = strtoull(value, NULL, 0);
+ /* extra_args keeps default value, it should be replaced
+ * only in case of successful parsing of the 'value' arg
+ */
+ if (errno == 0)
+ *(uint64_t *)extra_args = integer;
+ return -errno;
+}
+
+static struct rte_cryptodev *
+virtio_user_cryptodev_alloc(struct rte_vdev_device *vdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .private_data_size = sizeof(struct virtio_user_dev),
+ };
+ struct rte_cryptodev_data *data;
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ struct virtio_crypto_hw *hw;
+
+ init_params.socket_id = vdev->device.numa_node;
+ cryptodev = rte_cryptodev_pmd_create(vdev->device.name, &vdev->device, &init_params);
+ if (cryptodev == NULL) {
+ PMD_INIT_LOG(ERR, "failed to create cryptodev vdev");
+ return NULL;
+ }
+
+ data = cryptodev->data;
+ dev = data->dev_private;
+ hw = &dev->hw;
+
+ hw->dev_id = data->dev_id;
+ VTPCI_OPS(hw) = &crypto_virtio_user_ops;
+
+ return cryptodev;
+}
+
+static void
+virtio_user_cryptodev_free(struct rte_cryptodev *cryptodev)
+{
+ rte_cryptodev_pmd_destroy(cryptodev);
+}
+
+static int
+virtio_user_pmd_probe(struct rte_vdev_device *vdev)
+{
+ uint64_t server_mode = VIRTIO_USER_DEF_SERVER_MODE;
+ uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
+ uint64_t queues = VIRTIO_USER_DEF_Q_NUM;
+ struct rte_cryptodev *cryptodev = NULL;
+ struct rte_kvargs *kvlist = NULL;
+ struct virtio_user_dev *dev;
+ char *path = NULL;
+ int ret = -1;
+
+ kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "error when parsing param");
+ goto end;
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_PATH) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_PATH,
+ &get_string_arg, &path) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_PATH);
+ goto end;
+ }
+ } else {
+ PMD_INIT_LOG(ERR, "arg %s is mandatory for virtio_user",
+ VIRTIO_USER_ARG_PATH);
+ goto end;
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUES_NUM) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUES_NUM,
+ &get_integer_arg, &queues) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_QUEUES_NUM);
+ goto end;
+ }
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE,
+ &get_integer_arg, &queue_size) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_QUEUE_SIZE);
+ goto end;
+ }
+ }
+
+ cryptodev = virtio_user_cryptodev_alloc(vdev);
+ if (!cryptodev) {
+ PMD_INIT_LOG(ERR, "virtio_user fails to alloc device");
+ goto end;
+ }
+
+ dev = cryptodev->data->dev_private;
+ if (crypto_virtio_user_dev_init(dev, path, queues, queue_size,
+ server_mode) < 0) {
+ PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
+ virtio_user_cryptodev_free(cryptodev);
+ goto end;
+ }
+
+ if (crypto_virtio_dev_init(cryptodev, VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES,
+ NULL) < 0) {
+ PMD_INIT_LOG(ERR, "crypto_virtio_dev_init fails");
+ crypto_virtio_user_dev_uninit(dev);
+ virtio_user_cryptodev_free(cryptodev);
+ goto end;
+ }
+
+ rte_cryptodev_pmd_probing_finish(cryptodev);
+
+ ret = 0;
+end:
+ rte_kvargs_free(kvlist);
+ free(path);
+ return ret;
+}
+
+static int
+virtio_user_pmd_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_cryptodev *cryptodev;
+ const char *name;
+ int devid;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ PMD_DRV_LOG(INFO, "Removing %s", name);
+
+ devid = rte_cryptodev_get_dev_id(name);
+ if (devid < 0)
+ return -EINVAL;
+
+ rte_cryptodev_stop(devid);
+
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ if (rte_cryptodev_pmd_destroy(cryptodev) < 0) {
+ PMD_DRV_LOG(ERR, "Failed to remove %s", name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int virtio_user_pmd_dma_map(struct rte_vdev_device *vdev, void *addr,
+ uint64_t iova, size_t len)
+{
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ const char *name;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -EINVAL;
+
+ dev = cryptodev->data->dev_private;
+
+ if (dev->ops->dma_map)
+ return dev->ops->dma_map(dev, addr, iova, len);
+
+ return 0;
+}
+
+static int virtio_user_pmd_dma_unmap(struct rte_vdev_device *vdev, void *addr,
+ uint64_t iova, size_t len)
+{
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ const char *name;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -EINVAL;
+
+ dev = cryptodev->data->dev_private;
+
+ if (dev->ops->dma_unmap)
+ return dev->ops->dma_unmap(dev, addr, iova, len);
+
+ return 0;
+}
+
+static struct rte_vdev_driver virtio_user_driver = {
+ .probe = virtio_user_pmd_probe,
+ .remove = virtio_user_pmd_remove,
+ .dma_map = virtio_user_pmd_dma_map,
+ .dma_unmap = virtio_user_pmd_dma_unmap,
+};
+
+static struct cryptodev_driver virtio_crypto_drv;
+
+RTE_PMD_REGISTER_VDEV(crypto_virtio_user, virtio_user_driver);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv,
+ virtio_user_driver.driver,
+ cryptodev_virtio_driver_id);
+RTE_PMD_REGISTER_ALIAS(crypto_virtio_user, crypto_virtio);
+RTE_PMD_REGISTER_PARAM_STRING(crypto_virtio_user,
+ "path=<path> "
+ "queues=<int> "
+ "queue_size=<int>");
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v2 4/4] test/crypto: test virtio_crypto_user PMD
2025-01-07 18:44 ` [v2 0/4] crypto/virtio: add vDPA backend support Gowrishankar Muthukrishnan
` (2 preceding siblings ...)
2025-01-07 18:44 ` [v2 3/4] crypto/virtio: add vhost backend to virtio_user Gowrishankar Muthukrishnan
@ 2025-01-07 18:44 ` Gowrishankar Muthukrishnan
3 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-07 18:44 UTC (permalink / raw)
To: dev, Akhil Goyal, Maxime Coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, David Marchand, Gowrishankar Muthukrishnan
Reuse virtio_crypto tests for testing virtio_crypto_user PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
app/test/test_cryptodev.c | 7 +++++++
app/test/test_cryptodev.h | 1 +
app/test/test_cryptodev_asym.c | 15 +++++++++++++++
3 files changed, 23 insertions(+)
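
For reference, a minimal sketch of running the suite (the vhost-vDPA
device path is hypothetical):

  dpdk-test --vdev="crypto_virtio_user,path=/dev/vhost-vdpa-0"
  RTE>> cryptodev_virtio_user_autotest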
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 7cddb1517c..0ba2281b87 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -19737,6 +19737,12 @@ test_cryptodev_virtio(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_VIRTIO_PMD));
}
+static int
+test_cryptodev_virtio_user(void)
+{
+ return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_VIRTIO_USER_PMD));
+}
+
static int
test_cryptodev_aesni_mb(void)
{
@@ -20074,6 +20080,7 @@ REGISTER_DRIVER_TEST(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
REGISTER_DRIVER_TEST(cryptodev_ccp_autotest, test_cryptodev_ccp);
REGISTER_DRIVER_TEST(cryptodev_uadk_autotest, test_cryptodev_uadk);
REGISTER_DRIVER_TEST(cryptodev_virtio_autotest, test_cryptodev_virtio);
+REGISTER_DRIVER_TEST(cryptodev_virtio_user_autotest, test_cryptodev_virtio_user);
REGISTER_DRIVER_TEST(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_DRIVER_TEST(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_DRIVER_TEST(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index bb54a33d62..f6c7478f19 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -64,6 +64,7 @@
#define CRYPTODEV_NAME_MVSAM_PMD crypto_mvsam
#define CRYPTODEV_NAME_CCP_PMD crypto_ccp
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
+#define CRYPTODEV_NAME_VIRTIO_USER_PMD crypto_virtio_user
#define CRYPTODEV_NAME_OCTEONTX_SYM_PMD crypto_octeontx
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index ec7ab05a2d..e3e202a87c 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -4092,7 +4092,22 @@ test_cryptodev_virtio_asym(void)
return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
}
+static int
+test_cryptodev_virtio_user_asym(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_VIRTIO_USER_PMD));
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "virtio user PMD must be loaded.\n");
+ return TEST_FAILED;
+ }
+
+ /* Use test suite registered for crypto_virtio_user PMD */
+ return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
+}
+
REGISTER_DRIVER_TEST(cryptodev_virtio_asym_autotest, test_cryptodev_virtio_asym);
+REGISTER_DRIVER_TEST(cryptodev_virtio_user_asym_autotest, test_cryptodev_virtio_user_asym);
REGISTER_DRIVER_TEST(cryptodev_openssl_asym_autotest, test_cryptodev_openssl_asym);
REGISTER_DRIVER_TEST(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
--
2.25.1
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [v2 1/2] vhost: add asymmetric RSA support
2025-01-07 18:02 ` [v2 1/2] vhost: add asymmetric " Gowrishankar Muthukrishnan
@ 2025-01-29 16:07 ` Maxime Coquelin
0 siblings, 0 replies; 58+ messages in thread
From: Maxime Coquelin @ 2025-01-29 16:07 UTC (permalink / raw)
To: Gowrishankar Muthukrishnan, dev, Akhil Goyal, Chenbo Xia,
Fan Zhang, Jay Zhou
Cc: jerinj, anoobj
On 1/7/25 7:02 PM, Gowrishankar Muthukrishnan wrote:
> Support asymmetric RSA crypto operations in vhost-user.
>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> ---
> Depends-on: series-34291 ("crypto/virtio: add RSA support")
>
> lib/vhost/vhost_crypto.c | 504 ++++++++++++++++++++++++++++++++++++---
> lib/vhost/vhost_user.h | 33 ++-
> 2 files changed, 498 insertions(+), 39 deletions(-)
>
I'm not a crypto expert, so a second pair of eyes would be welcome.
I did not find any obvious bug while reviewing this patch:
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [v2 2/2] examples/vhost_crypto: add asymmetric support
2025-01-07 18:02 ` [v2 2/2] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
@ 2025-01-29 16:13 ` Maxime Coquelin
2025-01-30 9:29 ` [EXTERNAL] " Gowrishankar Muthukrishnan
0 siblings, 1 reply; 58+ messages in thread
From: Maxime Coquelin @ 2025-01-29 16:13 UTC (permalink / raw)
To: Gowrishankar Muthukrishnan, dev, Akhil Goyal, Chenbo Xia,
Fan Zhang, Jay Zhou
Cc: jerinj, anoobj
On 1/7/25 7:02 PM, Gowrishankar Muthukrishnan wrote:
> Add asymmetric support.
>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> ---
> examples/vhost_crypto/main.c | 54 ++++++++++++++++++++++++++----------
> 1 file changed, 40 insertions(+), 14 deletions(-)
>
> diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
> index 558c09a60f..8bdfc40c4b 100644
> --- a/examples/vhost_crypto/main.c
> +++ b/examples/vhost_crypto/main.c
> @@ -59,6 +59,7 @@ struct vhost_crypto_options {
> uint32_t nb_los;
> uint32_t zero_copy;
> uint32_t guest_polling;
> + bool asymmetric_crypto;
> } options;
>
> enum {
> @@ -70,6 +71,8 @@ enum {
> OPT_ZERO_COPY_NUM,
> #define OPT_POLLING "guest-polling"
> OPT_POLLING_NUM,
> +#define OPT_ASYM "asymmetric-crypto"
> + OPT_ASYM_NUM,
> };
>
> #define NB_SOCKET_FIELDS (2)
> @@ -202,9 +205,10 @@ vhost_crypto_usage(const char *prgname)
> " --%s <lcore>,SOCKET-FILE-PATH\n"
> " --%s (lcore,cdev_id,queue_id)[,(lcore,cdev_id,queue_id)]\n"
> " --%s: zero copy\n"
> - " --%s: guest polling\n",
> + " --%s: guest polling\n"
> + " --%s: asymmetric crypto\n",
> prgname, OPT_SOCKET_FILE, OPT_CONFIG,
> - OPT_ZERO_COPY, OPT_POLLING);
> + OPT_ZERO_COPY, OPT_POLLING, OPT_ASYM);
> }
>
> static int
> @@ -223,6 +227,8 @@ vhost_crypto_parse_args(int argc, char **argv)
> NULL, OPT_ZERO_COPY_NUM},
> {OPT_POLLING, no_argument,
> NULL, OPT_POLLING_NUM},
> + {OPT_ASYM, no_argument,
> + NULL, OPT_ASYM_NUM},
> {NULL, 0, 0, 0}
> };
>
> @@ -262,6 +268,10 @@ vhost_crypto_parse_args(int argc, char **argv)
> options.guest_polling = 1;
> break;
>
> + case OPT_ASYM_NUM:
> + options.asymmetric_crypto = true;
> + break;
> +
> default:
> vhost_crypto_usage(prgname);
> return -EINVAL;
> @@ -362,8 +372,8 @@ destroy_device(int vid)
> }
>
> static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
> - .new_device = new_device,
> - .destroy_device = destroy_device,
> + .new_connection = new_device,
> + .destroy_connection = destroy_device,
It may be worth explaining in the commit message why you are moving from
new_device to new_connection.
> };
>
> static int
> @@ -376,6 +386,7 @@ vhost_crypto_worker(void *arg)
> int callfds[VIRTIO_CRYPTO_MAX_NUM_BURST_VQS];
> uint32_t lcore_id = rte_lcore_id();
> uint32_t burst_size = MAX_PKT_BURST;
> + enum rte_crypto_op_type cop_type;
> uint32_t i, j, k;
> uint32_t to_fetch, fetched;
>
> @@ -383,9 +394,13 @@ vhost_crypto_worker(void *arg)
>
> RTE_LOG(INFO, USER1, "Processing on Core %u started\n", lcore_id);
>
> + cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> + if (options.asymmetric_crypto)
> + cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
> +
> for (i = 0; i < NB_VIRTIO_QUEUES; i++) {
> if (rte_crypto_op_bulk_alloc(info->cop_pool,
> - RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops[i],
> + cop_type, ops[i],
> burst_size) < burst_size) {
> RTE_LOG(ERR, USER1, "Failed to alloc cops\n");
> ret = -1;
> @@ -411,12 +426,11 @@ vhost_crypto_worker(void *arg)
> fetched);
> if (unlikely(rte_crypto_op_bulk_alloc(
> info->cop_pool,
> - RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> + cop_type,
> ops[j], fetched) < fetched)) {
> RTE_LOG(ERR, USER1, "Failed realloc\n");
> return -1;
> }
> -
> fetched = rte_cryptodev_dequeue_burst(
> info->cid, info->qid,
> ops_deq[j], RTE_MIN(burst_size,
> @@ -477,6 +491,7 @@ main(int argc, char *argv[])
> struct rte_cryptodev_qp_conf qp_conf;
> struct rte_cryptodev_config config;
> struct rte_cryptodev_info dev_info;
> + enum rte_crypto_op_type cop_type;
> char name[128];
> uint32_t i, j, lcore;
> int ret;
> @@ -539,12 +554,21 @@ main(int argc, char *argv[])
> goto error_exit;
> }
>
> - snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
> - info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
> - SESSION_MAP_ENTRIES,
> - rte_cryptodev_sym_get_private_session_size(
> - info->cid), 0, 0,
> - rte_lcore_to_socket_id(lo->lcore_id));
> + if (!options.asymmetric_crypto) {
> + snprintf(name, 127, "SYM_SESS_POOL_%u", lo->lcore_id);
> + info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
> + SESSION_MAP_ENTRIES,
> + rte_cryptodev_sym_get_private_session_size(
> + info->cid), 0, 0,
> + rte_lcore_to_socket_id(lo->lcore_id));
> + cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> + } else {
> + snprintf(name, 127, "ASYM_SESS_POOL_%u", lo->lcore_id);
> + info->sess_pool = rte_cryptodev_asym_session_pool_create(name,
> + SESSION_MAP_ENTRIES, 0, 64,
> + rte_lcore_to_socket_id(lo->lcore_id));
> + cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
> + }
>
> if (!info->sess_pool) {
> RTE_LOG(ERR, USER1, "Failed to create mempool");
> @@ -553,7 +577,7 @@ main(int argc, char *argv[])
>
> snprintf(name, 127, "COPPOOL_%u", lo->lcore_id);
> info->cop_pool = rte_crypto_op_pool_create(name,
> - RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_MEMPOOL_OBJS,
> + cop_type, NB_MEMPOOL_OBJS,
> NB_CACHE_OBJS, VHOST_CRYPTO_MAX_IV_LEN,
> rte_lcore_to_socket_id(lo->lcore_id));
>
> @@ -567,6 +591,8 @@ main(int argc, char *argv[])
>
> qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
> qp_conf.mp_session = info->sess_pool;
> + if (options.asymmetric_crypto)
> + qp_conf.mp_session = NULL;
>
> for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
> ret = rte_cryptodev_queue_pair_setup(info->cid, j,
^ permalink raw reply [flat|nested] 58+ messages in thread
* RE: [EXTERNAL] Re: [v2 2/2] examples/vhost_crypto: add asymmetric support
2025-01-29 16:13 ` Maxime Coquelin
@ 2025-01-30 9:29 ` Gowrishankar Muthukrishnan
0 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-01-30 9:29 UTC (permalink / raw)
To: Maxime Coquelin, dev, Akhil Goyal, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: Jerin Jacob, Anoob Joseph
Hi Maxime,
> > static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
> > - .new_device = new_device,
> > - .destroy_device = destroy_device,
> > + .new_connection = new_device,
> > + .destroy_connection = destroy_device,
> It may be worth explaining in the commit message why you are moving from
> new_device to new_connection.
This change is required when this backend application runs in server mode.
I understand this change is outside the scope of this patch; it will be
moved to a separate patch in the new version of this series.
Thanks,
Gowrishankar
>
> > };
> >
> > static int
> > @@ -376,6 +386,7 @@ vhost_crypto_worker(void *arg)
> > int callfds[VIRTIO_CRYPTO_MAX_NUM_BURST_VQS];
> > uint32_t lcore_id = rte_lcore_id();
> > uint32_t burst_size = MAX_PKT_BURST;
> > + enum rte_crypto_op_type cop_type;
> > uint32_t i, j, k;
> > uint32_t to_fetch, fetched;
> >
> > @@ -383,9 +394,13 @@ vhost_crypto_worker(void *arg)
> >
> > RTE_LOG(INFO, USER1, "Processing on Core %u started\n", lcore_id);
> >
> > + cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> > + if (options.asymmetric_crypto)
> > + cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
> > +
> > for (i = 0; i < NB_VIRTIO_QUEUES; i++) {
> > if (rte_crypto_op_bulk_alloc(info->cop_pool,
> > - RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops[i],
> > + cop_type, ops[i],
> > burst_size) < burst_size) {
> > RTE_LOG(ERR, USER1, "Failed to alloc cops\n");
> > ret = -1;
> > @@ -411,12 +426,11 @@ vhost_crypto_worker(void *arg)
> > fetched);
> > if (unlikely(rte_crypto_op_bulk_alloc(
> > info->cop_pool,
> > -
> RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> > + cop_type,
> > ops[j], fetched) < fetched)) {
> > RTE_LOG(ERR, USER1, "Failed
> realloc\n");
> > return -1;
> > }
> > -
> > fetched = rte_cryptodev_dequeue_burst(
> > info->cid, info->qid,
> > ops_deq[j],
> RTE_MIN(burst_size, @@ -477,6 +491,7 @@ main(int
> > argc, char *argv[])
> > struct rte_cryptodev_qp_conf qp_conf;
> > struct rte_cryptodev_config config;
> > struct rte_cryptodev_info dev_info;
> > + enum rte_crypto_op_type cop_type;
> > char name[128];
> > uint32_t i, j, lcore;
> > int ret;
> > @@ -539,12 +554,21 @@ main(int argc, char *argv[])
> > goto error_exit;
> > }
> >
> > - snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
> > - info->sess_pool =
> rte_cryptodev_sym_session_pool_create(name,
> > - SESSION_MAP_ENTRIES,
> > - rte_cryptodev_sym_get_private_session_size(
> > - info->cid), 0, 0,
> > - rte_lcore_to_socket_id(lo->lcore_id));
> > + if (!options.asymmetric_crypto) {
> > + snprintf(name, 127, "SYM_SESS_POOL_%u", lo-
> >lcore_id);
> > + info->sess_pool =
> rte_cryptodev_sym_session_pool_create(name,
> > + SESSION_MAP_ENTRIES,
> > +
> rte_cryptodev_sym_get_private_session_size(
> > + info->cid), 0, 0,
> > + rte_lcore_to_socket_id(lo->lcore_id));
> > + cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> > + } else {
> > + snprintf(name, 127, "ASYM_SESS_POOL_%u", lo-
> >lcore_id);
> > + info->sess_pool =
> rte_cryptodev_asym_session_pool_create(name,
> > + SESSION_MAP_ENTRIES, 0, 64,
> > + rte_lcore_to_socket_id(lo->lcore_id));
> > + cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
> > + }
> >
> > if (!info->sess_pool) {
> > RTE_LOG(ERR, USER1, "Failed to create mempool");
> @@ -553,7 +577,7
> > @@ main(int argc, char *argv[])
> >
> > snprintf(name, 127, "COPPOOL_%u", lo->lcore_id);
> > info->cop_pool = rte_crypto_op_pool_create(name,
> > - RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> NB_MEMPOOL_OBJS,
> > + cop_type, NB_MEMPOOL_OBJS,
> > NB_CACHE_OBJS,
> VHOST_CRYPTO_MAX_IV_LEN,
> > rte_lcore_to_socket_id(lo->lcore_id));
> >
> > @@ -567,6 +591,8 @@ main(int argc, char *argv[])
> >
> > qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
> > qp_conf.mp_session = info->sess_pool;
> > + if (options.asymmetric_crypto)
> > + qp_conf.mp_session = NULL;
> >
> > for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
> > ret = rte_cryptodev_queue_pair_setup(info->cid, j,
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [v2 1/4] common/virtio: move vDPA to common directory
2025-01-07 18:44 ` [v2 1/4] common/virtio: move vDPA to common directory Gowrishankar Muthukrishnan
@ 2025-02-06 9:40 ` Maxime Coquelin
2025-02-06 14:21 ` [EXTERNAL] " Gowrishankar Muthukrishnan
0 siblings, 1 reply; 58+ messages in thread
From: Maxime Coquelin @ 2025-02-06 9:40 UTC (permalink / raw)
To: Gowrishankar Muthukrishnan, dev, Akhil Goyal, Chenbo Xia,
Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, David Marchand
Hi Gowrishankar,
On 1/7/25 7:44 PM, Gowrishankar Muthukrishnan wrote:
> Move vhost-vdpa backend implementation into common folder.
If we decide to have a common base for Virtio devices, which I think is
a good idea to avoid needless duplication, we should do a deeper
refactoring by sharing all the transport layers: PCI and Virtio-user.
I understand it is not realistic to do this for the v25.03 release, so in
the meantime I would prefer you duplicate what you need from the
vhost-vDPA implementation rather than ship a half-baked solution.
Maintainers, what do you think?
Thanks,
Maxime
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> ---
> Depends-on: patch-149672 ("vhost: include AKCIPHER algorithms in crypto_config")
> Depends-on: patch-148913 ("crypto/virtio: remove redundant crypto queue free")
> Depends-on: series-34293 ("crypto/virtio: add packed ring support")
> Depends-on: series-34291 ("crypto/virtio: add RSA support")
>
>
> drivers/common/virtio/meson.build | 13 +++++++++
> drivers/common/virtio/version.map | 8 ++++++
> .../virtio/virtio_user/vhost.h | 4 ---
> .../common/virtio/virtio_user/vhost_logs.h | 15 ++++++++++
> .../virtio/virtio_user/vhost_vdpa.c | 28 ++++++++++++++++++-
> drivers/crypto/virtio/meson.build | 2 +-
> drivers/meson.build | 1 +
> drivers/net/virtio/meson.build | 3 +-
> drivers/net/virtio/virtio_user/vhost_kernel.c | 3 +-
> drivers/net/virtio/virtio_user/vhost_user.c | 3 +-
> .../net/virtio/virtio_user/virtio_user_dev.c | 5 ++--
> .../net/virtio/virtio_user/virtio_user_dev.h | 24 +++++++++-------
> 12 files changed, 87 insertions(+), 22 deletions(-)
> create mode 100644 drivers/common/virtio/meson.build
> create mode 100644 drivers/common/virtio/version.map
> rename drivers/{net => common}/virtio/virtio_user/vhost.h (97%)
> create mode 100644 drivers/common/virtio/virtio_user/vhost_logs.h
> rename drivers/{net => common}/virtio/virtio_user/vhost_vdpa.c (97%)
>
> diff --git a/drivers/common/virtio/meson.build b/drivers/common/virtio/meson.build
> new file mode 100644
> index 0000000000..a19db9e088
> --- /dev/null
> +++ b/drivers/common/virtio/meson.build
> @@ -0,0 +1,13 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2025 Marvell
> +
> +if is_windows
> + build = false
> + reason = 'not supported on Windows'
> + subdir_done()
> +endif
> +
> +if is_linux
> + sources += files('virtio_user/vhost_vdpa.c')
> + deps += ['bus_vdev']
> +endif
> diff --git a/drivers/common/virtio/version.map b/drivers/common/virtio/version.map
> new file mode 100644
> index 0000000000..fb98a0ab2e
> --- /dev/null
> +++ b/drivers/common/virtio/version.map
> @@ -0,0 +1,8 @@
> +INTERNAL {
> + global:
> +
> + virtio_ops_vdpa;
> + vhost_logtype_driver;
> +
> + local: *;
> +};
> diff --git a/drivers/net/virtio/virtio_user/vhost.h b/drivers/common/virtio/virtio_user/vhost.h
> similarity index 97%
> rename from drivers/net/virtio/virtio_user/vhost.h
> rename to drivers/common/virtio/virtio_user/vhost.h
> index eee3a4bc47..adf6551681 100644
> --- a/drivers/net/virtio/virtio_user/vhost.h
> +++ b/drivers/common/virtio/virtio_user/vhost.h
> @@ -11,10 +11,6 @@
>
> #include <rte_errno.h>
>
> -#include "../virtio.h"
> -#include "../virtio_logs.h"
> -#include "../virtqueue.h"
> -
> struct vhost_vring_state {
> unsigned int index;
> unsigned int num;
> diff --git a/drivers/common/virtio/virtio_user/vhost_logs.h b/drivers/common/virtio/virtio_user/vhost_logs.h
> new file mode 100644
> index 0000000000..653d4d0b5e
> --- /dev/null
> +++ b/drivers/common/virtio/virtio_user/vhost_logs.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2025 Marvell
> + */
> +
> +#ifndef _VHOST_LOGS_H_
> +#define _VHOST_LOGS_H_
> +
> +#include <rte_log.h>
> +
> +extern int vhost_logtype_driver;
> +#define RTE_LOGTYPE_VHOST_DRIVER vhost_logtype_driver
> +#define PMD_DRV_LOG(level, ...) \
> + RTE_LOG_LINE_PREFIX(level, VHOST_DRIVER, "%s(): ", __func__, __VA_ARGS__)
> +
> +#endif /* _VHOST_LOGS_H_ */
> diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/common/virtio/virtio_user/vhost_vdpa.c
> similarity index 97%
> rename from drivers/net/virtio/virtio_user/vhost_vdpa.c
> rename to drivers/common/virtio/virtio_user/vhost_vdpa.c
> index bc3e2a9af5..af5c4cbf33 100644
> --- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
> +++ b/drivers/common/virtio/virtio_user/vhost_vdpa.c
> @@ -9,11 +9,12 @@
> #include <fcntl.h>
> #include <stdlib.h>
> #include <unistd.h>
> +#include <inttypes.h>
>
> #include <rte_memory.h>
>
> #include "vhost.h"
> -#include "virtio_user_dev.h"
> +#include "vhost_logs.h"
>
> struct vhost_vdpa_data {
> int vhostfd;
> @@ -100,6 +101,29 @@ vhost_vdpa_ioctl(int fd, uint64_t request, void *arg)
> return 0;
> }
>
> +struct virtio_hw {
> + struct virtqueue **vqs;
> +};
> +
> +struct virtio_user_dev {
> + union {
> + struct virtio_hw hw;
> + uint8_t dummy[256];
> + };
> +
> + void *backend_data;
> + uint16_t **notify_area;
> + char path[PATH_MAX];
> + bool hw_cvq;
> + uint16_t max_queue_pairs;
> + uint64_t device_features;
> + bool *qp_enabled;
> +};
> +
> +#define VIRTIO_NET_F_CTRL_VQ 17
> +#define VIRTIO_F_IOMMU_PLATFORM 33
> +#define VIRTIO_ID_NETWORK 0x01
> +
> static int
> vhost_vdpa_set_owner(struct virtio_user_dev *dev)
> {
> @@ -715,3 +739,5 @@ struct virtio_user_backend_ops virtio_ops_vdpa = {
> .map_notification_area = vhost_vdpa_map_notification_area,
> .unmap_notification_area = vhost_vdpa_unmap_notification_area,
> };
> +
> +RTE_LOG_REGISTER_SUFFIX(vhost_logtype_driver, driver, NOTICE);
> diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
> index d2c3b3ad07..8181c8296f 100644
> --- a/drivers/crypto/virtio/meson.build
> +++ b/drivers/crypto/virtio/meson.build
> @@ -8,7 +8,7 @@ if is_windows
> endif
>
> includes += include_directories('../../../lib/vhost')
> -deps += 'bus_pci'
> +deps += ['bus_pci', 'common_virtio']
> sources = files(
> 'virtio_cryptodev.c',
> 'virtio_cvq.c',
> diff --git a/drivers/meson.build b/drivers/meson.build
> index 495e21b54a..2f0d312479 100644
> --- a/drivers/meson.build
> +++ b/drivers/meson.build
> @@ -17,6 +17,7 @@ subdirs = [
> 'common/nitrox', # depends on bus.
> 'common/qat', # depends on bus.
> 'common/sfc_efx', # depends on bus.
> + 'common/virtio', # depends on bus.
> 'mempool', # depends on common and bus.
> 'dma', # depends on common and bus.
> 'net', # depends on common, bus, mempool
> diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
> index 02742da5c2..bbd73741f0 100644
> --- a/drivers/net/virtio/meson.build
> +++ b/drivers/net/virtio/meson.build
> @@ -54,7 +54,6 @@ if is_linux
> 'virtio_user/vhost_kernel.c',
> 'virtio_user/vhost_kernel_tap.c',
> 'virtio_user/vhost_user.c',
> - 'virtio_user/vhost_vdpa.c',
> 'virtio_user/virtio_user_dev.c')
> - deps += ['bus_vdev']
> + deps += ['bus_vdev', 'common_virtio']
> endif
> diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c b/drivers/net/virtio/virtio_user/vhost_kernel.c
> index e42bb35935..3a95ce34d6 100644
> --- a/drivers/net/virtio/virtio_user/vhost_kernel.c
> +++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
> @@ -11,9 +11,10 @@
>
> #include <rte_memory.h>
>
> -#include "vhost.h"
> +#include "virtio_user/vhost.h"
> #include "virtio_user_dev.h"
> #include "vhost_kernel_tap.h"
> +#include "../virtqueue.h"
>
> struct vhost_kernel_data {
> int *vhostfds;
> diff --git a/drivers/net/virtio/virtio_user/vhost_user.c b/drivers/net/virtio/virtio_user/vhost_user.c
> index c10252506b..2a158aff7e 100644
> --- a/drivers/net/virtio/virtio_user/vhost_user.c
> +++ b/drivers/net/virtio/virtio_user/vhost_user.c
> @@ -16,7 +16,8 @@
> #include <rte_string_fns.h>
> #include <rte_fbarray.h>
>
> -#include "vhost.h"
> +#include "virtio_user/vhost_logs.h"
> +#include "virtio_user/vhost.h"
> #include "virtio_user_dev.h"
>
> struct vhost_user_data {
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> index 2997d2bd26..7105c54b43 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> @@ -20,10 +20,11 @@
> #include <rte_malloc.h>
> #include <rte_io.h>
>
> -#include "vhost.h"
> -#include "virtio.h"
> +#include "virtio_user/vhost.h"
> #include "virtio_user_dev.h"
> +#include "../virtqueue.h"
> #include "../virtio_ethdev.h"
> +#include "../virtio_logs.h"
>
> #define VIRTIO_USER_MEM_EVENT_CLB_NAME "virtio_user_mem_event_clb"
>
> diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> index 66400b3b62..70604d6956 100644
> --- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
> +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> @@ -25,26 +25,36 @@ struct virtio_user_queue {
> };
>
> struct virtio_user_dev {
> - struct virtio_hw hw;
> + union {
> + struct virtio_hw hw;
> + uint8_t dummy[256];
> + };
> +
> + void *backend_data;
> + uint16_t **notify_area;
> + char path[PATH_MAX];
> + bool hw_cvq;
> + uint16_t max_queue_pairs;
> + uint64_t device_features; /* supported features by device */
> + bool *qp_enabled;
> +
> enum virtio_user_backend_type backend_type;
> bool is_server; /* server or client mode */
>
> int *callfds;
> int *kickfds;
> int mac_specified;
> - uint16_t max_queue_pairs;
> +
> uint16_t queue_pairs;
> uint32_t queue_size;
> uint64_t features; /* the negotiated features with driver,
> * and will be sync with device
> */
> - uint64_t device_features; /* supported features by device */
> uint64_t frontend_features; /* enabled frontend features */
> uint64_t unsupported_features; /* unsupported features mask */
> uint8_t status;
> uint16_t net_status;
> uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
> - char path[PATH_MAX];
> char *ifname;
>
> union {
> @@ -54,18 +64,12 @@ struct virtio_user_dev {
> } vrings;
>
> struct virtio_user_queue *packed_queues;
> - bool *qp_enabled;
>
> struct virtio_user_backend_ops *ops;
> pthread_mutex_t mutex;
> bool started;
>
> - bool hw_cvq;
> struct virtqueue *scvq;
> -
> - void *backend_data;
> -
> - uint16_t **notify_area;
> };
>
> int virtio_user_dev_set_features(struct virtio_user_dev *dev);
^ permalink raw reply [flat|nested] 58+ messages in thread
* Re: [v2 3/4] crypto/virtio: add vhost backend to virtio_user
2025-01-07 18:44 ` [v2 3/4] crypto/virtio: add vhost backend to virtio_user Gowrishankar Muthukrishnan
@ 2025-02-06 13:14 ` Maxime Coquelin
0 siblings, 0 replies; 58+ messages in thread
From: Maxime Coquelin @ 2025-02-06 13:14 UTC (permalink / raw)
To: Gowrishankar Muthukrishnan, dev, Akhil Goyal, Chenbo Xia,
Fan Zhang, Jay Zhou
Cc: jerinj, anoobj, David Marchand
On 1/7/25 7:44 PM, Gowrishankar Muthukrishnan wrote:
> Add vhost backend to virtio_user crypto.
>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> ---
> drivers/crypto/virtio/meson.build | 7 +
> drivers/crypto/virtio/virtio_cryptodev.c | 57 +-
> drivers/crypto/virtio/virtio_cryptodev.h | 3 +
> drivers/crypto/virtio/virtio_pci.h | 7 +
> drivers/crypto/virtio/virtio_ring.h | 6 -
> .../crypto/virtio/virtio_user/vhost_vdpa.c | 312 +++++++
> .../virtio/virtio_user/virtio_user_dev.c | 776 ++++++++++++++++++
> .../virtio/virtio_user/virtio_user_dev.h | 88 ++
> drivers/crypto/virtio/virtio_user_cryptodev.c | 587 +++++++++++++
> 9 files changed, 1815 insertions(+), 28 deletions(-)
> create mode 100644 drivers/crypto/virtio/virtio_user/vhost_vdpa.c
> create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.c
> create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.h
> create mode 100644 drivers/crypto/virtio/virtio_user_cryptodev.c
>
I don't understand the purpose of the common base, as most of the code
ends up being duplicated anyway.
Thanks,
Maxime
> diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
> index 8181c8296f..e5bce54cca 100644
> --- a/drivers/crypto/virtio/meson.build
> +++ b/drivers/crypto/virtio/meson.build
> @@ -16,3 +16,10 @@ sources = files(
> 'virtio_rxtx.c',
> 'virtqueue.c',
> )
> +
> +if is_linux
> + sources += files('virtio_user_cryptodev.c',
> + 'virtio_user/vhost_vdpa.c',
> + 'virtio_user/virtio_user_dev.c')
> + deps += ['bus_vdev', 'common_virtio']
> +endif
> diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
> index d3db4f898e..c9f20cb338 100644
> --- a/drivers/crypto/virtio/virtio_cryptodev.c
> +++ b/drivers/crypto/virtio/virtio_cryptodev.c
> @@ -544,24 +544,12 @@ virtio_crypto_init_device(struct rte_cryptodev *cryptodev,
> return 0;
> }
>
> -/*
> - * This function is based on probe() function
> - * It returns 0 on success.
> - */
> -static int
> -crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
> - struct rte_cryptodev_pmd_init_params *init_params)
> +int
> +crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
> + struct rte_pci_device *pci_dev)
> {
> - struct rte_cryptodev *cryptodev;
> struct virtio_crypto_hw *hw;
>
> - PMD_INIT_FUNC_TRACE();
> -
> - cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
> - init_params);
> - if (cryptodev == NULL)
> - return -ENODEV;
> -
> cryptodev->driver_id = cryptodev_virtio_driver_id;
> cryptodev->dev_ops = &virtio_crypto_dev_ops;
>
> @@ -578,16 +566,41 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
> hw->dev_id = cryptodev->data->dev_id;
> hw->virtio_dev_capabilities = virtio_capabilities;
>
> - VIRTIO_CRYPTO_INIT_LOG_DBG("dev %d vendorID=0x%x deviceID=0x%x",
> - cryptodev->data->dev_id, pci_dev->id.vendor_id,
> - pci_dev->id.device_id);
> + if (pci_dev) {
> + /* pci device init */
> + VIRTIO_CRYPTO_INIT_LOG_DBG("dev %d vendorID=0x%x deviceID=0x%x",
> + cryptodev->data->dev_id, pci_dev->id.vendor_id,
> + pci_dev->id.device_id);
>
> - /* pci device init */
> - if (vtpci_cryptodev_init(pci_dev, hw))
> + if (vtpci_cryptodev_init(pci_dev, hw))
> + return -1;
> + }
> +
> + if (virtio_crypto_init_device(cryptodev, features) < 0)
> return -1;
>
> - if (virtio_crypto_init_device(cryptodev,
> - VIRTIO_CRYPTO_PMD_GUEST_FEATURES) < 0)
> + return 0;
> +}
> +
> +/*
> + * This function is based on the probe() function.
> + * It returns 0 on success.
> + */
> +static int
> +crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
> + struct rte_cryptodev_pmd_init_params *init_params)
> +{
> + struct rte_cryptodev *cryptodev;
> +
> + PMD_INIT_FUNC_TRACE();
> +
> + cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
> + init_params);
> + if (cryptodev == NULL)
> + return -ENODEV;
> +
> + if (crypto_virtio_dev_init(cryptodev, VIRTIO_CRYPTO_PMD_GUEST_FEATURES,
> + pci_dev) < 0)
> return -1;
>
> rte_cryptodev_pmd_probing_finish(cryptodev);
> diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
> index b4bdd9800b..95a1e09dca 100644
> --- a/drivers/crypto/virtio/virtio_cryptodev.h
> +++ b/drivers/crypto/virtio/virtio_cryptodev.h
> @@ -74,4 +74,7 @@ uint16_t virtio_crypto_pkt_rx_burst(void *tx_queue,
> struct rte_crypto_op **tx_pkts,
> uint16_t nb_pkts);
>
> +int crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
> + struct rte_pci_device *pci_dev);
> +
> #endif /* _VIRTIO_CRYPTODEV_H_ */
> diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
> index 79945cb88e..c75777e005 100644
> --- a/drivers/crypto/virtio/virtio_pci.h
> +++ b/drivers/crypto/virtio/virtio_pci.h
> @@ -20,6 +20,9 @@ struct virtqueue;
> #define VIRTIO_CRYPTO_PCI_VENDORID 0x1AF4
> #define VIRTIO_CRYPTO_PCI_DEVICEID 0x1054
>
> +/* VirtIO device IDs. */
> +#define VIRTIO_ID_CRYPTO 20
> +
> /* VirtIO ABI version, this must match exactly. */
> #define VIRTIO_PCI_ABI_VERSION 0
>
> @@ -56,8 +59,12 @@ struct virtqueue;
> #define VIRTIO_CONFIG_STATUS_DRIVER 0x02
> #define VIRTIO_CONFIG_STATUS_DRIVER_OK 0x04
> #define VIRTIO_CONFIG_STATUS_FEATURES_OK 0x08
> +#define VIRTIO_CONFIG_STATUS_DEV_NEED_RESET 0x40
> #define VIRTIO_CONFIG_STATUS_FAILED 0x80
>
> +/* The alignment to use between consumer and producer parts of vring. */
> +#define VIRTIO_VRING_ALIGN 4096
> +
> /*
> * Each virtqueue indirect descriptor list must be physically contiguous.
> * To allow us to malloc(9) each list individually, limit the number
> diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
> index c74d1172b7..4b418f6e60 100644
> --- a/drivers/crypto/virtio/virtio_ring.h
> +++ b/drivers/crypto/virtio/virtio_ring.h
> @@ -181,12 +181,6 @@ vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
> sizeof(struct vring_packed_desc_event)), align);
> }
>
> -static inline void
> -vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
> -{
> - vring_init_split(vr, p, 0, align, num);
> -}
> -
> /*
> * The following is used with VIRTIO_RING_F_EVENT_IDX.
> * Assuming a given event_idx value from the other size, if we have
> diff --git a/drivers/crypto/virtio/virtio_user/vhost_vdpa.c b/drivers/crypto/virtio/virtio_user/vhost_vdpa.c
> new file mode 100644
> index 0000000000..41696c4095
> --- /dev/null
> +++ b/drivers/crypto/virtio/virtio_user/vhost_vdpa.c
> @@ -0,0 +1,312 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2025 Marvell
> + */
> +
> +#include <sys/ioctl.h>
> +#include <sys/types.h>
> +#include <sys/stat.h>
> +#include <sys/mman.h>
> +#include <fcntl.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +
> +#include <rte_memory.h>
> +
> +#include "virtio_user/vhost.h"
> +#include "virtio_user/vhost_logs.h"
> +
> +#include "virtio_user_dev.h"
> +#include "../virtio_pci.h"
> +
> +struct vhost_vdpa_data {
> + int vhostfd;
> + uint64_t protocol_features;
> +};
> +
> +#define VHOST_VDPA_SUPPORTED_BACKEND_FEATURES \
> + (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 | \
> + 1ULL << VHOST_BACKEND_F_IOTLB_BATCH)
> +
> +/* vhost kernel & vdpa ioctls */
> +#define VHOST_VIRTIO 0xAF
> +#define VHOST_GET_FEATURES _IOR(VHOST_VIRTIO, 0x00, __u64)
> +#define VHOST_SET_FEATURES _IOW(VHOST_VIRTIO, 0x00, __u64)
> +#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)
> +#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)
> +#define VHOST_SET_LOG_BASE _IOW(VHOST_VIRTIO, 0x04, __u64)
> +#define VHOST_SET_LOG_FD _IOW(VHOST_VIRTIO, 0x07, int)
> +#define VHOST_SET_VRING_NUM _IOW(VHOST_VIRTIO, 0x10, struct vhost_vring_state)
> +#define VHOST_SET_VRING_ADDR _IOW(VHOST_VIRTIO, 0x11, struct vhost_vring_addr)
> +#define VHOST_SET_VRING_BASE _IOW(VHOST_VIRTIO, 0x12, struct vhost_vring_state)
> +#define VHOST_GET_VRING_BASE _IOWR(VHOST_VIRTIO, 0x12, struct vhost_vring_state)
> +#define VHOST_SET_VRING_KICK _IOW(VHOST_VIRTIO, 0x20, struct vhost_vring_file)
> +#define VHOST_SET_VRING_CALL _IOW(VHOST_VIRTIO, 0x21, struct vhost_vring_file)
> +#define VHOST_SET_VRING_ERR _IOW(VHOST_VIRTIO, 0x22, struct vhost_vring_file)
> +#define VHOST_NET_SET_BACKEND _IOW(VHOST_VIRTIO, 0x30, struct vhost_vring_file)
> +#define VHOST_VDPA_GET_DEVICE_ID _IOR(VHOST_VIRTIO, 0x70, __u32)
> +#define VHOST_VDPA_GET_STATUS _IOR(VHOST_VIRTIO, 0x71, __u8)
> +#define VHOST_VDPA_SET_STATUS _IOW(VHOST_VIRTIO, 0x72, __u8)
> +#define VHOST_VDPA_GET_CONFIG _IOR(VHOST_VIRTIO, 0x73, struct vhost_vdpa_config)
> +#define VHOST_VDPA_SET_CONFIG _IOW(VHOST_VIRTIO, 0x74, struct vhost_vdpa_config)
> +#define VHOST_VDPA_SET_VRING_ENABLE _IOW(VHOST_VIRTIO, 0x75, struct vhost_vring_state)
> +#define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
> +#define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
> +
> +/* no alignment requirement */
> +struct vhost_iotlb_msg {
> + uint64_t iova;
> + uint64_t size;
> + uint64_t uaddr;
> +#define VHOST_ACCESS_RO 0x1
> +#define VHOST_ACCESS_WO 0x2
> +#define VHOST_ACCESS_RW 0x3
> + uint8_t perm;
> +#define VHOST_IOTLB_MISS 1
> +#define VHOST_IOTLB_UPDATE 2
> +#define VHOST_IOTLB_INVALIDATE 3
> +#define VHOST_IOTLB_ACCESS_FAIL 4
> +#define VHOST_IOTLB_BATCH_BEGIN 5
> +#define VHOST_IOTLB_BATCH_END 6
> + uint8_t type;
> +};
> +
> +#define VHOST_IOTLB_MSG_V2 0x2
> +
> +struct vhost_vdpa_config {
> + uint32_t off;
> + uint32_t len;
> + uint8_t buf[];
> +};
> +
> +struct vhost_msg {
> + uint32_t type;
> + uint32_t reserved;
> + union {
> + struct vhost_iotlb_msg iotlb;
> + uint8_t padding[64];
> + };
> +};
> +
> +
> +static int
> +vhost_vdpa_ioctl(int fd, uint64_t request, void *arg)
> +{
> + int ret;
> +
> + ret = ioctl(fd, request, arg);
> + if (ret) {
> + PMD_DRV_LOG(ERR, "Vhost-vDPA ioctl %"PRIu64" failed (%s)",
> + request, strerror(errno));
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +vhost_vdpa_get_protocol_features(struct virtio_user_dev *dev, uint64_t *features)
> +{
> + struct vhost_vdpa_data *data = dev->backend_data;
> +
> + return vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_BACKEND_FEATURES, features);
> +}
> +
> +static int
> +vhost_vdpa_set_protocol_features(struct virtio_user_dev *dev, uint64_t features)
> +{
> + struct vhost_vdpa_data *data = dev->backend_data;
> +
> + return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_BACKEND_FEATURES, &features);
> +}
> +
> +static int
> +vhost_vdpa_get_features(struct virtio_user_dev *dev, uint64_t *features)
> +{
> + struct vhost_vdpa_data *data = dev->backend_data;
> + int ret;
> +
> + ret = vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_FEATURES, features);
> + if (ret) {
> + PMD_DRV_LOG(ERR, "Failed to get features");
> + return -1;
> + }
> +
> + /* Negotiated vDPA backend features */
> + ret = vhost_vdpa_get_protocol_features(dev, &data->protocol_features);
> + if (ret < 0) {
> + PMD_DRV_LOG(ERR, "Failed to get backend features");
> + return -1;
> + }
> +
> + data->protocol_features &= VHOST_VDPA_SUPPORTED_BACKEND_FEATURES;
> +
> + ret = vhost_vdpa_set_protocol_features(dev, data->protocol_features);
> + if (ret < 0) {
> + PMD_DRV_LOG(ERR, "Failed to set backend features");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +vhost_vdpa_set_vring_enable(struct virtio_user_dev *dev, struct vhost_vring_state *state)
> +{
> + struct vhost_vdpa_data *data = dev->backend_data;
> +
> + return vhost_vdpa_ioctl(data->vhostfd, VHOST_VDPA_SET_VRING_ENABLE, state);
> +}
> +
> +/**
> + * Set up the environment to talk with a vhost-vdpa backend.
> + *
> + * @return
> + *   - (-1) if setup fails;
> + *   - (>=0) on success.
> + */
> +static int
> +vhost_vdpa_setup(struct virtio_user_dev *dev)
> +{
> + struct vhost_vdpa_data *data;
> + uint32_t did = (uint32_t)-1;
> +
> + data = malloc(sizeof(*data));
> + if (!data) {
> + PMD_DRV_LOG(ERR, "(%s) Faidle to allocate backend data", dev->path);
> + return -1;
> + }
> +
> + data->vhostfd = open(dev->path, O_RDWR);
> + if (data->vhostfd < 0) {
> + PMD_DRV_LOG(ERR, "Failed to open %s: %s",
> + dev->path, strerror(errno));
> + free(data);
> + return -1;
> + }
> +
> + if (ioctl(data->vhostfd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 ||
> + did != VIRTIO_ID_CRYPTO) {
> + PMD_DRV_LOG(ERR, "Invalid vdpa device ID: %u", did);
> + close(data->vhostfd);
> + free(data);
> + return -1;
> + }
> +
> + dev->backend_data = data;
> +
> + return 0;
> +}
> +
> +static int
> +vhost_vdpa_cvq_enable(struct virtio_user_dev *dev, int enable)
> +{
> + struct vhost_vring_state state = {
> + .index = dev->max_queue_pairs,
> + .num = enable,
> + };
> +
> + return vhost_vdpa_set_vring_enable(dev, &state);
> +}
> +
> +static int
> +vhost_vdpa_enable_queue_pair(struct virtio_user_dev *dev,
> + uint16_t pair_idx,
> + int enable)
> +{
> + struct vhost_vring_state state = {
> + .index = pair_idx,
> + .num = enable,
> + };
> +
> + if (dev->qp_enabled[pair_idx] == enable)
> + return 0;
> +
> + if (vhost_vdpa_set_vring_enable(dev, &state))
> + return -1;
> +
> + dev->qp_enabled[pair_idx] = enable;
> + return 0;
> +}
> +
> +static int
> +vhost_vdpa_update_link_state(struct virtio_user_dev *dev)
> +{
> + /* TODO: workaround until there is a cleaner approach to find the crypto device status */
> + dev->crypto_status = VIRTIO_CRYPTO_S_HW_READY;
> + return 0;
> +}
> +
> +static int
> +vhost_vdpa_get_nr_vrings(struct virtio_user_dev *dev)
> +{
> + int nr_vrings = dev->max_queue_pairs;
> +
> + return nr_vrings;
> +}
> +
> +static int
> +vhost_vdpa_unmap_notification_area(struct virtio_user_dev *dev)
> +{
> + int i, nr_vrings;
> +
> + nr_vrings = vhost_vdpa_get_nr_vrings(dev);
> +
> + /* The CQ page was mapped as an extra vring; unmap it too */
> + nr_vrings++;
> +
> + for (i = 0; i < nr_vrings; i++) {
> + if (dev->notify_area[i])
> + munmap(dev->notify_area[i], getpagesize());
> + }
> + free(dev->notify_area);
> + dev->notify_area = NULL;
> +
> + return 0;
> +}
> +
> +static int
> +vhost_vdpa_map_notification_area(struct virtio_user_dev *dev)
> +{
> + struct vhost_vdpa_data *data = dev->backend_data;
> + int nr_vrings, i, page_size = getpagesize();
> + uint16_t **notify_area;
> +
> + nr_vrings = vhost_vdpa_get_nr_vrings(dev);
> +
> + /* CQ is another vring */
> + nr_vrings++;
> +
> + notify_area = malloc(nr_vrings * sizeof(*notify_area));
> + if (!notify_area) {
> + PMD_DRV_LOG(ERR, "(%s) Failed to allocate notify area array", dev->path);
> + return -1;
> + }
> +
> + for (i = 0; i < nr_vrings; i++) {
> + notify_area[i] = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED | MAP_FILE,
> + data->vhostfd, i * page_size);
> + if (notify_area[i] == MAP_FAILED) {
> + PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d",
> + dev->path, i);
> + i--;
> + goto map_err;
> + }
> + }
> + dev->notify_area = notify_area;
> +
> + return 0;
> +
> +map_err:
> + for (; i >= 0; i--)
> + munmap(notify_area[i], page_size);
> + free(notify_area);
> +
> + return -1;
> +}
> +
> +struct virtio_user_backend_ops virtio_crypto_ops_vdpa = {
> + .setup = vhost_vdpa_setup,
> + .get_features = vhost_vdpa_get_features,
> + .cvq_enable = vhost_vdpa_cvq_enable,
> + .enable_qp = vhost_vdpa_enable_queue_pair,
> + .update_link_state = vhost_vdpa_update_link_state,
> + .map_notification_area = vhost_vdpa_map_notification_area,
> + .unmap_notification_area = vhost_vdpa_unmap_notification_area,
> +};
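
For context, the handshake these callbacks implement reduces to a small
ioctl sequence on the vhost-vdpa character device. A minimal sketch, using
the ioctl macros and VIRTIO_ID_CRYPTO defined in this patch; the device
path is an assumption and error handling is elided:

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Probe a vhost-vdpa crypto device (illustrative only). */
    static int probe_vdpa_crypto(const char *path)
    {
        uint64_t features, backend_features;
        uint32_t did = (uint32_t)-1;
        int fd = open(path, O_RDWR); /* e.g. "/dev/vhost-vdpa-0" (assumed) */

        if (fd < 0)
            return -1;
        /* The backend must expose the virtio-crypto device ID (20). */
        if (ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 || did != VIRTIO_ID_CRYPTO) {
            close(fd);
            return -1;
        }
        ioctl(fd, VHOST_SET_OWNER, NULL);          /* claim the device */
        ioctl(fd, VHOST_GET_FEATURES, &features);  /* virtio features */
        ioctl(fd, VHOST_GET_BACKEND_FEATURES, &backend_features);
        return fd;
    }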
> diff --git a/drivers/crypto/virtio/virtio_user/virtio_user_dev.c b/drivers/crypto/virtio/virtio_user/virtio_user_dev.c
> new file mode 100644
> index 0000000000..ac53ca78d4
> --- /dev/null
> +++ b/drivers/crypto/virtio/virtio_user/virtio_user_dev.c
> @@ -0,0 +1,776 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2025 Marvell.
> + */
> +
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <fcntl.h>
> +#include <string.h>
> +#include <errno.h>
> +#include <sys/mman.h>
> +#include <unistd.h>
> +#include <sys/eventfd.h>
> +#include <sys/types.h>
> +#include <sys/stat.h>
> +#include <pthread.h>
> +
> +#include <rte_alarm.h>
> +#include <rte_string_fns.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_malloc.h>
> +#include <rte_io.h>
> +
> +#include "virtio_user/vhost.h"
> +#include "virtio_user/vhost_logs.h"
> +#include "virtio_logs.h"
> +
> +#include "cryptodev_pmd.h"
> +#include "virtio_crypto.h"
> +#include "virtio_cvq.h"
> +#include "virtio_user_dev.h"
> +#include "virtqueue.h"
> +
> +#define VIRTIO_USER_MEM_EVENT_CLB_NAME "virtio_user_mem_event_clb"
> +
> +const char * const crypto_virtio_user_backend_strings[] = {
> + [VIRTIO_USER_BACKEND_UNKNOWN] = "VIRTIO_USER_BACKEND_UNKNOWN",
> + [VIRTIO_USER_BACKEND_VHOST_VDPA] = "VHOST_VDPA",
> +};
> +
> +static int
> +virtio_user_uninit_notify_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
> +{
> + if (dev->kickfds[queue_sel] >= 0) {
> + close(dev->kickfds[queue_sel]);
> + dev->kickfds[queue_sel] = -1;
> + }
> +
> + if (dev->callfds[queue_sel] >= 0) {
> + close(dev->callfds[queue_sel]);
> + dev->callfds[queue_sel] = -1;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +virtio_user_init_notify_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
> +{
> + /* We could use an invalid flag here, but some backends use kickfd
> + * and callfd as criteria to judge whether the device is alive, so
> + * we use real eventfds.
> + */
> + dev->callfds[queue_sel] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
> + if (dev->callfds[queue_sel] < 0) {
> + PMD_DRV_LOG(ERR, "(%s) Failed to setup callfd for queue %u: %s",
> + dev->path, queue_sel, strerror(errno));
> + return -1;
> + }
> + dev->kickfds[queue_sel] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
> + if (dev->kickfds[queue_sel] < 0) {
> + PMD_DRV_LOG(ERR, "(%s) Failed to setup kickfd for queue %u: %s",
> + dev->path, queue_sel, strerror(errno));
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +virtio_user_destroy_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
> +{
> + struct vhost_vring_state state;
> + int ret;
> +
> + state.index = queue_sel;
> + ret = dev->ops->get_vring_base(dev, &state);
> + if (ret < 0) {
> + PMD_DRV_LOG(ERR, "(%s) Failed to destroy queue %u", dev->path, queue_sel);
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +virtio_user_create_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
> +{
> + /* Of all per-virtqueue messages, make sure VHOST_SET_VRING_CALL comes
> + * first, because vhost depends on this message to allocate the
> + * virtqueue pair.
> + */
> + struct vhost_vring_file file;
> + int ret;
> +
> + file.index = queue_sel;
> + file.fd = dev->callfds[queue_sel];
> + ret = dev->ops->set_vring_call(dev, &file);
> + if (ret < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to create queue %u", dev->path, queue_sel);
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
> +{
> + int ret;
> + struct vhost_vring_file file;
> + struct vhost_vring_state state;
> + struct vring *vring = &dev->vrings.split[queue_sel];
> + struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
> + uint64_t desc_addr, avail_addr, used_addr;
> + struct vhost_vring_addr addr = {
> + .index = queue_sel,
> + .log_guest_addr = 0,
> + .flags = 0, /* disable log */
> + };
> +
> + if (queue_sel == dev->max_queue_pairs) {
> + if (!dev->scvq) {
> + PMD_INIT_LOG(ERR, "(%s) Shadow control queue expected but missing",
> + dev->path);
> + goto err;
> + }
> +
> + /* Use shadow control queue information */
> + vring = &dev->scvq->vq_split.ring;
> + pq_vring = &dev->scvq->vq_packed.ring;
> + }
> +
> + if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
> + desc_addr = pq_vring->desc_iova;
> + avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc);
> + used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event),
> + VIRTIO_VRING_ALIGN);
> +
> + addr.desc_user_addr = desc_addr;
> + addr.avail_user_addr = avail_addr;
> + addr.used_user_addr = used_addr;
> + } else {
> + desc_addr = vring->desc_iova;
> + avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
> + used_addr = RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]),
> + VIRTIO_VRING_ALIGN);
> +
> + addr.desc_user_addr = desc_addr;
> + addr.avail_user_addr = avail_addr;
> + addr.used_user_addr = used_addr;
> + }
> +
> + state.index = queue_sel;
> + state.num = vring->num;
> + ret = dev->ops->set_vring_num(dev, &state);
> + if (ret < 0)
> + goto err;
> +
> + state.index = queue_sel;
> + state.num = 0; /* no reservation */
> + if (dev->features & (1ULL << VIRTIO_F_RING_PACKED))
> + state.num |= (1 << 15);
> + ret = dev->ops->set_vring_base(dev, &state);
> + if (ret < 0)
> + goto err;
> +
> + ret = dev->ops->set_vring_addr(dev, &addr);
> + if (ret < 0)
> + goto err;
> +
> + /* Of all per-virtqueue messages, make sure VHOST_USER_SET_VRING_KICK
> + * comes last, because vhost depends on this message to judge whether
> + * virtio is ready.
> + */
> + file.index = queue_sel;
> + file.fd = dev->kickfds[queue_sel];
> + ret = dev->ops->set_vring_kick(dev, &file);
> + if (ret < 0)
> + goto err;
> +
> + return 0;
> +err:
> + PMD_INIT_LOG(ERR, "(%s) Failed to kick queue %u", dev->path, queue_sel);
> +
> + return -1;
> +}
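
To make the vring address computation above concrete, a worked example
(not part of the patch) for a split ring of 256 descriptors whose
descriptor table sits at IOVA 0; sizeof(struct vring_desc) is 16 bytes and
the avail ring header is 4 bytes plus 2 bytes per entry:

    uint64_t desc_addr  = 0;
    uint64_t avail_addr = desc_addr + 256 * sizeof(struct vring_desc); /* 4096 */
    /* end of avail->ring[256] = 4096 + 4 + 256 * 2 = 4612 */
    uint64_t used_addr  = RTE_ALIGN_CEIL(4612, VIRTIO_VRING_ALIGN);    /* 8192 */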
> +
> +static int
> +virtio_user_foreach_queue(struct virtio_user_dev *dev,
> + int (*fn)(struct virtio_user_dev *, uint32_t))
> +{
> + uint32_t i, nr_vq;
> +
> + nr_vq = dev->max_queue_pairs;
> +
> + for (i = 0; i < nr_vq; i++)
> + if (fn(dev, i) < 0)
> + return -1;
> +
> + return 0;
> +}
> +
> +int
> +crypto_virtio_user_dev_set_features(struct virtio_user_dev *dev)
> +{
> + uint64_t features;
> + int ret = -1;
> +
> + pthread_mutex_lock(&dev->mutex);
> +
> + /* Step 0: tell vhost to create queues */
> + if (virtio_user_foreach_queue(dev, virtio_user_create_queue) < 0)
> + goto error;
> +
> + features = dev->features;
> +
> + ret = dev->ops->set_features(dev, features);
> + if (ret < 0)
> + goto error;
> + PMD_DRV_LOG(INFO, "(%s) set features: 0x%" PRIx64, dev->path, features);
> +error:
> + pthread_mutex_unlock(&dev->mutex);
> +
> + return ret;
> +}
> +
> +int
> +crypto_virtio_user_start_device(struct virtio_user_dev *dev)
> +{
> + int ret;
> +
> + /*
> + * XXX workaround!
> + *
> + * We need to make sure that the locks will be
> + * taken in the correct order to avoid deadlocks.
> + *
> + * Before releasing this lock, this thread should
> + * not trigger any memory hotplug events.
> + *
> + * This is a temporary workaround, and should be
> + * replaced when we get proper supports from the
> + * memory subsystem in the future.
> + */
> + rte_mcfg_mem_read_lock();
> + pthread_mutex_lock(&dev->mutex);
> +
> + /* Step 2: share memory regions */
> + ret = dev->ops->set_memory_table(dev);
> + if (ret < 0)
> + goto error;
> +
> + /* Step 3: kick queues */
> + ret = virtio_user_foreach_queue(dev, virtio_user_kick_queue);
> + if (ret < 0)
> + goto error;
> +
> + ret = virtio_user_kick_queue(dev, dev->max_queue_pairs);
> + if (ret < 0)
> + goto error;
> +
> + /* Step 4: enable queues */
> + for (int i = 0; i < dev->max_queue_pairs; i++) {
> + ret = dev->ops->enable_qp(dev, i, 1);
> + if (ret < 0)
> + goto error;
> + }
> +
> + dev->started = true;
> +
> + pthread_mutex_unlock(&dev->mutex);
> + rte_mcfg_mem_read_unlock();
> +
> + return 0;
> +error:
> + pthread_mutex_unlock(&dev->mutex);
> + rte_mcfg_mem_read_unlock();
> +
> + PMD_INIT_LOG(ERR, "(%s) Failed to start device", dev->path);
> +
> + /* TODO: free resource here or caller to check */
> + return -1;
> +}
> +
> +int crypto_virtio_user_stop_device(struct virtio_user_dev *dev)
> +{
> + uint32_t i;
> + int ret;
> +
> + pthread_mutex_lock(&dev->mutex);
> + if (!dev->started)
> + goto out;
> +
> + for (i = 0; i < dev->max_queue_pairs; ++i) {
> + ret = dev->ops->enable_qp(dev, i, 0);
> + if (ret < 0)
> + goto err;
> + }
> +
> + if (dev->scvq) {
> + ret = dev->ops->cvq_enable(dev, 0);
> + if (ret < 0)
> + goto err;
> + }
> +
> + /* Stop the backend. */
> + if (virtio_user_foreach_queue(dev, virtio_user_destroy_queue) < 0)
> + goto err;
> +
> + dev->started = false;
> +
> +out:
> + pthread_mutex_unlock(&dev->mutex);
> +
> + return 0;
> +err:
> + pthread_mutex_unlock(&dev->mutex);
> +
> + PMD_INIT_LOG(ERR, "(%s) Failed to stop device", dev->path);
> +
> + return -1;
> +}
> +
> +static int
> +virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_max_qp)
> +{
> + int ret;
> +
> + if (!dev->ops->get_config) {
> + dev->max_queue_pairs = user_max_qp;
> + return 0;
> + }
> +
> + ret = dev->ops->get_config(dev, (uint8_t *)&dev->max_queue_pairs,
> + offsetof(struct virtio_crypto_config, max_dataqueues),
> + sizeof(uint16_t));
> + if (ret) {
> + /*
> + * We need to know the maximum number of queue pairs from the
> + * device so that the control queue gets the right index.
> + */
> + dev->max_queue_pairs = 1;
> + PMD_DRV_LOG(ERR, "(%s) Failed to get max queue pairs from device", dev->path);
> +
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +virtio_user_dev_init_cipher_services(struct virtio_user_dev *dev)
> +{
> + struct virtio_crypto_config config;
> + int ret;
> +
> + dev->crypto_services = RTE_BIT32(VIRTIO_CRYPTO_SERVICE_CIPHER);
> + dev->cipher_algo = 0;
> + dev->auth_algo = 0;
> + dev->akcipher_algo = 0;
> +
> + if (!dev->ops->get_config)
> + return 0;
> +
> + ret = dev->ops->get_config(dev, (uint8_t *)&config, 0, sizeof(config));
> + if (ret) {
> + PMD_DRV_LOG(ERR, "(%s) Failed to get crypto config from device", dev->path);
> + return ret;
> + }
> +
> + dev->crypto_services = config.crypto_services;
> + dev->cipher_algo = ((uint64_t)config.cipher_algo_h << 32) |
> + config.cipher_algo_l;
> + dev->hash_algo = config.hash_algo;
> + dev->auth_algo = ((uint64_t)config.mac_algo_h << 32) |
> + config.mac_algo_l;
> + dev->aead_algo = config.aead_algo;
> + dev->akcipher_algo = config.akcipher_algo;
> + return 0;
> +}
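
The config space exposes each 64-bit algorithm mask as two 32-bit halves,
recombined as above. A hypothetical example: if the device reports
cipher_algo_l = 0x5 (bits 0 and 2) and cipher_algo_h = 0x1 (bit 32), then:

    uint64_t cipher_algo = ((uint64_t)0x1 << 32) | 0x5; /* 0x100000005 */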
> +
> +static int
> +virtio_user_dev_init_notify(struct virtio_user_dev *dev)
> +{
> +
> + if (virtio_user_foreach_queue(dev, virtio_user_init_notify_queue) < 0)
> + goto err;
> +
> + if (dev->device_features & (1ULL << VIRTIO_F_NOTIFICATION_DATA))
> + if (dev->ops->map_notification_area &&
> + dev->ops->map_notification_area(dev))
> + goto err;
> +
> + return 0;
> +err:
> + virtio_user_foreach_queue(dev, virtio_user_uninit_notify_queue);
> +
> + return -1;
> +}
> +
> +static void
> +virtio_user_dev_uninit_notify(struct virtio_user_dev *dev)
> +{
> + virtio_user_foreach_queue(dev, virtio_user_uninit_notify_queue);
> +
> + if (dev->ops->unmap_notification_area && dev->notify_area)
> + dev->ops->unmap_notification_area(dev);
> +}
> +
> +static void
> +virtio_user_mem_event_cb(enum rte_mem_event type __rte_unused,
> + const void *addr,
> + size_t len __rte_unused,
> + void *arg)
> +{
> + struct virtio_user_dev *dev = arg;
> + struct rte_memseg_list *msl;
> + uint16_t i;
> + int ret = 0;
> +
> + /* ignore externally allocated memory */
> + msl = rte_mem_virt2memseg_list(addr);
> + if (msl->external)
> + return;
> +
> + pthread_mutex_lock(&dev->mutex);
> +
> + if (dev->started == false)
> + goto exit;
> +
> + /* Step 1: pause the active queues */
> + for (i = 0; i < dev->queue_pairs; i++) {
> + ret = dev->ops->enable_qp(dev, i, 0);
> + if (ret < 0)
> + goto exit;
> + }
> +
> + /* Step 2: update memory regions */
> + ret = dev->ops->set_memory_table(dev);
> + if (ret < 0)
> + goto exit;
> +
> + /* Step 3: resume the active queues */
> + for (i = 0; i < dev->queue_pairs; i++) {
> + ret = dev->ops->enable_qp(dev, i, 1);
> + if (ret < 0)
> + goto exit;
> + }
> +
> +exit:
> + pthread_mutex_unlock(&dev->mutex);
> +
> + if (ret < 0)
> + PMD_DRV_LOG(ERR, "(%s) Failed to update memory table", dev->path);
> +}
> +
> +static int
> +virtio_user_dev_setup(struct virtio_user_dev *dev)
> +{
> + if (dev->is_server) {
> + if (dev->backend_type != VIRTIO_USER_BACKEND_VHOST_USER) {
> + PMD_DRV_LOG(ERR, "Server mode only supports vhost-user!");
> + return -1;
> + }
> + }
> +
> + switch (dev->backend_type) {
> + case VIRTIO_USER_BACKEND_VHOST_VDPA:
> + dev->ops = &virtio_ops_vdpa;
> + dev->ops->setup = virtio_crypto_ops_vdpa.setup;
> + dev->ops->get_features = virtio_crypto_ops_vdpa.get_features;
> + dev->ops->cvq_enable = virtio_crypto_ops_vdpa.cvq_enable;
> + dev->ops->enable_qp = virtio_crypto_ops_vdpa.enable_qp;
> + dev->ops->update_link_state = virtio_crypto_ops_vdpa.update_link_state;
> + dev->ops->map_notification_area = virtio_crypto_ops_vdpa.map_notification_area;
> + dev->ops->unmap_notification_area = virtio_crypto_ops_vdpa.unmap_notification_area;
> + break;
> + default:
> + PMD_DRV_LOG(ERR, "(%s) Unknown backend type", dev->path);
> + return -1;
> + }
> +
> + if (dev->ops->setup(dev) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to setup backend", dev->path);
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static int
> +virtio_user_alloc_vrings(struct virtio_user_dev *dev)
> +{
> + int i, size, nr_vrings;
> + bool packed_ring = !!(dev->device_features & (1ull << VIRTIO_F_RING_PACKED));
> +
> + nr_vrings = dev->max_queue_pairs + 1;
> +
> + dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
> + if (!dev->callfds) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
> + return -1;
> + }
> +
> + dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
> + if (!dev->kickfds) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
> + goto free_callfds;
> + }
> +
> + for (i = 0; i < nr_vrings; i++) {
> + dev->callfds[i] = -1;
> + dev->kickfds[i] = -1;
> + }
> +
> + if (packed_ring)
> + size = sizeof(*dev->vrings.packed);
> + else
> + size = sizeof(*dev->vrings.split);
> + dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
> + if (!dev->vrings.ptr) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
> + goto free_kickfds;
> + }
> +
> + if (packed_ring) {
> + dev->packed_queues = rte_zmalloc("virtio_user_dev",
> + nr_vrings * sizeof(*dev->packed_queues), 0);
> + if (!dev->packed_queues) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc packed queues metadata",
> + dev->path);
> + goto free_vrings;
> + }
> + }
> +
> + dev->qp_enabled = rte_zmalloc("virtio_user_dev",
> + nr_vrings * sizeof(*dev->qp_enabled), 0);
> + if (!dev->qp_enabled) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to alloc QP enable states", dev->path);
> + goto free_packed_queues;
> + }
> +
> + return 0;
> +
> +free_packed_queues:
> + rte_free(dev->packed_queues);
> + dev->packed_queues = NULL;
> +free_vrings:
> + rte_free(dev->vrings.ptr);
> + dev->vrings.ptr = NULL;
> +free_kickfds:
> + rte_free(dev->kickfds);
> + dev->kickfds = NULL;
> +free_callfds:
> + rte_free(dev->callfds);
> + dev->callfds = NULL;
> +
> + return -1;
> +}
> +
> +static void
> +virtio_user_free_vrings(struct virtio_user_dev *dev)
> +{
> + rte_free(dev->qp_enabled);
> + dev->qp_enabled = NULL;
> + rte_free(dev->packed_queues);
> + dev->packed_queues = NULL;
> + rte_free(dev->vrings.ptr);
> + dev->vrings.ptr = NULL;
> + rte_free(dev->kickfds);
> + dev->kickfds = NULL;
> + rte_free(dev->callfds);
> + dev->callfds = NULL;
> +}
> +
> +#define VIRTIO_USER_SUPPORTED_FEATURES \
> + (1ULL << VIRTIO_CRYPTO_SERVICE_CIPHER | \
> + 1ULL << VIRTIO_CRYPTO_SERVICE_HASH | \
> + 1ULL << VIRTIO_CRYPTO_SERVICE_AKCIPHER | \
> + 1ULL << VIRTIO_F_VERSION_1 | \
> + 1ULL << VIRTIO_F_IN_ORDER | \
> + 1ULL << VIRTIO_F_RING_PACKED | \
> + 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
> + 1ULL << VIRTIO_F_ORDER_PLATFORM)
> +
> +int
> +crypto_virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
> + int queue_size, int server)
> +{
> + uint64_t backend_features;
> +
> + pthread_mutex_init(&dev->mutex, NULL);
> + strlcpy(dev->path, path, PATH_MAX);
> +
> + dev->started = 0;
> + dev->queue_pairs = 1; /* mq disabled by default */
> + dev->max_queue_pairs = queues; /* initialize to user requested value for kernel backend */
> + dev->queue_size = queue_size;
> + dev->is_server = server;
> + dev->frontend_features = 0;
> + dev->unsupported_features = 0;
> + dev->backend_type = VIRTIO_USER_BACKEND_VHOST_VDPA;
> + dev->hw.modern = 1;
> +
> + if (virtio_user_dev_setup(dev) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) backend set up fails", dev->path);
> + return -1;
> + }
> +
> + if (dev->ops->set_owner(dev) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to set backend owner", dev->path);
> + goto destroy;
> + }
> +
> + if (dev->ops->get_backend_features(&backend_features) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to get backend features", dev->path);
> + goto destroy;
> + }
> +
> + dev->unsupported_features = ~(VIRTIO_USER_SUPPORTED_FEATURES | backend_features);
> +
> + if (dev->ops->get_features(dev, &dev->device_features) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to get device features", dev->path);
> + goto destroy;
> + }
> +
> + if (virtio_user_dev_init_max_queue_pairs(dev, queues)) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to get max queue pairs", dev->path);
> + goto destroy;
> + }
> +
> + if (virtio_user_dev_init_cipher_services(dev)) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to get cipher services", dev->path);
> + goto destroy;
> + }
> +
> + dev->frontend_features &= ~dev->unsupported_features;
> + dev->device_features &= ~dev->unsupported_features;
> +
> + if (virtio_user_alloc_vrings(dev) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to allocate vring metadata", dev->path);
> + goto destroy;
> + }
> +
> + if (virtio_user_dev_init_notify(dev) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
> + goto free_vrings;
> + }
> +
> + if (rte_mem_event_callback_register(VIRTIO_USER_MEM_EVENT_CLB_NAME,
> + virtio_user_mem_event_cb, dev)) {
> + if (rte_errno != ENOTSUP) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to register mem event callback",
> + dev->path);
> + goto notify_uninit;
> + }
> + }
> +
> + return 0;
> +
> +notify_uninit:
> + virtio_user_dev_uninit_notify(dev);
> +free_vrings:
> + virtio_user_free_vrings(dev);
> +destroy:
> + dev->ops->destroy(dev);
> +
> + return -1;
> +}
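
The three feature masks above interact as follows; a worked sketch with
hypothetical feature values, mirroring the code in
crypto_virtio_user_dev_init():

    /* Anything neither the PMD nor the backend supports is unsupported;
     * it is then stripped from the device offering before negotiation.
     */
    uint64_t supported = (1ULL << VIRTIO_F_VERSION_1) | (1ULL << VIRTIO_F_RING_PACKED);
    uint64_t backend_features = 1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2;
    uint64_t unsupported = ~(supported | backend_features);
    uint64_t device_features = 0xFFFF; /* hypothetical device offer */

    device_features &= ~unsupported;   /* only mutually supported bits survive */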
> +
> +void
> +crypto_virtio_user_dev_uninit(struct virtio_user_dev *dev)
> +{
> + crypto_virtio_user_stop_device(dev);
> +
> + rte_mem_event_callback_unregister(VIRTIO_USER_MEM_EVENT_CLB_NAME, dev);
> +
> + virtio_user_dev_uninit_notify(dev);
> +
> + virtio_user_free_vrings(dev);
> +
> + if (dev->is_server)
> + unlink(dev->path);
> +
> + dev->ops->destroy(dev);
> +}
> +
> +#define CVQ_MAX_DATA_DESCS 32
> +
> +static inline void *
> +virtio_user_iova2virt(struct virtio_user_dev *dev __rte_unused, rte_iova_t iova)
> +{
> + if (rte_eal_iova_mode() == RTE_IOVA_VA)
> + return (void *)(uintptr_t)iova;
> + else
> + return rte_mem_iova2virt(iova);
> +}
> +
> +static inline int
> +desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
> +{
> + uint16_t flags = rte_atomic_load_explicit(&desc->flags, rte_memory_order_acquire);
> +
> + return wrap_counter == !!(flags & VRING_PACKED_DESC_F_AVAIL) &&
> + wrap_counter != !!(flags & VRING_PACKED_DESC_F_USED);
> +}
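
desc_is_avail() encodes the packed-ring rule: a descriptor is available
iff its AVAIL flag matches the current wrap counter and its USED flag does
not. A couple of illustrative checks, assuming the definitions above are
in scope:

    #include <assert.h>

    struct vring_packed_desc d = { .flags = VRING_PACKED_DESC_F_AVAIL };

    /* wrap counter true: AVAIL set, USED clear -> available */
    assert(desc_is_avail(&d, true));

    /* device marks it used: AVAIL and USED both set -> no longer available */
    d.flags |= VRING_PACKED_DESC_F_USED;
    assert(!desc_is_avail(&d, true));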
> +
> +int
> +crypto_virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status)
> +{
> + int ret;
> +
> + pthread_mutex_lock(&dev->mutex);
> + dev->status = status;
> + ret = dev->ops->set_status(dev, status);
> + if (ret && ret != -ENOTSUP)
> + PMD_INIT_LOG(ERR, "(%s) Failed to set backend status", dev->path);
> +
> + pthread_mutex_unlock(&dev->mutex);
> + return ret;
> +}
> +
> +int
> +crypto_virtio_user_dev_update_status(struct virtio_user_dev *dev)
> +{
> + int ret;
> + uint8_t status;
> +
> + pthread_mutex_lock(&dev->mutex);
> +
> + ret = dev->ops->get_status(dev, &status);
> + if (!ret) {
> + dev->status = status;
> + PMD_INIT_LOG(DEBUG, "Updated Device Status(0x%08x):"
> + "\t-RESET: %u "
> + "\t-ACKNOWLEDGE: %u "
> + "\t-DRIVER: %u "
> + "\t-DRIVER_OK: %u "
> + "\t-FEATURES_OK: %u "
> + "\t-DEVICE_NEED_RESET: %u "
> + "\t-FAILED: %u",
> + dev->status,
> + (dev->status == VIRTIO_CONFIG_STATUS_RESET),
> + !!(dev->status & VIRTIO_CONFIG_STATUS_ACK),
> + !!(dev->status & VIRTIO_CONFIG_STATUS_DRIVER),
> + !!(dev->status & VIRTIO_CONFIG_STATUS_DRIVER_OK),
> + !!(dev->status & VIRTIO_CONFIG_STATUS_FEATURES_OK),
> + !!(dev->status & VIRTIO_CONFIG_STATUS_DEV_NEED_RESET),
> + !!(dev->status & VIRTIO_CONFIG_STATUS_FAILED));
> + } else if (ret != -ENOTSUP) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to get backend status", dev->path);
> + }
> +
> + pthread_mutex_unlock(&dev->mutex);
> + return ret;
> +}
> +
> +int
> +crypto_virtio_user_dev_update_link_state(struct virtio_user_dev *dev)
> +{
> + if (dev->ops->update_link_state)
> + return dev->ops->update_link_state(dev);
> +
> + return 0;
> +}
> diff --git a/drivers/crypto/virtio/virtio_user/virtio_user_dev.h b/drivers/crypto/virtio/virtio_user/virtio_user_dev.h
> new file mode 100644
> index 0000000000..ef648fd14b
> --- /dev/null
> +++ b/drivers/crypto/virtio/virtio_user/virtio_user_dev.h
> @@ -0,0 +1,88 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2025 Marvell.
> + */
> +
> +#ifndef _VIRTIO_USER_DEV_H
> +#define _VIRTIO_USER_DEV_H
> +
> +#include <limits.h>
> +#include <stdbool.h>
> +
> +#include "../virtio_pci.h"
> +#include "../virtio_ring.h"
> +
> +extern struct virtio_user_backend_ops virtio_crypto_ops_vdpa;
> +
> +enum virtio_user_backend_type {
> + VIRTIO_USER_BACKEND_UNKNOWN,
> + VIRTIO_USER_BACKEND_VHOST_USER,
> + VIRTIO_USER_BACKEND_VHOST_VDPA,
> +};
> +
> +struct virtio_user_queue {
> + uint16_t used_idx;
> + bool avail_wrap_counter;
> + bool used_wrap_counter;
> +};
> +
> +struct virtio_user_dev {
> + union {
> + struct virtio_crypto_hw hw;
> + uint8_t dummy[256];
> + };
> +
> + void *backend_data;
> + uint16_t **notify_area;
> + char path[PATH_MAX];
> + bool hw_cvq;
> + uint16_t max_queue_pairs;
> + uint64_t device_features; /* supported features by device */
> + bool *qp_enabled;
> +
> + enum virtio_user_backend_type backend_type;
> + bool is_server; /* server or client mode */
> +
> + int *callfds;
> + int *kickfds;
> + uint16_t queue_pairs;
> + uint32_t queue_size;
> + uint64_t features; /* features negotiated with the driver,
> + * kept in sync with the device
> + */
> + uint64_t frontend_features; /* enabled frontend features */
> + uint64_t unsupported_features; /* unsupported features mask */
> + uint8_t status;
> + uint32_t crypto_status;
> + uint32_t crypto_services;
> + uint64_t cipher_algo;
> + uint32_t hash_algo;
> + uint64_t auth_algo;
> + uint32_t aead_algo;
> + uint32_t akcipher_algo;
> +
> + union {
> + void *ptr;
> + struct vring *split;
> + struct vring_packed *packed;
> + } vrings;
> +
> + struct virtio_user_queue *packed_queues;
> +
> + struct virtio_user_backend_ops *ops;
> + pthread_mutex_t mutex;
> + bool started;
> +
> + struct virtqueue *scvq;
> +};
> +
> +int crypto_virtio_user_dev_set_features(struct virtio_user_dev *dev);
> +int crypto_virtio_user_start_device(struct virtio_user_dev *dev);
> +int crypto_virtio_user_stop_device(struct virtio_user_dev *dev);
> +int crypto_virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
> + int queue_size, int server);
> +void crypto_virtio_user_dev_uninit(struct virtio_user_dev *dev);
> +int crypto_virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status);
> +int crypto_virtio_user_dev_update_status(struct virtio_user_dev *dev);
> +int crypto_virtio_user_dev_update_link_state(struct virtio_user_dev *dev);
> +extern const char * const crypto_virtio_user_backend_strings[];
> +#endif
> diff --git a/drivers/crypto/virtio/virtio_user_cryptodev.c b/drivers/crypto/virtio/virtio_user_cryptodev.c
> new file mode 100644
> index 0000000000..606639b872
> --- /dev/null
> +++ b/drivers/crypto/virtio/virtio_user_cryptodev.c
> @@ -0,0 +1,587 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2025 Marvell
> + */
> +
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <sys/types.h>
> +#include <unistd.h>
> +#include <fcntl.h>
> +
> +#include <rte_malloc.h>
> +#include <rte_kvargs.h>
> +#include <bus_vdev_driver.h>
> +#include <rte_cryptodev.h>
> +#include <cryptodev_pmd.h>
> +#include <rte_alarm.h>
> +#include <rte_cycles.h>
> +#include <rte_io.h>
> +
> +#include "virtio_user/virtio_user_dev.h"
> +#include "virtio_user/vhost.h"
> +#include "virtio_user/vhost_logs.h"
> +#include "virtio_cryptodev.h"
> +#include "virtio_logs.h"
> +#include "virtio_pci.h"
> +#include "virtqueue.h"
> +
> +#define virtio_user_get_dev(hwp) container_of(hwp, struct virtio_user_dev, hw)
> +
> +static void
> +virtio_user_read_dev_config(struct virtio_crypto_hw *hw, size_t offset,
> + void *dst, int length __rte_unused)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> +
> + if (offset == offsetof(struct virtio_crypto_config, status)) {
> + crypto_virtio_user_dev_update_link_state(dev);
> + *(uint32_t *)dst = dev->crypto_status;
> + } else if (offset == offsetof(struct virtio_crypto_config, max_dataqueues))
> + *(uint16_t *)dst = dev->max_queue_pairs;
> + else if (offset == offsetof(struct virtio_crypto_config, crypto_services))
> + *(uint32_t *)dst = dev->crypto_services;
> + else if (offset == offsetof(struct virtio_crypto_config, cipher_algo_l))
> + *(uint32_t *)dst = dev->cipher_algo & 0xFFFFFFFF;
> + else if (offset == offsetof(struct virtio_crypto_config, cipher_algo_h))
> + *(uint32_t *)dst = dev->cipher_algo >> 32;
> + else if (offset == offsetof(struct virtio_crypto_config, hash_algo))
> + *(uint32_t *)dst = dev->hash_algo;
> + else if (offset == offsetof(struct virtio_crypto_config, mac_algo_l))
> + *(uint32_t *)dst = dev->auth_algo & 0xFFFFFFFF;
> + else if (offset == offsetof(struct virtio_crypto_config, mac_algo_h))
> + *(uint32_t *)dst = dev->auth_algo >> 32;
> + else if (offset == offsetof(struct virtio_crypto_config, aead_algo))
> + *(uint32_t *)dst = dev->aead_algo;
> + else if (offset == offsetof(struct virtio_crypto_config, akcipher_algo))
> + *(uint32_t *)dst = dev->akcipher_algo;
> +}
> +
> +static void
> +virtio_user_write_dev_config(struct virtio_crypto_hw *hw, size_t offset,
> + const void *src, int length)
> +{
> + RTE_SET_USED(hw);
> + RTE_SET_USED(src);
> +
> + PMD_DRV_LOG(ERR, "not supported offset=%zu, len=%d",
> + offset, length);
> +}
> +
> +static void
> +virtio_user_reset(struct virtio_crypto_hw *hw)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> +
> + if (dev->status & VIRTIO_CONFIG_STATUS_DRIVER_OK)
> + crypto_virtio_user_stop_device(dev);
> +}
> +
> +static void
> +virtio_user_set_status(struct virtio_crypto_hw *hw, uint8_t status)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> + uint8_t old_status = dev->status;
> +
> + if (status & VIRTIO_CONFIG_STATUS_FEATURES_OK &&
> + ~old_status & VIRTIO_CONFIG_STATUS_FEATURES_OK) {
> + crypto_virtio_user_dev_set_features(dev);
> + /* Feature negotiation should only be done at probe time,
> + * so skip any further requests here.
> + */
> + dev->status |= VIRTIO_CONFIG_STATUS_FEATURES_OK;
> + }
> +
> + if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK) {
> + if (crypto_virtio_user_start_device(dev)) {
> + crypto_virtio_user_dev_update_status(dev);
> + return;
> + }
> + } else if (status == VIRTIO_CONFIG_STATUS_RESET) {
> + virtio_user_reset(hw);
> + }
> +
> + crypto_virtio_user_dev_set_status(dev, status);
> + if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK && dev->scvq) {
> + if (dev->ops->cvq_enable(dev, 1) < 0) {
> + PMD_INIT_LOG(ERR, "(%s) Failed to start ctrlq", dev->path);
> + crypto_virtio_user_dev_update_status(dev);
> + return;
> + }
> + }
> +}
> +
> +static uint8_t
> +virtio_user_get_status(struct virtio_crypto_hw *hw)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> +
> + crypto_virtio_user_dev_update_status(dev);
> +
> + return dev->status;
> +}
> +
> +#define VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES \
> + (1ULL << VIRTIO_CRYPTO_SERVICE_CIPHER | \
> + 1ULL << VIRTIO_CRYPTO_SERVICE_AKCIPHER | \
> + 1ULL << VIRTIO_F_VERSION_1 | \
> + 1ULL << VIRTIO_F_IN_ORDER | \
> + 1ULL << VIRTIO_F_RING_PACKED | \
> + 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
> + 1ULL << VIRTIO_RING_F_INDIRECT_DESC | \
> + 1ULL << VIRTIO_F_ORDER_PLATFORM)
> +
> +static uint64_t
> +virtio_user_get_features(struct virtio_crypto_hw *hw)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> +
> + /* unmask feature bits defined in vhost user protocol */
> + return (dev->device_features | dev->frontend_features) &
> + VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES;
> +}
> +
> +static void
> +virtio_user_set_features(struct virtio_crypto_hw *hw, uint64_t features)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> +
> + dev->features = features & (dev->device_features | dev->frontend_features);
> +}
> +
> +static uint8_t
> +virtio_user_get_isr(struct virtio_crypto_hw *hw __rte_unused)
> +{
> + /* rxq interrupts and the config interrupt are separated in
> + * virtio-user; here we only report config changes.
> + */
> + return VIRTIO_PCI_CAP_ISR_CFG;
> +}
> +
> +static uint16_t
> +virtio_user_set_config_irq(struct virtio_crypto_hw *hw __rte_unused,
> + uint16_t vec __rte_unused)
> +{
> + return 0;
> +}
> +
> +static uint16_t
> +virtio_user_set_queue_irq(struct virtio_crypto_hw *hw __rte_unused,
> + struct virtqueue *vq __rte_unused,
> + uint16_t vec)
> +{
> + /* pretend we have done that */
> + return vec;
> +}
> +
> +/* Get the queue size, i.e. the number of descriptors, of a specified
> + * queue. Unlike VHOST_USER_GET_QUEUE_NUM, which is used to get the
> + * maximum number of supported queues.
> + */
> +static uint16_t
> +virtio_user_get_queue_num(struct virtio_crypto_hw *hw, uint16_t queue_id __rte_unused)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> +
> + /* Currently, each queue has the same queue size */
> + return dev->queue_size;
> +}
> +
> +static void
> +virtio_user_setup_queue_packed(struct virtqueue *vq,
> + struct virtio_user_dev *dev)
> +{
> + uint16_t queue_idx = vq->vq_queue_index;
> + struct vring_packed *vring;
> + uint64_t desc_addr;
> + uint64_t avail_addr;
> + uint64_t used_addr;
> + uint16_t i;
> +
> + vring = &dev->vrings.packed[queue_idx];
> + desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
> + avail_addr = desc_addr + vq->vq_nentries *
> + sizeof(struct vring_packed_desc);
> + used_addr = RTE_ALIGN_CEIL(avail_addr +
> + sizeof(struct vring_packed_desc_event),
> + VIRTIO_VRING_ALIGN);
> + vring->num = vq->vq_nentries;
> + vring->desc_iova = vq->vq_ring_mem;
> + vring->desc = (void *)(uintptr_t)desc_addr;
> + vring->driver = (void *)(uintptr_t)avail_addr;
> + vring->device = (void *)(uintptr_t)used_addr;
> + dev->packed_queues[queue_idx].avail_wrap_counter = true;
> + dev->packed_queues[queue_idx].used_wrap_counter = true;
> + dev->packed_queues[queue_idx].used_idx = 0;
> +
> + for (i = 0; i < vring->num; i++)
> + vring->desc[i].flags = 0;
> +}
> +
> +static void
> +virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
> +{
> + uint16_t queue_idx = vq->vq_queue_index;
> + uint64_t desc_addr, avail_addr, used_addr;
> +
> + desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
> + avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
> + used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
> + ring[vq->vq_nentries]),
> + VIRTIO_VRING_ALIGN);
> +
> + dev->vrings.split[queue_idx].num = vq->vq_nentries;
> + dev->vrings.split[queue_idx].desc_iova = vq->vq_ring_mem;
> + dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
> + dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
> + dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
> +}
> +
> +static int
> +virtio_user_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> +
> + if (vtpci_with_packed_queue(hw))
> + virtio_user_setup_queue_packed(vq, dev);
> + else
> + virtio_user_setup_queue_split(vq, dev);
> +
> + if (dev->notify_area)
> + vq->notify_addr = dev->notify_area[vq->vq_queue_index];
> +
> + if (virtcrypto_cq_to_vq(hw->cvq) == vq)
> + dev->scvq = virtcrypto_cq_to_vq(hw->cvq);
> +
> + return 0;
> +}
> +
> +static void
> +virtio_user_del_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
> +{
> + /* For legacy devices, writing 0 to the VIRTIO_PCI_QUEUE_PFN port makes
> + * QEMU stop the ioeventfds and reset the device status.
> + * For modern devices, the queue desc/avail/used addresses in the PCI
> + * BAR are set to 0, with no further behavior observed in QEMU.
> + *
> + * Here we only care about what information to deliver to vhost-user
> + * or vhost-kernel, so we just close the ioeventfd for now.
> + */
> +
> + RTE_SET_USED(hw);
> + RTE_SET_USED(vq);
> +}
> +
> +static void
> +virtio_user_notify_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
> +{
> + struct virtio_user_dev *dev = virtio_user_get_dev(hw);
> + uint64_t notify_data = 1;
> +
> + if (!dev->notify_area) {
> + if (write(dev->kickfds[vq->vq_queue_index], &notify_data,
> + sizeof(notify_data)) < 0)
> + PMD_DRV_LOG(ERR, "failed to kick backend: %s",
> + strerror(errno));
> + return;
> + } else if (!vtpci_with_feature(hw, VIRTIO_F_NOTIFICATION_DATA)) {
> + rte_write16(vq->vq_queue_index, vq->notify_addr);
> + return;
> + }
> +
> + if (vtpci_with_packed_queue(hw)) {
> + /* Bit[0:15]: vq queue index
> + * Bit[16:30]: avail index
> + * Bit[31]: avail wrap counter
> + */
> + notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags &
> + VRING_PACKED_DESC_F_AVAIL)) << 31) |
> + ((uint32_t)vq->vq_avail_idx << 16) |
> + vq->vq_queue_index;
> + } else {
> + /* Bit[0:15]: vq queue index
> + * Bit[16:31]: avail index
> + */
> + notify_data = ((uint32_t)vq->vq_avail_idx << 16) |
> + vq->vq_queue_index;
> + }
> + rte_write32(notify_data, vq->notify_addr);
> +}
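
A worked example of the notification encoding above (values hypothetical):
for a packed queue with index 3, avail index 10 and the avail wrap counter
set, the 32-bit doorbell value is:

    /* (wrap << 31) | (avail_idx << 16) | queue_idx */
    uint32_t notify_data = (1U << 31) | (10U << 16) | 3; /* 0x800A0003 */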
> +
> +const struct virtio_pci_ops crypto_virtio_user_ops = {
> + .read_dev_cfg = virtio_user_read_dev_config,
> + .write_dev_cfg = virtio_user_write_dev_config,
> + .reset = virtio_user_reset,
> + .get_status = virtio_user_get_status,
> + .set_status = virtio_user_set_status,
> + .get_features = virtio_user_get_features,
> + .set_features = virtio_user_set_features,
> + .get_isr = virtio_user_get_isr,
> + .set_config_irq = virtio_user_set_config_irq,
> + .set_queue_irq = virtio_user_set_queue_irq,
> + .get_queue_num = virtio_user_get_queue_num,
> + .setup_queue = virtio_user_setup_queue,
> + .del_queue = virtio_user_del_queue,
> + .notify_queue = virtio_user_notify_queue,
> +};
> +
> +static const char * const valid_args[] = {
> +#define VIRTIO_USER_ARG_QUEUES_NUM "queues"
> + VIRTIO_USER_ARG_QUEUES_NUM,
> +#define VIRTIO_USER_ARG_QUEUE_SIZE "queue_size"
> + VIRTIO_USER_ARG_QUEUE_SIZE,
> +#define VIRTIO_USER_ARG_PATH "path"
> + VIRTIO_USER_ARG_PATH,
> +#define VIRTIO_USER_ARG_SERVER_MODE "server"
> + VIRTIO_USER_ARG_SERVER_MODE,
> + NULL
> +};
> +
> +#define VIRTIO_USER_DEF_Q_NUM 1
> +#define VIRTIO_USER_DEF_Q_SZ 256
> +#define VIRTIO_USER_DEF_SERVER_MODE 0
> +
> +static int
> +get_string_arg(const char *key __rte_unused,
> + const char *value, void *extra_args)
> +{
> + if (!value || !extra_args)
> + return -EINVAL;
> +
> + *(char **)extra_args = strdup(value);
> +
> + if (!*(char **)extra_args)
> + return -ENOMEM;
> +
> + return 0;
> +}
> +
> +static int
> +get_integer_arg(const char *key __rte_unused,
> + const char *value, void *extra_args)
> +{
> + uint64_t integer = 0;
> + if (!value || !extra_args)
> + return -EINVAL;
> + errno = 0;
> + integer = strtoull(value, NULL, 0);
> + /* extra_args keeps its default value; it is replaced
> + * only on successful parsing of the 'value' arg
> + */
> + if (errno == 0)
> + *(uint64_t *)extra_args = integer;
> + return -errno;
> +}
> +
> +static struct rte_cryptodev *
> +virtio_user_cryptodev_alloc(struct rte_vdev_device *vdev)
> +{
> + struct rte_cryptodev_pmd_init_params init_params = {
> + .name = "",
> + .private_data_size = sizeof(struct virtio_user_dev),
> + };
> + struct rte_cryptodev_data *data;
> + struct rte_cryptodev *cryptodev;
> + struct virtio_user_dev *dev;
> + struct virtio_crypto_hw *hw;
> +
> + init_params.socket_id = vdev->device.numa_node;
> + init_params.private_data_size = sizeof(struct virtio_user_dev);
> + cryptodev = rte_cryptodev_pmd_create(vdev->device.name, &vdev->device, &init_params);
> + if (cryptodev == NULL) {
> + PMD_INIT_LOG(ERR, "failed to create cryptodev vdev");
> + return NULL;
> + }
> +
> + data = cryptodev->data;
> + dev = data->dev_private;
> + hw = &dev->hw;
> +
> + hw->dev_id = data->dev_id;
> + VTPCI_OPS(hw) = &crypto_virtio_user_ops;
> +
> + return cryptodev;
> +}
> +
> +static void
> +virtio_user_cryptodev_free(struct rte_cryptodev *cryptodev)
> +{
> + rte_cryptodev_pmd_destroy(cryptodev);
> +}
> +
> +static int
> +virtio_user_pmd_probe(struct rte_vdev_device *vdev)
> +{
> + uint64_t server_mode = VIRTIO_USER_DEF_SERVER_MODE;
> + uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
> + uint64_t queues = VIRTIO_USER_DEF_Q_NUM;
> + struct rte_cryptodev *cryptodev = NULL;
> + struct rte_kvargs *kvlist = NULL;
> + struct virtio_user_dev *dev;
> + char *path = NULL;
> + int ret = -1; /* covers all error paths that jump to end */
> +
> + kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
> +
> + if (!kvlist) {
> + PMD_INIT_LOG(ERR, "error when parsing param");
> + goto end;
> + }
> +
> + if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_PATH) == 1) {
> + if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_PATH,
> + &get_string_arg, &path) < 0) {
> + PMD_INIT_LOG(ERR, "error to parse %s",
> + VIRTIO_USER_ARG_PATH);
> + goto end;
> + }
> + } else {
> + PMD_INIT_LOG(ERR, "arg %s is mandatory for virtio_user",
> + VIRTIO_USER_ARG_PATH);
> + goto end;
> + }
> +
> + if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUES_NUM) == 1) {
> + if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUES_NUM,
> + &get_integer_arg, &queues) < 0) {
> + PMD_INIT_LOG(ERR, "error to parse %s",
> + VIRTIO_USER_ARG_QUEUES_NUM);
> + goto end;
> + }
> + }
> +
> + if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE) == 1) {
> + if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE,
> + &get_integer_arg, &queue_size) < 0) {
> + PMD_INIT_LOG(ERR, "error to parse %s",
> + VIRTIO_USER_ARG_QUEUE_SIZE);
> + goto end;
> + }
> + }
> +
> + cryptodev = virtio_user_cryptodev_alloc(vdev);
> + if (!cryptodev) {
> + PMD_INIT_LOG(ERR, "virtio_user fails to alloc device");
> + goto end;
> + }
> +
> + dev = cryptodev->data->dev_private;
> + if (crypto_virtio_user_dev_init(dev, path, queues, queue_size,
> + server_mode) < 0) {
> + PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
> + virtio_user_cryptodev_free(cryptodev);
> + goto end;
> + }
> +
> + if (crypto_virtio_dev_init(cryptodev, VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES,
> + NULL) < 0) {
> + PMD_INIT_LOG(ERR, "crypto_virtio_dev_init fails");
> + crypto_virtio_user_dev_uninit(dev);
> + virtio_user_cryptodev_free(cryptodev);
> + goto end;
> + }
> +
> + rte_cryptodev_pmd_probing_finish(cryptodev);
> +
> + ret = 0;
> +end:
> + rte_kvargs_free(kvlist);
> + free(path);
> + return ret;
> +}
> +
> +static int
> +virtio_user_pmd_remove(struct rte_vdev_device *vdev)
> +{
> + struct rte_cryptodev *cryptodev;
> + const char *name;
> + int devid;
> +
> + if (!vdev)
> + return -EINVAL;
> +
> + name = rte_vdev_device_name(vdev);
> + PMD_DRV_LOG(INFO, "Removing %s", name);
> +
> + devid = rte_cryptodev_get_dev_id(name);
> + if (devid < 0)
> + return -EINVAL;
> +
> + rte_cryptodev_stop(devid);
> +
> + cryptodev = rte_cryptodev_pmd_get_named_dev(name);
> + if (cryptodev == NULL)
> + return -ENODEV;
> +
> + if (rte_cryptodev_pmd_destroy(cryptodev) < 0) {
> + PMD_DRV_LOG(ERR, "Failed to remove %s", name);
> + return -EFAULT;
> + }
> +
> + return 0;
> +}
> +
> +static int virtio_user_pmd_dma_map(struct rte_vdev_device *vdev, void *addr,
> + uint64_t iova, size_t len)
> +{
> + struct rte_cryptodev *cryptodev;
> + struct virtio_user_dev *dev;
> + const char *name;
> +
> + if (!vdev)
> + return -EINVAL;
> +
> + name = rte_vdev_device_name(vdev);
> + cryptodev = rte_cryptodev_pmd_get_named_dev(name);
> + if (cryptodev == NULL)
> + return -EINVAL;
> +
> + dev = cryptodev->data->dev_private;
> +
> + if (dev->ops->dma_map)
> + return dev->ops->dma_map(dev, addr, iova, len);
> +
> + return 0;
> +}
> +
> +static int virtio_user_pmd_dma_unmap(struct rte_vdev_device *vdev, void *addr,
> + uint64_t iova, size_t len)
> +{
> + struct rte_cryptodev *cryptodev;
> + struct virtio_user_dev *dev;
> + const char *name;
> +
> + if (!vdev)
> + return -EINVAL;
> +
> + name = rte_vdev_device_name(vdev);
> + cryptodev = rte_cryptodev_pmd_get_named_dev(name);
> + if (cryptodev == NULL)
> + return -EINVAL;
> +
> + dev = cryptodev->data->dev_private;
> +
> + if (dev->ops->dma_unmap)
> + return dev->ops->dma_unmap(dev, addr, iova, len);
> +
> + return 0;
> +}
> +
> +static struct rte_vdev_driver virtio_user_driver = {
> + .probe = virtio_user_pmd_probe,
> + .remove = virtio_user_pmd_remove,
> + .dma_map = virtio_user_pmd_dma_map,
> + .dma_unmap = virtio_user_pmd_dma_unmap,
> +};
> +
> +static struct cryptodev_driver virtio_crypto_drv;
> +
> +RTE_PMD_REGISTER_VDEV(crypto_virtio_user, virtio_user_driver);
> +RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv,
> + virtio_user_driver.driver,
> + cryptodev_virtio_driver_id);
> +RTE_PMD_REGISTER_ALIAS(crypto_virtio_user, crypto_virtio);
> +RTE_PMD_REGISTER_PARAM_STRING(crypto_virtio_user,
> + "path=<path> "
> + "queues=<int> "
> + "queue_size=<int>");
^ permalink raw reply [flat|nested] 58+ messages in thread
* RE: [EXTERNAL] Re: [v2 1/4] common/virtio: move vDPA to common directory
2025-02-06 9:40 ` Maxime Coquelin
@ 2025-02-06 14:21 ` Gowrishankar Muthukrishnan
0 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-06 14:21 UTC (permalink / raw)
To: Maxime Coquelin, dev, Akhil Goyal, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: Jerin Jacob, Anoob Joseph, David Marchand
Hi Maxime,
> On 1/7/25 7:44 PM, Gowrishankar Muthukrishnan wrote:
> > Move vhost-vdpa backend implementation into common folder.
>
> If we decided to have a common base for Virtio devices, which I think is a good
> idea to avoid needless duplication, we should do a deeper refactoring by
> sharing all transport layers: PCI and Virtio-user.
>
Yes, but our initial proposal in this RFC is to start with vDPA first.
> I understand it is not realistic to do this for the v25.03 release, so in the meantime
> I would prefer you duplicate what you need from the Vhost-vDPA implementation rather than
> have a half-baked solution.
>
Ack.
Thanks,
Gowrishankar
> Maintainers, what do you think?
>
> Thanks,
> Maxime
>
> > Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> > ---
> > Depends-on: patch-149672 ("vhost: include AKCIPHER algorithms in
> > crypto_config")
> > Depends-on: patch-148913 ("crypto/virtio: remove redundant crypto
> > queue free")
> > Depends-on: series-34293 ("crypto/virtio: add packed ring support")
> > Depends-on: series-34291 ("crypto/virtio: add RSA support")
> >
> >
> > drivers/common/virtio/meson.build | 13 +++++++++
> > drivers/common/virtio/version.map | 8 ++++++
> > .../virtio/virtio_user/vhost.h | 4 ---
> > .../common/virtio/virtio_user/vhost_logs.h | 15 ++++++++++
> > .../virtio/virtio_user/vhost_vdpa.c | 28 ++++++++++++++++++-
> > drivers/crypto/virtio/meson.build | 2 +-
> > drivers/meson.build | 1 +
> > drivers/net/virtio/meson.build | 3 +-
> > drivers/net/virtio/virtio_user/vhost_kernel.c | 3 +-
> > drivers/net/virtio/virtio_user/vhost_user.c | 3 +-
> > .../net/virtio/virtio_user/virtio_user_dev.c | 5 ++--
> > .../net/virtio/virtio_user/virtio_user_dev.h | 24 +++++++++-------
> > 12 files changed, 87 insertions(+), 22 deletions(-)
> > create mode 100644 drivers/common/virtio/meson.build
> > create mode 100644 drivers/common/virtio/version.map
> > rename drivers/{net => common}/virtio/virtio_user/vhost.h (97%)
> > create mode 100644 drivers/common/virtio/virtio_user/vhost_logs.h
> > rename drivers/{net => common}/virtio/virtio_user/vhost_vdpa.c (97%)
> >
> > diff --git a/drivers/common/virtio/meson.build
> > b/drivers/common/virtio/meson.build
> > new file mode 100644
> > index 0000000000..a19db9e088
> > --- /dev/null
> > +++ b/drivers/common/virtio/meson.build
> > @@ -0,0 +1,13 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2025 Marvell
> > +
> > +if is_windows
> > + build = false
> > + reason = 'not supported on Windows'
> > + subdir_done()
> > +endif
> > +
> > +if is_linux
> > + sources += files('virtio_user/vhost_vdpa.c')
> > + deps += ['bus_vdev']
> > +endif
> > diff --git a/drivers/common/virtio/version.map
> > b/drivers/common/virtio/version.map
> > new file mode 100644
> > index 0000000000..fb98a0ab2e
> > --- /dev/null
> > +++ b/drivers/common/virtio/version.map
> > @@ -0,0 +1,8 @@
> > +INTERNAL {
> > + global:
> > +
> > + virtio_ops_vdpa;
> > + vhost_logtype_driver;
> > +
> > + local: *;
> > +};
> > diff --git a/drivers/net/virtio/virtio_user/vhost.h
> > b/drivers/common/virtio/virtio_user/vhost.h
> > similarity index 97%
> > rename from drivers/net/virtio/virtio_user/vhost.h
> > rename to drivers/common/virtio/virtio_user/vhost.h
> > index eee3a4bc47..adf6551681 100644
> > --- a/drivers/net/virtio/virtio_user/vhost.h
> > +++ b/drivers/common/virtio/virtio_user/vhost.h
> > @@ -11,10 +11,6 @@
> >
> > #include <rte_errno.h>
> >
> > -#include "../virtio.h"
> > -#include "../virtio_logs.h"
> > -#include "../virtqueue.h"
> > -
> > struct vhost_vring_state {
> > unsigned int index;
> > unsigned int num;
> > diff --git a/drivers/common/virtio/virtio_user/vhost_logs.h
> > b/drivers/common/virtio/virtio_user/vhost_logs.h
> > new file mode 100644
> > index 0000000000..653d4d0b5e
> > --- /dev/null
> > +++ b/drivers/common/virtio/virtio_user/vhost_logs.h
> > @@ -0,0 +1,15 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2025 Marvell
> > + */
> > +
> > +#ifndef _VHOST_LOGS_H_
> > +#define _VHOST_LOGS_H_
> > +
> > +#include <rte_log.h>
> > +
> > +extern int vhost_logtype_driver;
> > +#define RTE_LOGTYPE_VHOST_DRIVER vhost_logtype_driver
> > +#define PMD_DRV_LOG(level, ...) \
> > +	RTE_LOG_LINE_PREFIX(level, VHOST_DRIVER, "%s(): ", __func__, __VA_ARGS__)
> > +
> > +#endif /* _VHOST_LOGS_H_ */
> > diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c
> > b/drivers/common/virtio/virtio_user/vhost_vdpa.c
> > similarity index 97%
> > rename from drivers/net/virtio/virtio_user/vhost_vdpa.c
> > rename to drivers/common/virtio/virtio_user/vhost_vdpa.c
> > index bc3e2a9af5..af5c4cbf33 100644
> > --- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
> > +++ b/drivers/common/virtio/virtio_user/vhost_vdpa.c
> > @@ -9,11 +9,12 @@
> > #include <fcntl.h>
> > #include <stdlib.h>
> > #include <unistd.h>
> > +#include <inttypes.h>
> >
> > #include <rte_memory.h>
> >
> > #include "vhost.h"
> > -#include "virtio_user_dev.h"
> > +#include "vhost_logs.h"
> >
> > struct vhost_vdpa_data {
> > int vhostfd;
> > @@ -100,6 +101,29 @@ vhost_vdpa_ioctl(int fd, uint64_t request, void *arg)
> > return 0;
> > }
> >
> > +struct virtio_hw {
> > + struct virtqueue **vqs;
> > +};
> > +
> > +struct virtio_user_dev {
> > + union {
> > + struct virtio_hw hw;
> > + uint8_t dummy[256];
> > + };
> > +
> > + void *backend_data;
> > + uint16_t **notify_area;
> > + char path[PATH_MAX];
> > + bool hw_cvq;
> > + uint16_t max_queue_pairs;
> > + uint64_t device_features;
> > + bool *qp_enabled;
> > +};
> > +
> > +#define VIRTIO_NET_F_CTRL_VQ 17
> > +#define VIRTIO_F_IOMMU_PLATFORM 33
> > +#define VIRTIO_ID_NETWORK 0x01
> > +
> > static int
> > vhost_vdpa_set_owner(struct virtio_user_dev *dev)
> > {
> > @@ -715,3 +739,5 @@ struct virtio_user_backend_ops virtio_ops_vdpa = {
> > .map_notification_area = vhost_vdpa_map_notification_area,
> > .unmap_notification_area = vhost_vdpa_unmap_notification_area,
> > };
> > +
> > +RTE_LOG_REGISTER_SUFFIX(vhost_logtype_driver, driver, NOTICE);
> > diff --git a/drivers/crypto/virtio/meson.build
> > b/drivers/crypto/virtio/meson.build
> > index d2c3b3ad07..8181c8296f 100644
> > --- a/drivers/crypto/virtio/meson.build
> > +++ b/drivers/crypto/virtio/meson.build
> > @@ -8,7 +8,7 @@ if is_windows
> > endif
> >
> > includes += include_directories('../../../lib/vhost')
> > -deps += 'bus_pci'
> > +deps += ['bus_pci', 'common_virtio']
> > sources = files(
> > 'virtio_cryptodev.c',
> > 'virtio_cvq.c',
> > diff --git a/drivers/meson.build b/drivers/meson.build
> > index 495e21b54a..2f0d312479 100644
> > --- a/drivers/meson.build
> > +++ b/drivers/meson.build
> > @@ -17,6 +17,7 @@ subdirs = [
> > 'common/nitrox', # depends on bus.
> > 'common/qat', # depends on bus.
> > 'common/sfc_efx', # depends on bus.
> > + 'common/virtio', # depends on bus.
> > 'mempool', # depends on common and bus.
> > 'dma', # depends on common and bus.
> > 'net', # depends on common, bus, mempool
> > diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
> > index 02742da5c2..bbd73741f0 100644
> > --- a/drivers/net/virtio/meson.build
> > +++ b/drivers/net/virtio/meson.build
> > @@ -54,7 +54,6 @@ if is_linux
> > 'virtio_user/vhost_kernel.c',
> > 'virtio_user/vhost_kernel_tap.c',
> > 'virtio_user/vhost_user.c',
> > - 'virtio_user/vhost_vdpa.c',
> > 'virtio_user/virtio_user_dev.c')
> > - deps += ['bus_vdev']
> > + deps += ['bus_vdev', 'common_virtio']
> > endif
> > diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c
> > b/drivers/net/virtio/virtio_user/vhost_kernel.c
> > index e42bb35935..3a95ce34d6 100644
> > --- a/drivers/net/virtio/virtio_user/vhost_kernel.c
> > +++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
> > @@ -11,9 +11,10 @@
> >
> > #include <rte_memory.h>
> >
> > -#include "vhost.h"
> > +#include "virtio_user/vhost.h"
> > #include "virtio_user_dev.h"
> > #include "vhost_kernel_tap.h"
> > +#include "../virtqueue.h"
> >
> > struct vhost_kernel_data {
> > int *vhostfds;
> > diff --git a/drivers/net/virtio/virtio_user/vhost_user.c
> > b/drivers/net/virtio/virtio_user/vhost_user.c
> > index c10252506b..2a158aff7e 100644
> > --- a/drivers/net/virtio/virtio_user/vhost_user.c
> > +++ b/drivers/net/virtio/virtio_user/vhost_user.c
> > @@ -16,7 +16,8 @@
> > #include <rte_string_fns.h>
> > #include <rte_fbarray.h>
> >
> > -#include "vhost.h"
> > +#include "virtio_user/vhost_logs.h"
> > +#include "virtio_user/vhost.h"
> > #include "virtio_user_dev.h"
> >
> > struct vhost_user_data {
> > diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > index 2997d2bd26..7105c54b43 100644
> > --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
> > @@ -20,10 +20,11 @@
> > #include <rte_malloc.h>
> > #include <rte_io.h>
> >
> > -#include "vhost.h"
> > -#include "virtio.h"
> > +#include "virtio_user/vhost.h"
> > #include "virtio_user_dev.h"
> > +#include "../virtqueue.h"
> > #include "../virtio_ethdev.h"
> > +#include "../virtio_logs.h"
> >
> > #define VIRTIO_USER_MEM_EVENT_CLB_NAME "virtio_user_mem_event_clb"
> >
> > diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h
> > b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> > index 66400b3b62..70604d6956 100644
> > --- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
> > +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
> > @@ -25,26 +25,36 @@ struct virtio_user_queue {
> > };
> >
> > struct virtio_user_dev {
> > - struct virtio_hw hw;
> > + union {
> > + struct virtio_hw hw;
> > + uint8_t dummy[256];
> > + };
> > +
> > + void *backend_data;
> > + uint16_t **notify_area;
> > + char path[PATH_MAX];
> > + bool hw_cvq;
> > + uint16_t max_queue_pairs;
> > + uint64_t device_features; /* supported features by device */
> > + bool *qp_enabled;
> > +
> > enum virtio_user_backend_type backend_type;
> > bool is_server; /* server or client mode */
> >
> > int *callfds;
> > int *kickfds;
> > int mac_specified;
> > - uint16_t max_queue_pairs;
> > +
> > uint16_t queue_pairs;
> > uint32_t queue_size;
> > uint64_t features; /* the negotiated features with driver,
> > * and will be sync with device
> > */
> > - uint64_t device_features; /* supported features by device */
> > uint64_t frontend_features; /* enabled frontend features */
> > uint64_t unsupported_features; /* unsupported features mask */
> > uint8_t status;
> > uint16_t net_status;
> > uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
> > - char path[PATH_MAX];
> > char *ifname;
> >
> > union {
> > @@ -54,18 +64,12 @@ struct virtio_user_dev {
> > } vrings;
> >
> > struct virtio_user_queue *packed_queues;
> > - bool *qp_enabled;
> >
> > struct virtio_user_backend_ops *ops;
> > pthread_mutex_t mutex;
> > bool started;
> >
> > - bool hw_cvq;
> > struct virtqueue *scvq;
> > -
> > - void *backend_data;
> > -
> > - uint16_t **notify_area;
> > };
> >
> > int virtio_user_dev_set_features(struct virtio_user_dev *dev);
* [v3 0/5] vhost: add RSA support
2025-01-07 18:02 ` [v2 0/2] vhost: add RSA support Gowrishankar Muthukrishnan
2025-01-07 18:02 ` [v2 1/2] vhost: add asymmetric " Gowrishankar Muthukrishnan
2025-01-07 18:02 ` [v2 2/2] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
@ 2025-02-21 17:30 ` Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 1/5] vhost: skip crypto op fetch before vring init Gowrishankar Muthukrishnan
` (5 more replies)
2 siblings, 6 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:30 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
This patch series adds asymmetric RSA support to the vhost crypto library.
It also includes changes to improve the vhost crypto library:
* support newer QEMU versions.
* fix broken vhost_crypto example application.
* stabilize crypto fastpath operations.
v3:
- split new patches off from the single patch in v2.
- stabilized the vhost crypto library and example app.
Gowrishankar Muthukrishnan (5):
vhost: skip crypto op fetch before vring init
vhost: update vhost_user crypto session parameters
examples/vhost_crypto: fix user callbacks
vhost: support asymmetric RSA crypto ops
examples/vhost_crypto: support asymmetric crypto
examples/vhost_crypto/main.c | 54 +++-
lib/vhost/vhost_crypto.c | 514 ++++++++++++++++++++++++++++++++---
lib/vhost/vhost_user.h | 33 ++-
lib/vhost/virtio_crypto.h | 67 +++++
4 files changed, 609 insertions(+), 59 deletions(-)
--
2.25.1
* [v3 1/5] vhost: skip crypto op fetch before vring init
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
@ 2025-02-21 17:30 ` Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 2/5] vhost: update vhost_user crypto session parameters Gowrishankar Muthukrishnan
` (4 subsequent siblings)
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:30 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan, stable
Until the virtio avail ring is initialized (by VHOST_USER_SET_VRING_ADDR),
the worker thread should not try to fetch crypto ops, as doing so would
lead to a memory fault.
Fixes: 939066d9656 ("vhost/crypto: add public function implementation")
Cc: stable@dpdk.org
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
lib/vhost/vhost_crypto.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 3dc41a3bd5..55ea24710e 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -1580,6 +1580,16 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
vq = dev->virtqueue[qid];
+ if (unlikely(vq == NULL)) {
+ VC_LOG_ERR("Invalid virtqueue %u", qid);
+ return 0;
+ }
+
+ if (unlikely(vq->avail == NULL)) {
+ VC_LOG_DBG("Virtqueue ring not yet initialized %u", qid);
+ return 0;
+ }
+
avail_idx = *((volatile uint16_t *)&vq->avail->idx);
start_idx = vq->last_used_idx;
count = avail_idx - start_idx;
--
2.25.1
* [v3 2/5] vhost: update vhost_user crypto session parameters
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 1/5] vhost: skip crypto op fetch before vring init Gowrishankar Muthukrishnan
@ 2025-02-21 17:30 ` Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 3/5] examples/vhost_crypto: fix user callbacks Gowrishankar Muthukrishnan
` (3 subsequent siblings)
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:30 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
As per the requirements of the vhost_user spec, the session id should be
located at the end of the session parameters.
Update the VhostUserCryptoSessionParam structure to support newer QEMU
versions.
Due to the additional parameters added in QEMU, the payload received from
QEMU is larger than the existing payload, which breaks parsing of the
vhost_user message.
This patch addresses both of the above problems.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
v3:
- decoupled from the single patch in the v2 series.
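
As a minimal sketch of how a backend dispatches on the reworked layout
(create_sym() and create_asym() are hypothetical helpers; the structure and
opcode names follow the definitions introduced in this series):

static void
handle_create_session(struct vhost_crypto *vcrypto,
		VhostUserCryptoSessionParam *p)
{
	/* op_code selects the union member; session_id, now at the end
	 * of the structure, carries the result back to the frontend. */
	if (p->op_code == VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION)
		create_asym(vcrypto, &p->u.asym_sess, &p->session_id);
	else
		create_sym(vcrypto, &p->u.sym_sess, &p->session_id);
}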
---
lib/vhost/vhost_crypto.c | 12 ++++++------
lib/vhost/vhost_user.h | 33 +++++++++++++++++++++++++++++----
2 files changed, 35 insertions(+), 10 deletions(-)
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 55ea24710e..05f3c85884 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -237,7 +237,7 @@ struct vhost_crypto_data_req {
static int
transform_cipher_param(struct rte_crypto_sym_xform *xform,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
int ret;
@@ -273,7 +273,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform,
static int
transform_chain_param(struct rte_crypto_sym_xform *xforms,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
struct rte_crypto_sym_xform *xform_cipher, *xform_auth;
int ret;
@@ -341,10 +341,10 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
struct rte_cryptodev_sym_session *session;
int ret;
- switch (sess_param->op_type) {
+ switch (sess_param->u.sym_sess.op_type) {
case VIRTIO_CRYPTO_SYM_OP_NONE:
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- ret = transform_cipher_param(&xform1, sess_param);
+ ret = transform_cipher_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session msg (%i)", ret);
sess_param->session_id = ret;
@@ -352,7 +352,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
}
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- if (unlikely(sess_param->hash_mode !=
+ if (unlikely(sess_param->u.sym_sess.hash_mode !=
VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)) {
sess_param->session_id = -VIRTIO_CRYPTO_NOTSUPP;
VC_LOG_ERR("Error transform session message (%i)",
@@ -362,7 +362,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
xform1.next = &xform2;
- ret = transform_chain_param(&xform1, sess_param);
+ ret = transform_chain_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session message (%i)", ret);
sess_param->session_id = ret;
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index 9a905ee5f4..ef486545ba 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -99,11 +99,10 @@ typedef struct VhostUserLog {
/* Comply with Cryptodev-Linux */
#define VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH 512
#define VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH 64
+#define VHOST_USER_CRYPTO_MAX_KEY_LENGTH 1024
/* Same structure as vhost-user backend session info */
-typedef struct VhostUserCryptoSessionParam {
- int64_t session_id;
- uint32_t op_code;
+typedef struct VhostUserCryptoSymSessionParam {
uint32_t cipher_algo;
uint32_t cipher_key_len;
uint32_t hash_algo;
@@ -114,10 +113,36 @@ typedef struct VhostUserCryptoSessionParam {
uint8_t dir;
uint8_t hash_mode;
uint8_t chaining_dir;
- uint8_t *ciphe_key;
+ uint8_t *cipher_key;
uint8_t *auth_key;
uint8_t cipher_key_buf[VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH];
uint8_t auth_key_buf[VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH];
+} VhostUserCryptoSymSessionParam;
+
+
+typedef struct VhostUserCryptoAsymRsaParam {
+ uint32_t padding_algo;
+ uint32_t hash_algo;
+} VhostUserCryptoAsymRsaParam;
+
+typedef struct VhostUserCryptoAsymSessionParam {
+ uint32_t algo;
+ uint32_t key_type;
+ uint32_t key_len;
+ uint8_t *key;
+ union {
+ VhostUserCryptoAsymRsaParam rsa;
+ } u;
+ uint8_t key_buf[VHOST_USER_CRYPTO_MAX_KEY_LENGTH];
+} VhostUserCryptoAsymSessionParam;
+
+typedef struct VhostUserCryptoSessionParam {
+ uint32_t op_code;
+ union {
+ VhostUserCryptoSymSessionParam sym_sess;
+ VhostUserCryptoAsymSessionParam asym_sess;
+ } u;
+ int64_t session_id;
} VhostUserCryptoSessionParam;
typedef struct VhostUserVringArea {
--
2.25.1
* [v3 3/5] examples/vhost_crypto: fix user callbacks
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 1/5] vhost: skip crypto op fetch before vring init Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 2/5] vhost: update vhost_user crypto session parameters Gowrishankar Muthukrishnan
@ 2025-02-21 17:30 ` Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 4/5] vhost: support asymmetric RSA crypto ops Gowrishankar Muthukrishnan
` (2 subsequent siblings)
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:30 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan, stable
In order to handle new vhost-user connections, use the new_connection
and destroy_connection callbacks.
Fixes: f5188211c721 ("examples/vhost_crypto: add sample application")
Cc: stable@dpdk.org
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
v3:
- decoupled from v2 single patch.
---
examples/vhost_crypto/main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index 558c09a60f..b1fe4120b9 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -362,8 +362,8 @@ destroy_device(int vid)
}
static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
- .new_device = new_device,
- .destroy_device = destroy_device,
+ .new_connection = new_device,
+ .destroy_connection = destroy_device,
};
static int
--
2.25.1
* [v3 4/5] vhost: support asymmetric RSA crypto ops
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
` (2 preceding siblings ...)
2025-02-21 17:30 ` [v3 3/5] examples/vhost_crypto: fix user callbacks Gowrishankar Muthukrishnan
@ 2025-02-21 17:30 ` Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 5/5] examples/vhost_crypto: support asymmetric crypto Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:30 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
Support asymmetric RSA crypto operations in vhost-user.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
v3:
- TLV decode optimization for the fast path.
- virtio_crypto.h changes moved from the virtio PMD patch series into this
series, as asymmetric support essentially starts in the library.
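
As a quick reference for the DER length forms the TLV parser in this patch
handles, a minimal standalone sketch (illustrative only, no bounds checking,
mirroring the logic of tlv_decode() below):

#include <stddef.h>
#include <stdint.h>

static size_t
der_len(const uint8_t *tlv, const uint8_t **value)
{
	if (tlv[1] == 0x82) {		/* long form: two length bytes */
		*value = &tlv[4];
		return ((size_t)tlv[2] << 8) | tlv[3];
	}
	if (tlv[1] == 0x81) {		/* long form: one length byte */
		*value = &tlv[3];
		return tlv[2];
	}
	*value = &tlv[2];		/* short form: length in tlv[1] */
	return tlv[1];
}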
---
lib/vhost/vhost_crypto.c | 492 +++++++++++++++++++++++++++++++++++---
lib/vhost/virtio_crypto.h | 67 ++++++
2 files changed, 524 insertions(+), 35 deletions(-)
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 05f3c85884..9892603891 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -54,6 +54,14 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO);
*/
#define vhost_crypto_desc vring_desc
+struct vhost_crypto_session {
+ union {
+ struct rte_cryptodev_asym_session *asym;
+ struct rte_cryptodev_sym_session *sym;
+ };
+ enum rte_crypto_op_type type;
+};
+
static int
cipher_algo_transform(uint32_t virtio_cipher_algo,
enum rte_crypto_cipher_algorithm *algo)
@@ -206,8 +214,10 @@ struct __rte_cache_aligned vhost_crypto {
uint64_t last_session_id;
- uint64_t cache_session_id;
- struct rte_cryptodev_sym_session *cache_session;
+ uint64_t cache_sym_session_id;
+ struct rte_cryptodev_sym_session *cache_sym_session;
+ uint64_t cache_asym_session_id;
+ struct rte_cryptodev_asym_session *cache_asym_session;
/** socket id for the device */
int socket_id;
@@ -334,10 +344,11 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms,
}
static void
-vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+vhost_crypto_create_sym_sess(struct vhost_crypto *vcrypto,
VhostUserCryptoSessionParam *sess_param)
{
struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
+ struct vhost_crypto_session *vhost_session;
struct rte_cryptodev_sym_session *session;
int ret;
@@ -384,42 +395,277 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
return;
}
- /* insert hash to map */
- if (rte_hash_add_key_data(vcrypto->session_map,
- &vcrypto->last_session_id, session) < 0) {
+ vhost_session = rte_zmalloc(NULL, sizeof(*vhost_session), 0);
+ if (vhost_session == NULL) {
+ VC_LOG_ERR("Failed to alloc session memory");
+ goto error_exit;
+ }
+
+ vhost_session->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ vhost_session->sym = session;
+
+ /* insert session to map */
+ if ((rte_hash_add_key_data(vcrypto->session_map,
+ &vcrypto->last_session_id, vhost_session) < 0)) {
VC_LOG_ERR("Failed to insert session to hash table");
+ goto error_exit;
+ }
+
+ VC_LOG_INFO("Session %"PRIu64" created for vdev %i.",
+ vcrypto->last_session_id, vcrypto->dev->vid);
+
+ sess_param->session_id = vcrypto->last_session_id;
+ vcrypto->last_session_id++;
+ return;
+
+error_exit:
+ if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0)
+ VC_LOG_ERR("Failed to free session");
+
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ rte_free(vhost_session);
+}
+
+static int
+tlv_decode(uint8_t *tlv, uint8_t type, uint8_t **data, size_t *data_len)
+{
+ size_t tlen = -EINVAL, len;
+
+ if (tlv[0] != type)
+ return -EINVAL;
+
+ if (tlv[1] == 0x82) {
+ len = (tlv[2] << 8) | tlv[3];
+ *data = &tlv[4];
+ tlen = len + 4;
+ } else if (tlv[1] == 0x81) {
+ len = tlv[2];
+ *data = &tlv[3];
+ tlen = len + 3;
+ } else {
+ len = tlv[1];
+ *data = &tlv[2];
+ tlen = len + 2;
+ }
+
+ *data_len = len;
+ return tlen;
+}
+
+static int
+virtio_crypto_asym_rsa_der_to_xform(uint8_t *der, size_t der_len,
+ struct rte_crypto_asym_xform *xform)
+{
+ uint8_t *n = NULL, *e = NULL, *d = NULL, *p = NULL, *q = NULL, *dp = NULL,
+ *dq = NULL, *qinv = NULL, *v = NULL, *tlv;
+ size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, vlen;
+ int len;
+
+ RTE_SET_USED(der_len);
+
+ if (der[0] != 0x30)
+ return -EINVAL;
+
+ if (der[1] == 0x82)
+ tlv = &der[4];
+ else if (der[1] == 0x81)
+ tlv = &der[3];
+ else
+ return -EINVAL;
+
+ len = tlv_decode(tlv, 0x02, &v, &vlen);
+ if (len < 0 || v[0] != 0x0 || vlen != 1)
+ return -EINVAL;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &n, &nlen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &e, &elen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &d, &dlen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &p, &plen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &q, &qlen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dp, &dplen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dq, &dqlen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &qinv, &qinvlen);
+ if (len < 0)
+ return len;
+
+ xform->rsa.n.data = n;
+ xform->rsa.n.length = nlen;
+ xform->rsa.e.data = e;
+ xform->rsa.e.length = elen;
+ xform->rsa.d.data = d;
+ xform->rsa.d.length = dlen;
+ xform->rsa.qt.p.data = p;
+ xform->rsa.qt.p.length = plen;
+ xform->rsa.qt.q.data = q;
+ xform->rsa.qt.q.length = qlen;
+ xform->rsa.qt.dP.data = dp;
+ xform->rsa.qt.dP.length = dplen;
+ xform->rsa.qt.dQ.data = dq;
+ xform->rsa.qt.dQ.length = dqlen;
+ xform->rsa.qt.qInv.data = qinv;
+ xform->rsa.qt.qInv.length = qinvlen;
+
+ RTE_ASSERT((tlv + len - &der[0]) == der_len);
+ return 0;
+}
+
+static int
+rsa_param_transform(struct rte_crypto_asym_xform *xform,
+ VhostUserCryptoAsymSessionParam *param)
+{
+ int ret;
- if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0)
- VC_LOG_ERR("Failed to free session");
+ ret = virtio_crypto_asym_rsa_der_to_xform(param->key_buf, param->key_len, xform);
+ if (ret < 0)
+ return ret;
+
+ switch (param->u.rsa.padding_algo) {
+ case VIRTIO_CRYPTO_RSA_RAW_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_NONE;
+ break;
+ case VIRTIO_CRYPTO_RSA_PKCS1_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
+ break;
+ default:
+ VC_LOG_ERR("Unknown padding type");
+ return -EINVAL;
+ }
+
+ xform->rsa.key_type = RTE_RSA_KEY_TYPE_QT;
+ xform->xform_type = RTE_CRYPTO_ASYM_XFORM_RSA;
+ return 0;
+}
+
+static void
+vhost_crypto_create_asym_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ struct rte_cryptodev_asym_session *session = NULL;
+ struct vhost_crypto_session *vhost_session;
+ struct rte_crypto_asym_xform xform = {0};
+ int ret;
+
+ switch (sess_param->u.asym_sess.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ ret = rsa_param_transform(&xform, &sess_param->u.asym_sess);
+ if (unlikely(ret < 0)) {
+ VC_LOG_ERR("Error transform session msg (%i)", ret);
+ sess_param->session_id = ret;
+ return;
+ }
+ break;
+ default:
+ VC_LOG_ERR("Invalid op algo");
sess_param->session_id = -VIRTIO_CRYPTO_ERR;
return;
}
+ ret = rte_cryptodev_asym_session_create(vcrypto->cid, &xform,
+ vcrypto->sess_pool, (void *)&session);
+ if (session == NULL) {
+ VC_LOG_ERR("Failed to create session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ vhost_session = rte_zmalloc(NULL, sizeof(*vhost_session), 0);
+ if (vhost_session == NULL) {
+ VC_LOG_ERR("Failed to alloc session memory");
+ goto error_exit;
+ }
+
+ vhost_session->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ vhost_session->asym = session;
+
+ /* insert session to map */
+ if ((rte_hash_add_key_data(vcrypto->session_map,
+ &vcrypto->last_session_id, vhost_session) < 0)) {
+ VC_LOG_ERR("Failed to insert session to hash table");
+ goto error_exit;
+ }
+
VC_LOG_INFO("Session %"PRIu64" created for vdev %i.",
vcrypto->last_session_id, vcrypto->dev->vid);
sess_param->session_id = vcrypto->last_session_id;
vcrypto->last_session_id++;
+ return;
+
+error_exit:
+ if (rte_cryptodev_asym_session_free(vcrypto->cid, session) < 0)
+ VC_LOG_ERR("Failed to free session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ rte_free(vhost_session);
+}
+
+static void
+vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ if (sess_param->op_code == VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION)
+ vhost_crypto_create_asym_sess(vcrypto, sess_param);
+ else
+ vhost_crypto_create_sym_sess(vcrypto, sess_param);
}
static int
vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
{
- struct rte_cryptodev_sym_session *session;
+ struct vhost_crypto_session *vhost_session = NULL;
uint64_t sess_id = session_id;
int ret;
ret = rte_hash_lookup_data(vcrypto->session_map, &sess_id,
- (void **)&session);
-
+ (void **)&vhost_session);
if (unlikely(ret < 0)) {
- VC_LOG_ERR("Failed to delete session %"PRIu64".", session_id);
+ VC_LOG_ERR("Failed to find session for id %"PRIu64".", session_id);
return -VIRTIO_CRYPTO_INVSESS;
}
- if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0) {
- VC_LOG_DBG("Failed to free session");
- return -VIRTIO_CRYPTO_ERR;
+ if (vhost_session->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ if (rte_cryptodev_sym_session_free(vcrypto->cid,
+ vhost_session->sym) < 0) {
+ VC_LOG_DBG("Failed to free session");
+ return -VIRTIO_CRYPTO_ERR;
+ }
+ } else if (vhost_session->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ if (rte_cryptodev_asym_session_free(vcrypto->cid,
+ vhost_session->asym) < 0) {
+ VC_LOG_DBG("Failed to free session");
+ return -VIRTIO_CRYPTO_ERR;
+ }
+ } else {
+ VC_LOG_ERR("Invalid session for id %"PRIu64".", session_id);
+ return -VIRTIO_CRYPTO_INVSESS;
}
if (rte_hash_del_key(vcrypto->session_map, &sess_id) < 0) {
@@ -430,6 +676,7 @@ vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
VC_LOG_INFO("Session %"PRIu64" deleted for vdev %i.", sess_id,
vcrypto->dev->vid);
+ rte_free(vhost_session);
return 0;
}
@@ -1123,6 +1370,115 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline uint8_t
+vhost_crypto_check_akcipher_request(struct virtio_crypto_akcipher_data_req *req)
+{
+ RTE_SET_USED(req);
+ return VIRTIO_CRYPTO_OK;
+}
+
+static __rte_always_inline uint8_t
+prepare_asym_rsa_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
+ struct vhost_crypto_data_req *vc_req,
+ struct virtio_crypto_op_data_req *req,
+ struct vhost_crypto_desc *head,
+ uint32_t max_n_descs)
+{
+ struct rte_crypto_rsa_op_param *rsa = &op->asym->rsa;
+ struct vhost_crypto_desc *desc = head;
+ uint8_t ret = VIRTIO_CRYPTO_ERR;
+ uint16_t wlen = 0;
+
+ /* prepare */
+ switch (vcrypto->option) {
+ case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
+ vc_req->wb_pool = vcrypto->wb_pool;
+ if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_SIGN) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_SIGN;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->sign.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->sign.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->sign.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_VERIFY;
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->sign.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_ENCRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->cipher.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->cipher.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->cipher.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_DECRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->cipher.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else {
+ goto error_exit;
+ }
+ break;
+ case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE:
+ default:
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ vc_req->inhdr = get_data_ptr(vc_req, desc, VHOST_ACCESS_WO);
+ if (unlikely(vc_req->inhdr == NULL)) {
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ vc_req->inhdr->status = VIRTIO_CRYPTO_OK;
+ vc_req->len = wlen + INHDR_LEN;
+ return 0;
+error_exit:
+ if (vc_req->wb)
+ free_wb_data(vc_req->wb, vc_req->wb_pool);
+
+ vc_req->len = INHDR_LEN;
+ return ret;
+}
+
/**
* Process on descriptor
*/
@@ -1133,17 +1489,21 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
uint16_t desc_idx)
__rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? */
{
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src);
- struct rte_cryptodev_sym_session *session;
+ struct vhost_crypto_data_req *vc_req, *vc_req_out;
+ struct rte_cryptodev_asym_session *asym_session;
+ struct rte_cryptodev_sym_session *sym_session;
+ struct vhost_crypto_session *vhost_session;
+ struct vhost_crypto_desc *desc = descs;
+ uint32_t nb_descs = 0, max_n_descs, i;
+ struct vhost_crypto_data_req data_req;
struct virtio_crypto_op_data_req req;
struct virtio_crypto_inhdr *inhdr;
- struct vhost_crypto_desc *desc = descs;
struct vring_desc *src_desc;
uint64_t session_id;
uint64_t dlen;
- uint32_t nb_descs = 0, max_n_descs, i;
int err;
+ vc_req = &data_req;
vc_req->desc_idx = desc_idx;
vc_req->dev = vcrypto->dev;
vc_req->vq = vq;
@@ -1226,12 +1586,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
switch (req.header.opcode) {
case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
case VIRTIO_CRYPTO_CIPHER_DECRYPT:
+ vc_req_out = rte_mbuf_to_priv(op->sym->m_src);
+ memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
session_id = req.header.session_id;
/* one branch to avoid unnecessary table lookup */
- if (vcrypto->cache_session_id != session_id) {
+ if (vcrypto->cache_sym_session_id != session_id) {
err = rte_hash_lookup_data(vcrypto->session_map,
- &session_id, (void **)&session);
+ &session_id, (void **)&vhost_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to find session %"PRIu64,
@@ -1239,13 +1601,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
goto error_exit;
}
- vcrypto->cache_session = session;
- vcrypto->cache_session_id = session_id;
+ vcrypto->cache_sym_session = vhost_session->sym;
+ vcrypto->cache_sym_session_id = session_id;
}
- session = vcrypto->cache_session;
+ sym_session = vcrypto->cache_sym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
- err = rte_crypto_op_attach_sym_session(op, session);
+ err = rte_crypto_op_attach_sym_session(op, sym_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to attach session to op");
@@ -1257,12 +1620,12 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
err = VIRTIO_CRYPTO_NOTSUPP;
break;
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- err = prepare_sym_cipher_op(vcrypto, op, vc_req,
+ err = prepare_sym_cipher_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.cipher, desc,
max_n_descs);
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- err = prepare_sym_chain_op(vcrypto, op, vc_req,
+ err = prepare_sym_chain_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.chain, desc,
max_n_descs);
break;
@@ -1271,6 +1634,53 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
VC_LOG_ERR("Failed to process sym request");
goto error_exit;
}
+ break;
+ case VIRTIO_CRYPTO_AKCIPHER_SIGN:
+ case VIRTIO_CRYPTO_AKCIPHER_VERIFY:
+ case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
+ case VIRTIO_CRYPTO_AKCIPHER_DECRYPT:
+ session_id = req.header.session_id;
+
+ /* one branch to avoid unnecessary table lookup */
+ if (vcrypto->cache_asym_session_id != session_id) {
+ err = rte_hash_lookup_data(vcrypto->session_map,
+ &session_id, (void **)&vhost_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to find asym session %"PRIu64,
+ session_id);
+ goto error_exit;
+ }
+
+ vcrypto->cache_asym_session = vhost_session->asym;
+ vcrypto->cache_asym_session_id = session_id;
+ }
+
+ asym_session = vcrypto->cache_asym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
+ err = rte_crypto_op_attach_asym_session(op, asym_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to attach asym session to op");
+ goto error_exit;
+ }
+
+ vc_req_out = rte_cryptodev_asym_session_get_user_data(asym_session);
+ rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
+ vc_req_out->wb = NULL;
+
+ switch (req.header.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ err = prepare_asym_rsa_op(vcrypto, op, vc_req_out,
+ &req, desc, max_n_descs);
+ break;
+ }
+ if (unlikely(err != 0)) {
+ VC_LOG_ERR("Failed to process asym request");
+ goto error_exit;
+ }
+
break;
default:
err = VIRTIO_CRYPTO_ERR;
@@ -1294,12 +1704,22 @@ static __rte_always_inline struct vhost_virtqueue *
vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
struct vhost_virtqueue *old_vq)
{
- struct rte_mbuf *m_src = op->sym->m_src;
- struct rte_mbuf *m_dst = op->sym->m_dst;
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(m_src);
+ struct rte_mbuf *m_src = NULL, *m_dst = NULL;
+ struct vhost_crypto_data_req *vc_req;
struct vhost_virtqueue *vq;
uint16_t used_idx, desc_idx;
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ m_src = op->sym->m_src;
+ m_dst = op->sym->m_dst;
+ vc_req = rte_mbuf_to_priv(m_src);
+ } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ vc_req = rte_cryptodev_asym_session_get_user_data(op->asym->session);
+ } else {
+ VC_LOG_ERR("Invalid crypto op type");
+ return NULL;
+ }
+
if (unlikely(!vc_req)) {
VC_LOG_ERR("Failed to retrieve vc_req");
return NULL;
@@ -1321,10 +1741,11 @@ vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
vq->used->ring[desc_idx].id = vq->avail->ring[desc_idx];
vq->used->ring[desc_idx].len = vc_req->len;
- rte_mempool_put(m_src->pool, (void *)m_src);
-
- if (m_dst)
- rte_mempool_put(m_dst->pool, (void *)m_dst);
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ rte_mempool_put(m_src->pool, (void *)m_src);
+ if (m_dst)
+ rte_mempool_put(m_dst->pool, (void *)m_dst);
+ }
return vc_req->vq;
}
@@ -1407,7 +1828,8 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
vcrypto->sess_pool = sess_pool;
vcrypto->cid = cryptodev_id;
- vcrypto->cache_session_id = UINT64_MAX;
+ vcrypto->cache_sym_session_id = UINT64_MAX;
+ vcrypto->cache_asym_session_id = UINT64_MAX;
vcrypto->last_session_id = 1;
vcrypto->dev = dev;
vcrypto->option = RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE;
diff --git a/lib/vhost/virtio_crypto.h b/lib/vhost/virtio_crypto.h
index 28877a5da3..23af171030 100644
--- a/lib/vhost/virtio_crypto.h
+++ b/lib/vhost/virtio_crypto.h
@@ -9,6 +9,7 @@
#define VIRTIO_CRYPTO_SERVICE_HASH 1
#define VIRTIO_CRYPTO_SERVICE_MAC 2
#define VIRTIO_CRYPTO_SERVICE_AEAD 3
+#define VIRTIO_CRYPTO_SERVICE_AKCIPHER 4
#define VIRTIO_CRYPTO_OPCODE(service, op) (((service) << 8) | (op))
@@ -29,6 +30,10 @@ struct virtio_crypto_ctrl_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x02)
#define VIRTIO_CRYPTO_AEAD_DESTROY_SESSION \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x03)
+#define VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x04)
+#define VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x05)
uint32_t opcode;
uint32_t algo;
uint32_t flag;
@@ -152,6 +157,45 @@ struct virtio_crypto_aead_create_session_req {
uint8_t padding[32];
};
+struct virtio_crypto_rsa_session_para {
+#define VIRTIO_CRYPTO_RSA_RAW_PADDING 0
+#define VIRTIO_CRYPTO_RSA_PKCS1_PADDING 1
+ uint32_t padding_algo;
+
+#define VIRTIO_CRYPTO_RSA_NO_HASH 0
+#define VIRTIO_CRYPTO_RSA_MD2 1
+#define VIRTIO_CRYPTO_RSA_MD3 2
+#define VIRTIO_CRYPTO_RSA_MD4 3
+#define VIRTIO_CRYPTO_RSA_MD5 4
+#define VIRTIO_CRYPTO_RSA_SHA1 5
+#define VIRTIO_CRYPTO_RSA_SHA256 6
+#define VIRTIO_CRYPTO_RSA_SHA384 7
+#define VIRTIO_CRYPTO_RSA_SHA512 8
+#define VIRTIO_CRYPTO_RSA_SHA224 9
+ uint32_t hash_algo;
+};
+
+struct virtio_crypto_akcipher_session_para {
+#define VIRTIO_CRYPTO_NO_AKCIPHER 0
+#define VIRTIO_CRYPTO_AKCIPHER_RSA 1
+#define VIRTIO_CRYPTO_AKCIPHER_DSA 2
+ uint32_t algo;
+
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC 1
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE 2
+ uint32_t keytype;
+ uint32_t keylen;
+
+ union {
+ struct virtio_crypto_rsa_session_para rsa;
+ } u;
+};
+
+struct virtio_crypto_akcipher_create_session_req {
+ struct virtio_crypto_akcipher_session_para para;
+ uint8_t padding[36];
+};
+
struct virtio_crypto_alg_chain_session_para {
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER 1
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH 2
@@ -219,6 +263,8 @@ struct virtio_crypto_op_ctrl_req {
mac_create_session;
struct virtio_crypto_aead_create_session_req
aead_create_session;
+ struct virtio_crypto_akcipher_create_session_req
+ akcipher_create_session;
struct virtio_crypto_destroy_session_req
destroy_session;
uint8_t padding[56];
@@ -238,6 +284,14 @@ struct virtio_crypto_op_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x00)
#define VIRTIO_CRYPTO_AEAD_DECRYPT \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_ENCRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x00)
+#define VIRTIO_CRYPTO_AKCIPHER_DECRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_SIGN \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x02)
+#define VIRTIO_CRYPTO_AKCIPHER_VERIFY \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x03)
uint32_t opcode;
/* algo should be service-specific algorithms */
uint32_t algo;
@@ -362,6 +416,16 @@ struct virtio_crypto_aead_data_req {
uint8_t padding[32];
};
+struct virtio_crypto_akcipher_para {
+ uint32_t src_data_len;
+ uint32_t dst_data_len;
+};
+
+struct virtio_crypto_akcipher_data_req {
+ struct virtio_crypto_akcipher_para para;
+ uint8_t padding[40];
+};
+
/* The request of the data virtqueue's packet */
struct virtio_crypto_op_data_req {
struct virtio_crypto_op_header header;
@@ -371,6 +435,7 @@ struct virtio_crypto_op_data_req {
struct virtio_crypto_hash_data_req hash_req;
struct virtio_crypto_mac_data_req mac_req;
struct virtio_crypto_aead_data_req aead_req;
+ struct virtio_crypto_akcipher_data_req akcipher_req;
uint8_t padding[48];
} u;
};
@@ -380,6 +445,8 @@ struct virtio_crypto_op_data_req {
#define VIRTIO_CRYPTO_BADMSG 2
#define VIRTIO_CRYPTO_NOTSUPP 3
#define VIRTIO_CRYPTO_INVSESS 4 /* Invalid session id */
+#define VIRTIO_CRYPTO_NOSPC 5 /* no free session ID */
+#define VIRTIO_CRYPTO_KEY_REJECTED 6 /* Signature verification failed */
/* The accelerator hardware is ready */
#define VIRTIO_CRYPTO_S_HW_READY (1 << 0)
--
2.25.1
* [v3 5/5] examples/vhost_crypto: support asymmetric crypto
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
` (3 preceding siblings ...)
2025-02-21 17:30 ` [v3 4/5] vhost: support asymmetric RSA crypto ops Gowrishankar Muthukrishnan
@ 2025-02-21 17:30 ` Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:30 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
Support asymmetric crypto operations.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
examples/vhost_crypto/main.c | 50 +++++++++++++++++++++++++++---------
1 file changed, 38 insertions(+), 12 deletions(-)
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index b1fe4120b9..8bdfc40c4b 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -59,6 +59,7 @@ struct vhost_crypto_options {
uint32_t nb_los;
uint32_t zero_copy;
uint32_t guest_polling;
+ bool asymmetric_crypto;
} options;
enum {
@@ -70,6 +71,8 @@ enum {
OPT_ZERO_COPY_NUM,
#define OPT_POLLING "guest-polling"
OPT_POLLING_NUM,
+#define OPT_ASYM "asymmetric-crypto"
+ OPT_ASYM_NUM,
};
#define NB_SOCKET_FIELDS (2)
@@ -202,9 +205,10 @@ vhost_crypto_usage(const char *prgname)
" --%s <lcore>,SOCKET-FILE-PATH\n"
" --%s (lcore,cdev_id,queue_id)[,(lcore,cdev_id,queue_id)]\n"
" --%s: zero copy\n"
- " --%s: guest polling\n",
+ " --%s: guest polling\n"
+ " --%s: asymmetric crypto\n",
prgname, OPT_SOCKET_FILE, OPT_CONFIG,
- OPT_ZERO_COPY, OPT_POLLING);
+ OPT_ZERO_COPY, OPT_POLLING, OPT_ASYM);
}
static int
@@ -223,6 +227,8 @@ vhost_crypto_parse_args(int argc, char **argv)
NULL, OPT_ZERO_COPY_NUM},
{OPT_POLLING, no_argument,
NULL, OPT_POLLING_NUM},
+ {OPT_ASYM, no_argument,
+ NULL, OPT_ASYM_NUM},
{NULL, 0, 0, 0}
};
@@ -262,6 +268,10 @@ vhost_crypto_parse_args(int argc, char **argv)
options.guest_polling = 1;
break;
+ case OPT_ASYM_NUM:
+ options.asymmetric_crypto = true;
+ break;
+
default:
vhost_crypto_usage(prgname);
return -EINVAL;
@@ -376,6 +386,7 @@ vhost_crypto_worker(void *arg)
int callfds[VIRTIO_CRYPTO_MAX_NUM_BURST_VQS];
uint32_t lcore_id = rte_lcore_id();
uint32_t burst_size = MAX_PKT_BURST;
+ enum rte_crypto_op_type cop_type;
uint32_t i, j, k;
uint32_t to_fetch, fetched;
@@ -383,9 +394,13 @@ vhost_crypto_worker(void *arg)
RTE_LOG(INFO, USER1, "Processing on Core %u started\n", lcore_id);
+ cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ if (options.asymmetric_crypto)
+ cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
for (i = 0; i < NB_VIRTIO_QUEUES; i++) {
if (rte_crypto_op_bulk_alloc(info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops[i],
+ cop_type, ops[i],
burst_size) < burst_size) {
RTE_LOG(ERR, USER1, "Failed to alloc cops\n");
ret = -1;
@@ -411,12 +426,11 @@ vhost_crypto_worker(void *arg)
fetched);
if (unlikely(rte_crypto_op_bulk_alloc(
info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ cop_type,
ops[j], fetched) < fetched)) {
RTE_LOG(ERR, USER1, "Failed realloc\n");
return -1;
}
-
fetched = rte_cryptodev_dequeue_burst(
info->cid, info->qid,
ops_deq[j], RTE_MIN(burst_size,
@@ -477,6 +491,7 @@ main(int argc, char *argv[])
struct rte_cryptodev_qp_conf qp_conf;
struct rte_cryptodev_config config;
struct rte_cryptodev_info dev_info;
+ enum rte_crypto_op_type cop_type;
char name[128];
uint32_t i, j, lcore;
int ret;
@@ -539,12 +554,21 @@ main(int argc, char *argv[])
goto error_exit;
}
- snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
- info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
- SESSION_MAP_ENTRIES,
- rte_cryptodev_sym_get_private_session_size(
- info->cid), 0, 0,
- rte_lcore_to_socket_id(lo->lcore_id));
+ if (!options.asymmetric_crypto) {
+ snprintf(name, 127, "SYM_SESS_POOL_%u", lo->lcore_id);
+ info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
+ SESSION_MAP_ENTRIES,
+ rte_cryptodev_sym_get_private_session_size(
+ info->cid), 0, 0,
+ rte_lcore_to_socket_id(lo->lcore_id));
+ cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ } else {
+ snprintf(name, 127, "ASYM_SESS_POOL_%u", lo->lcore_id);
+ info->sess_pool = rte_cryptodev_asym_session_pool_create(name,
+ SESSION_MAP_ENTRIES, 0, 64,
+ rte_lcore_to_socket_id(lo->lcore_id));
+ cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ }
if (!info->sess_pool) {
RTE_LOG(ERR, USER1, "Failed to create mempool");
@@ -553,7 +577,7 @@ main(int argc, char *argv[])
snprintf(name, 127, "COPPOOL_%u", lo->lcore_id);
info->cop_pool = rte_crypto_op_pool_create(name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_MEMPOOL_OBJS,
+ cop_type, NB_MEMPOOL_OBJS,
NB_CACHE_OBJS, VHOST_CRYPTO_MAX_IV_LEN,
rte_lcore_to_socket_id(lo->lcore_id));
@@ -567,6 +591,8 @@ main(int argc, char *argv[])
qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
qp_conf.mp_session = info->sess_pool;
+ if (options.asymmetric_crypto)
+ qp_conf.mp_session = NULL;
for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
ret = rte_cryptodev_queue_pair_setup(info->cid, j,
--
2.25.1
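For reference, with this option the sample app might be launched along these
lines (the vdev choice, socket path and lcore/cdev/queue mapping are
assumptions; crypto_openssl is used here because RSA requires an
asymmetric-capable cryptodev):

  dpdk-vhost_crypto -l 0,1 --vdev crypto_openssl -- \
      --socket-file 1,/tmp/vhost_crypto1.socket \
      --config "(1,0,0)" --asymmetric-crypto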
* [v3 0/6] crypto/virtio: enhancements for RSA and vDPA
2025-01-07 17:52 ` [v2 0/2] crypto/virtio: add RSA support Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 1/2] crypto/virtio: add asymmetric " Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 2/2] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
@ 2025-02-21 17:41 ` Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 1/6] crypto/virtio: add asymmetric RSA support Gowrishankar Muthukrishnan
` (5 more replies)
2 siblings, 6 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:41 UTC (permalink / raw)
To: dev; +Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
This patch series enhances the virtio crypto PMD to:
* support RSA
* support packed virtio ring
* support vDPA backend
v3:
- vDPA backend code largely sourced from the virtio net PMD.
Gowrishankar Muthukrishnan (6):
crypto/virtio: add asymmetric RSA support
crypto/virtio: refactor queue operations
crypto/virtio: add packed ring support
crypto/virtio: add vDPA backend
test/crypto: add asymmetric tests for virtio PMD
test/crypto: add tests for virtio user PMD
app/test/test_cryptodev.c | 7 +
app/test/test_cryptodev.h | 1 +
app/test/test_cryptodev_asym.c | 43 +
drivers/crypto/virtio/meson.build | 8 +
drivers/crypto/virtio/virtio_crypto_algs.h | 2 +-
.../virtio/virtio_crypto_capabilities.h | 19 +
drivers/crypto/virtio/virtio_cryptodev.c | 1060 +++++++++++------
drivers/crypto/virtio/virtio_cryptodev.h | 18 +-
drivers/crypto/virtio/virtio_cvq.c | 228 ++++
drivers/crypto/virtio/virtio_cvq.h | 33 +
drivers/crypto/virtio/virtio_logs.h | 6 +-
drivers/crypto/virtio/virtio_pci.h | 38 +-
drivers/crypto/virtio/virtio_ring.h | 65 +-
drivers/crypto/virtio/virtio_rxtx.c | 721 ++++++++++-
drivers/crypto/virtio/virtio_rxtx.h | 13 +
drivers/crypto/virtio/virtio_user/vhost.h | 90 ++
.../crypto/virtio/virtio_user/vhost_vdpa.c | 710 +++++++++++
.../virtio/virtio_user/virtio_user_dev.c | 767 ++++++++++++
.../virtio/virtio_user/virtio_user_dev.h | 85 ++
drivers/crypto/virtio/virtio_user_cryptodev.c | 575 +++++++++
drivers/crypto/virtio/virtqueue.c | 229 +++-
drivers/crypto/virtio/virtqueue.h | 221 +++-
lib/cryptodev/cryptodev_pmd.h | 6 +
23 files changed, 4453 insertions(+), 492 deletions(-)
create mode 100644 drivers/crypto/virtio/virtio_cvq.c
create mode 100644 drivers/crypto/virtio/virtio_cvq.h
create mode 100644 drivers/crypto/virtio/virtio_rxtx.h
create mode 100644 drivers/crypto/virtio/virtio_user/vhost.h
create mode 100644 drivers/crypto/virtio/virtio_user/vhost_vdpa.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.h
create mode 100644 drivers/crypto/virtio/virtio_user_cryptodev.c
--
2.25.1
* [v3 1/6] crypto/virtio: add asymmetric RSA support
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
@ 2025-02-21 17:41 ` Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 2/6] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
` (4 subsequent siblings)
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:41 UTC (permalink / raw)
To: dev, Jay Zhou, Akhil Goyal, Fan Zhang; +Cc: anoobj, Gowrishankar Muthukrishnan
Asymmetric RSA operations (SIGN, VERIFY, ENCRYPT and DECRYPT) are now
supported in the virtio PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
Depends-on: series-34674 ("vhost: add RSA support")
v3:
- fast path optimizations.
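
Since the PMD now advertises an RSA xform capability (see the
VIRTIO_ASYM_CAPABILITIES entry below), an application can probe for it along
these lines (a sketch using the standard cryptodev API; error handling
trimmed):

#include <rte_cryptodev.h>

static int
dev_supports_rsa_sign(uint8_t dev_id)
{
	const struct rte_cryptodev_asymmetric_xform_capability *capa;
	struct rte_cryptodev_asym_capability_idx idx = {
		.type = RTE_CRYPTO_ASYM_XFORM_RSA,
	};

	capa = rte_cryptodev_asym_capability_get(dev_id, &idx);
	if (capa == NULL)
		return 0;

	/* returns nonzero when the SIGN op type is supported */
	return rte_cryptodev_asym_xform_capability_check_optype(capa,
			RTE_CRYPTO_ASYM_OP_SIGN);
}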
---
.../virtio/virtio_crypto_capabilities.h | 19 +
drivers/crypto/virtio/virtio_cryptodev.c | 347 +++++++++++++++---
drivers/crypto/virtio/virtio_cryptodev.h | 2 +
drivers/crypto/virtio/virtio_rxtx.c | 243 ++++++++++--
lib/cryptodev/cryptodev_pmd.h | 6 +
5 files changed, 539 insertions(+), 78 deletions(-)
diff --git a/drivers/crypto/virtio/virtio_crypto_capabilities.h b/drivers/crypto/virtio/virtio_crypto_capabilities.h
index 03c30deefd..1b26ff6720 100644
--- a/drivers/crypto/virtio/virtio_crypto_capabilities.h
+++ b/drivers/crypto/virtio/virtio_crypto_capabilities.h
@@ -48,4 +48,23 @@
}, } \
}
+#define VIRTIO_ASYM_CAPABILITIES \
+ { /* RSA */ \
+ .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC, \
+ {.asym = { \
+ .xform_capa = { \
+ .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA, \
+ .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \
+ (1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \
+ (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \
+ (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)), \
+ {.modlen = { \
+ .min = 1, \
+ .max = 1024, \
+ .increment = 1 \
+ }, } \
+ } \
+ }, } \
+ }
+
#endif /* _VIRTIO_CRYPTO_CAPABILITIES_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 793f50059f..6a264bc24a 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -41,6 +41,11 @@ static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *session);
+static void virtio_crypto_asym_clear_session(struct rte_cryptodev *dev,
+ struct rte_cryptodev_asym_session *sess);
+static int virtio_crypto_asym_configure_session(struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *session);
/*
* The set of PCI devices this driver supports
@@ -53,6 +58,7 @@ static const struct rte_pci_id pci_id_virtio_crypto_map[] = {
static const struct rte_cryptodev_capabilities virtio_capabilities[] = {
VIRTIO_SYM_CAPABILITIES,
+ VIRTIO_ASYM_CAPABILITIES,
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};
@@ -103,22 +109,24 @@ virtio_crypto_send_command(struct virtqueue *vq,
}
/* calculate the length of cipher key */
- if (cipher_key) {
+ if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER) {
switch (ctrl->u.sym_create_session.op_type) {
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- len_cipher_key
- = ctrl->u.sym_create_session.u.cipher
- .para.keylen;
+ len_cipher_key = ctrl->u.sym_create_session.u.cipher.para.keylen;
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- len_cipher_key
- = ctrl->u.sym_create_session.u.chain
- .para.cipher_param.keylen;
+ len_cipher_key =
+ ctrl->u.sym_create_session.u.chain.para.cipher_param.keylen;
break;
default:
VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
return -EINVAL;
}
+ } else if (session->ctrl.header.algo == VIRTIO_CRYPTO_AKCIPHER_RSA) {
+ len_cipher_key = ctrl->u.akcipher_create_session.para.keylen;
+ } else {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid crypto service for cipher key");
+ return -EINVAL;
}
/* calculate the length of auth key */
@@ -513,7 +521,10 @@ static struct rte_cryptodev_ops virtio_crypto_dev_ops = {
/* Crypto related operations */
.sym_session_get_size = virtio_crypto_sym_get_session_private_size,
.sym_session_configure = virtio_crypto_sym_configure_session,
- .sym_session_clear = virtio_crypto_sym_clear_session
+ .sym_session_clear = virtio_crypto_sym_clear_session,
+ .asym_session_get_size = virtio_crypto_sym_get_session_private_size,
+ .asym_session_configure = virtio_crypto_asym_configure_session,
+ .asym_session_clear = virtio_crypto_asym_clear_session
};
static void
@@ -737,6 +748,8 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
cryptodev->dequeue_burst = virtio_crypto_pkt_rx_burst;
cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+ RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT;
@@ -923,32 +936,24 @@ virtio_crypto_check_sym_clear_session_paras(
#define NUM_ENTRY_SYM_CLEAR_SESSION 2
static void
-virtio_crypto_sym_clear_session(
+virtio_crypto_clear_session(
struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
+ struct virtio_crypto_op_ctrl_req *ctrl)
{
struct virtio_crypto_hw *hw;
struct virtqueue *vq;
- struct virtio_crypto_session *session;
- struct virtio_crypto_op_ctrl_req *ctrl;
struct vring_desc *desc;
uint8_t *status;
uint8_t needed = 1;
uint32_t head;
- uint8_t *malloc_virt_addr;
uint64_t malloc_phys_addr;
uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
uint32_t desc_offset = len_op_ctrl_req + len_inhdr;
-
- PMD_INIT_FUNC_TRACE();
-
- if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
- return;
+ uint64_t session_id = ctrl->u.destroy_session.session_id;
hw = dev->data->dev_private;
vq = hw->cvq;
- session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
"vq = %p", vq->vq_desc_head_idx, vq);
@@ -960,34 +965,15 @@ virtio_crypto_sym_clear_session(
return;
}
- /*
- * malloc memory to store information of ctrl request op,
- * returned status and desc vring
- */
- malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
- + NUM_ENTRY_SYM_CLEAR_SESSION
- * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
- if (malloc_virt_addr == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
- return;
- }
- malloc_phys_addr = rte_malloc_virt2iova(malloc_virt_addr);
-
- /* assign ctrl request op part */
- ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
- ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
- /* default data virtqueue is 0 */
- ctrl->header.queue_id = 0;
- ctrl->u.destroy_session.session_id = session->session_id;
+ malloc_phys_addr = rte_malloc_virt2iova(ctrl);
/* status part */
status = &(((struct virtio_crypto_inhdr *)
- ((uint8_t *)malloc_virt_addr + len_op_ctrl_req))->status);
+ ((uint8_t *)ctrl + len_op_ctrl_req))->status);
*status = VIRTIO_CRYPTO_ERR;
/* indirect desc vring part */
- desc = (struct vring_desc *)((uint8_t *)malloc_virt_addr
- + desc_offset);
+ desc = (struct vring_desc *)((uint8_t *)ctrl + desc_offset);
/* ctrl request part */
desc[0].addr = malloc_phys_addr;
@@ -1049,8 +1035,8 @@ virtio_crypto_sym_clear_session(
if (*status != VIRTIO_CRYPTO_OK) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("Close session failed "
"status=%"PRIu32", session_id=%"PRIu64"",
- *status, session->session_id);
- rte_free(malloc_virt_addr);
+ *status, session_id);
+ rte_free(ctrl);
return;
}
@@ -1058,9 +1044,86 @@ virtio_crypto_sym_clear_session(
VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx=%d", vq->vq_desc_head_idx);
VIRTIO_CRYPTO_SESSION_LOG_INFO("Close session %"PRIu64" successfully ",
- session->session_id);
+ session_id);
- rte_free(malloc_virt_addr);
+ rte_free(ctrl);
+}
+
+static void
+virtio_crypto_sym_clear_session(
+ struct rte_cryptodev *dev,
+ struct rte_cryptodev_sym_session *sess)
+{
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ struct virtio_crypto_op_ctrl_req *ctrl;
+ struct virtio_crypto_session *session;
+ uint8_t *malloc_virt_addr;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
+ return;
+
+ session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
+
+ /*
+ * malloc memory to store information of ctrl request op,
+ * returned status and desc vring
+ */
+ malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
+ + NUM_ENTRY_SYM_CLEAR_SESSION
+ * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+ if (malloc_virt_addr == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
+ return;
+ }
+
+ /* assign ctrl request op part */
+ ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
+ ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
+ /* default data virtqueue is 0 */
+ ctrl->header.queue_id = 0;
+ ctrl->u.destroy_session.session_id = session->session_id;
+
+ return virtio_crypto_clear_session(dev, ctrl);
+}
+
+static void
+virtio_crypto_asym_clear_session(
+ struct rte_cryptodev *dev,
+ struct rte_cryptodev_asym_session *sess)
+{
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ struct virtio_crypto_op_ctrl_req *ctrl;
+ struct virtio_crypto_session *session;
+ uint8_t *malloc_virt_addr;
+
+ PMD_INIT_FUNC_TRACE();
+
+ session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
+
+ /*
+ * malloc memory to store information of ctrl request op,
+ * returned status and desc vring
+ */
+ malloc_virt_addr = rte_malloc(NULL, len_op_ctrl_req + len_inhdr
+ + NUM_ENTRY_SYM_CLEAR_SESSION
+ * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+ if (malloc_virt_addr == NULL) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap room");
+ return;
+ }
+
+ /* assign ctrl request op part */
+ ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_virt_addr;
+ ctrl->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION;
+ /* default data virtqueue is 0 */
+ ctrl->header.queue_id = 0;
+ ctrl->u.destroy_session.session_id = session->session_id;
+
+ return virtio_crypto_clear_session(dev, ctrl);
}
static struct rte_crypto_cipher_xform *
@@ -1291,6 +1354,23 @@ virtio_crypto_check_sym_configure_session_paras(
return 0;
}
+static int
+virtio_crypto_check_asym_configure_session_paras(
+ struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *asym_sess)
+{
+ if (unlikely(xform == NULL) || unlikely(asym_sess == NULL)) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("NULL pointer");
+ return -1;
+ }
+
+ if (virtio_crypto_check_sym_session_paras(dev) < 0)
+ return -1;
+
+ return 0;
+}
+
static int
virtio_crypto_sym_configure_session(
struct rte_cryptodev *dev,
@@ -1383,6 +1463,183 @@ virtio_crypto_sym_configure_session(
return ret;
}
+static size_t
+tlv_encode(uint8_t *tlv, uint8_t type, uint8_t *data, size_t len)
+{
+ uint8_t *lenval = tlv;
+ size_t lenval_n = 0;
+
+ if (len > 65535) {
+ goto _exit;
+ } else if (len > 255) {
+ lenval_n = 4 + len;
+ lenval[0] = type;
+ lenval[1] = 0x82;
+ lenval[2] = (len & 0xFF00) >> 8;
+ lenval[3] = (len & 0xFF);
+ rte_memcpy(&lenval[4], data, len);
+ } else if (len > 127) {
+ lenval_n = 3 + len;
+ lenval[0] = type;
+ lenval[1] = 0x81;
+ lenval[2] = len;
+ rte_memcpy(&lenval[3], data, len);
+ } else {
+ lenval_n = 2 + len;
+ lenval[0] = type;
+ lenval[1] = len;
+ rte_memcpy(&lenval[2], data, len);
+ }
+
+_exit:
+ return lenval_n;
+}
+
+static int
+virtio_crypto_asym_rsa_xform_to_der(
+ struct rte_crypto_asym_xform *xform,
+ uint8_t *der)
+{
+ uint8_t data[VIRTIO_CRYPTO_MAX_CTRL_DATA];
+ uint8_t ver[3] = {0x02, 0x01, 0x00};
+ size_t tlen, len;
+ uint8_t *tlv;
+
+ if (xform->xform_type != RTE_CRYPTO_ASYM_XFORM_RSA)
+ return -EINVAL;
+
+ tlv = data;
+ rte_memcpy(tlv, ver, RTE_DIM(ver));
+ tlen = RTE_DIM(ver);
+ len = tlv_encode(tlv + tlen, 0x02, xform->rsa.n.data, xform->rsa.n.length);
+ tlen += len;
+ len = tlv_encode(tlv + tlen, 0x02, xform->rsa.e.data, xform->rsa.e.length);
+ tlen += len;
+ len = tlv_encode(tlv + tlen, 0x02, xform->rsa.d.data, xform->rsa.d.length);
+ tlen += len;
+ len = tlv_encode(tlv + tlen, 0x02, xform->rsa.qt.p.data, xform->rsa.qt.p.length);
+ tlen += len;
+ len = tlv_encode(tlv + tlen, 0x02, xform->rsa.qt.q.data, xform->rsa.qt.q.length);
+ tlen += len;
+ len = tlv_encode(tlv + tlen, 0x02, xform->rsa.qt.dP.data, xform->rsa.qt.dP.length);
+ tlen += len;
+ len = tlv_encode(tlv + tlen, 0x02, xform->rsa.qt.dQ.data, xform->rsa.qt.dQ.length);
+ tlen += len;
+ len = tlv_encode(tlv + tlen, 0x02, xform->rsa.qt.qInv.data, xform->rsa.qt.qInv.length);
+ tlen += len;
+
+ RTE_ASSERT(tlen < VIRTIO_CRYPTO_MAX_CTRL_DATA);
+ len = tlv_encode(der, 0x30, data, tlen);
+ return len;
+}
+
+static int
+virtio_crypto_asym_rsa_configure_session(
+ struct rte_crypto_rsa_xform *rsa,
+ struct virtio_crypto_akcipher_session_para *para)
+{
+ para->algo = VIRTIO_CRYPTO_AKCIPHER_RSA;
+ if (rsa->key_type == RTE_RSA_KEY_TYPE_EXP)
+ para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC;
+ else
+ para->keytype = VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE;
+
+ if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_NONE) {
+ para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_RAW_PADDING;
+ } else if (rsa->padding.type == RTE_CRYPTO_RSA_PADDING_PKCS1_5) {
+ para->u.rsa.padding_algo = VIRTIO_CRYPTO_RSA_PKCS1_PADDING;
+ switch (rsa->padding.hash) {
+ case RTE_CRYPTO_AUTH_SHA1:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA1;
+ break;
+ case RTE_CRYPTO_AUTH_SHA224:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA224;
+ break;
+ case RTE_CRYPTO_AUTH_SHA256:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA256;
+ break;
+ case RTE_CRYPTO_AUTH_SHA512:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_SHA512;
+ break;
+ case RTE_CRYPTO_AUTH_MD5:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_MD5;
+ break;
+ default:
+ para->u.rsa.hash_algo = VIRTIO_CRYPTO_RSA_NO_HASH;
+ }
+ } else {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid padding type");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+virtio_crypto_asym_configure_session(
+ struct rte_cryptodev *dev,
+ struct rte_crypto_asym_xform *xform,
+ struct rte_cryptodev_asym_session *sess)
+{
+ struct virtio_crypto_akcipher_session_para *para;
+ struct virtio_crypto_op_ctrl_req *ctrl_req;
+ uint8_t key[VIRTIO_CRYPTO_MAX_CTRL_DATA];
+ struct virtio_crypto_session *session;
+ struct virtio_crypto_hw *hw;
+ struct virtqueue *control_vq;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = virtio_crypto_check_asym_configure_session_paras(dev, xform,
+ sess);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid parameters");
+ return ret;
+ }
+
+ session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
+ memset(session, 0, sizeof(struct virtio_crypto_session));
+ ctrl_req = &session->ctrl;
+ ctrl_req->header.opcode = VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION;
+ ctrl_req->header.queue_id = 0;
+ para = &ctrl_req->u.akcipher_create_session.para;
+
+ switch (xform->xform_type) {
+ case RTE_CRYPTO_ASYM_XFORM_RSA:
+ ctrl_req->header.algo = VIRTIO_CRYPTO_AKCIPHER_RSA;
+ ret = virtio_crypto_asym_rsa_configure_session(&xform->rsa, para);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid RSA parameters");
+ return ret;
+ }
+
+ ret = virtio_crypto_asym_rsa_xform_to_der(xform, key);
+ if (ret <= 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid RSA primitives");
+ return ret;
+ }
+
+ ctrl_req->u.akcipher_create_session.para.keylen = ret;
+ break;
+ default:
+ para->algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ }
+
+ hw = dev->data->dev_private;
+ control_vq = hw->cvq;
+ ret = virtio_crypto_send_command(control_vq, ctrl_req,
+ key, NULL, session);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
+ goto error_out;
+ }
+
+ return 0;
+error_out:
+ return -1;
+}
+
static void
virtio_crypto_dev_info_get(struct rte_cryptodev *dev,
struct rte_cryptodev_info *info)
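Aside: the length encoding produced by tlv_encode() above follows ASN.1/DER
definite-length rules. A standalone mirror of that logic (der_hdr is a
hypothetical helper, shown only to make the three length forms explicit):

#include <stdint.h>
#include <stddef.h>

/* Emit a DER tag plus definite-length header; returns the header size. */
static size_t
der_hdr(uint8_t *out, uint8_t tag, size_t len)
{
	out[0] = tag;
	if (len <= 127) {		/* short form, e.g. 02 2A */
		out[1] = (uint8_t)len;
		return 2;
	} else if (len <= 255) {	/* long form, 1 length byte: 02 81 C8 */
		out[1] = 0x81;
		out[2] = (uint8_t)len;
		return 3;
	}
	/* long form, 2 length bytes: 02 82 hi lo (len <= 65535) */
	out[1] = 0x82;
	out[2] = (uint8_t)(len >> 8);
	out[3] = (uint8_t)(len & 0xFF);
	return 4;
}

So a 200-byte INTEGER is encoded as 02 81 C8 followed by the 200 value
bytes, matching the 0x81 branch of tlv_encode().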
diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
index 215bce7863..d8b1e1abdd 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.h
+++ b/drivers/crypto/virtio/virtio_cryptodev.h
@@ -20,6 +20,8 @@
#define VIRTIO_CRYPTO_MAX_KEY_SIZE 256
+#define VIRTIO_CRYPTO_MAX_CTRL_DATA 2048
+
extern uint8_t cryptodev_virtio_driver_id;
enum virtio_crypto_cmd_id {
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index d02486661f..3cf25d8c1f 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -107,7 +107,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq,
return i;
}
-static int
+static inline int
virtqueue_crypto_sym_pkt_header_arrange(
struct rte_crypto_op *cop,
struct virtio_crypto_op_data_req *data,
@@ -187,7 +187,7 @@ virtqueue_crypto_sym_pkt_header_arrange(
return 0;
}
-static int
+static inline int
virtqueue_crypto_sym_enqueue_xmit(
struct virtqueue *txvq,
struct rte_crypto_op *cop)
@@ -343,24 +343,190 @@ virtqueue_crypto_sym_enqueue_xmit(
return 0;
}
-static int
-virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
- struct rte_crypto_op *cop)
+static inline int
+virtqueue_crypto_asym_pkt_header_arrange(
+ struct rte_crypto_op *cop,
+ struct virtio_crypto_op_data_req *data,
+ struct virtio_crypto_session *session)
{
- int ret;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+ struct virtio_crypto_op_data_req *req_data = data;
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+
+ req_data->header.session_id = session->session_id;
+
+ switch (ctrl->header.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ req_data->header.algo = ctrl->header.algo;
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_SIGN;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.message.length;
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.sign.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_VERIFY;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.sign.length;
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.message.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_ENCRYPT;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.message.length;
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.cipher.length;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ req_data->header.opcode = VIRTIO_CRYPTO_AKCIPHER_DECRYPT;
+ req_data->u.akcipher_req.para.src_data_len
+ = asym_op->rsa.cipher.length;
+ req_data->u.akcipher_req.para.dst_data_len
+ = asym_op->rsa.message.length;
+ } else {
+ return -EINVAL;
+ }
- switch (cop->type) {
- case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
- ret = virtqueue_crypto_sym_enqueue_xmit(txvq, cop);
break;
default:
- VIRTIO_CRYPTO_TX_LOG_ERR("invalid crypto op type %u",
- cop->type);
- ret = -EFAULT;
- break;
+ req_data->header.algo = VIRTIO_CRYPTO_NO_AKCIPHER;
+ }
+
+ return 0;
+}
+
+static inline int
+virtqueue_crypto_asym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ uint32_t indirect_vring_addr_offset = req_data_len +
+ sizeof(struct virtio_crypto_inhdr);
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_ASYM_SESS_PRIV(cop->asym->session);
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_op_data_req *op_data_req;
+ uint64_t indirect_op_data_req_phys_addr;
+ struct vring_desc *start_dp;
+ struct vq_desc_extra *dxp;
+ struct vring_desc *desc;
+ uint16_t needed = 1;
+ uint16_t num_entry;
+ uint16_t head_idx;
+ uint16_t idx = 0;
+
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ indirect_op_data_req_phys_addr =
+ rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+ if (virtqueue_crypto_asym_pkt_header_arrange(cop, op_data_req, session))
+	{
+		/* put the cookie back on failure to avoid leaking mempool objects */
+		rte_mempool_put(txvq->mpool, dxp->cookie);
+		return -EFAULT;
+	}
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ /* point to indirect vring entry */
+ desc = (struct vring_desc *)
+ ((uint8_t *)op_data_req + indirect_vring_addr_offset);
+ for (idx = 0; idx < (NUM_ENTRY_VIRTIO_CRYPTO_OP - 1); idx++)
+ desc[idx].next = idx + 1;
+ desc[NUM_ENTRY_VIRTIO_CRYPTO_OP - 1].next = VQ_RING_DESC_CHAIN_END;
+
+ idx = 0;
+
+ /* indirect vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = indirect_op_data_req_phys_addr;
+ desc[idx].len = req_data_len;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.sign.data);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ /* indirect vring: src data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.cipher.data);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT;
+
+ /* indirect vring: dst data */
+ desc[idx].addr = rte_mem_virt2iova(asym_op->rsa.message.data);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = VRING_DESC_F_NEXT | VRING_DESC_F_WRITE;
+ } else {
+ VIRTIO_CRYPTO_TX_LOG_ERR("Invalid asym op");
+ /* put the cookie back on failure to avoid leaking mempool objects */
+ rte_mempool_put(txvq->mpool, dxp->cookie);
+ return -EINVAL;
}
- return ret;
+ /* indirect vring: last part, status returned */
+ desc[idx].addr = indirect_op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = VRING_DESC_F_WRITE;
+
+ num_entry = idx;
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+ /* use a single buffer */
+ start_dp = txvq->vq_ring.desc;
+ start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
+ indirect_vring_addr_offset;
+ start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
+ start_dp[head_idx].flags = VRING_DESC_F_INDIRECT;
+
+ idx = start_dp[head_idx].next;
+ txvq->vq_desc_head_idx = idx;
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ vq_update_avail_ring(txvq, head_idx);
+
+ return 0;
}
static int
@@ -475,31 +641,40 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
VIRTIO_CRYPTO_TX_LOG_DBG("%d packets to xmit", nb_pkts);
for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
- struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
- /* nb_segs is always 1 at virtio crypto situation */
- int need = txm->nb_segs - txvq->vq_free_cnt;
-
- /*
- * Positive value indicates it hasn't enough space in vring
- * descriptors
- */
- if (unlikely(need > 0)) {
+ if (tx_pkts[nb_tx]->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
+ /* nb_segs is always 1 in the virtio crypto case */
+ int need = txm->nb_segs - txvq->vq_free_cnt;
+
/*
- * try it again because the receive process may be
- * free some space
+ * A positive value indicates there is not enough space in the
+ * vring descriptors
*/
- need = txm->nb_segs - txvq->vq_free_cnt;
if (unlikely(need > 0)) {
- VIRTIO_CRYPTO_TX_LOG_DBG("No free tx "
- "descriptors to transmit");
- break;
+ /*
+ * try again because the receive process may have
+ * freed some space
+ */
+ need = txm->nb_segs - txvq->vq_free_cnt;
+ if (unlikely(need > 0)) {
+ VIRTIO_CRYPTO_TX_LOG_DBG("No free tx "
+ "descriptors to transmit");
+ break;
+ }
}
- }
- txvq->packets_sent_total++;
+ /* Enqueue Packet buffers */
+ error = virtqueue_crypto_sym_enqueue_xmit(txvq, tx_pkts[nb_tx]);
+ } else if (tx_pkts[nb_tx]->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ /* Enqueue Packet buffers */
+ error = virtqueue_crypto_asym_enqueue_xmit(txvq, tx_pkts[nb_tx]);
+ } else {
+ VIRTIO_CRYPTO_TX_LOG_ERR("invalid crypto op type %u",
+ tx_pkts[nb_tx]->type);
+ txvq->packets_sent_failed++;
+ continue;
+ }
- /* Enqueue Packet buffers */
- error = virtqueue_crypto_enqueue_xmit(txvq, tx_pkts[nb_tx]);
if (unlikely(error)) {
if (error == ENOSPC)
VIRTIO_CRYPTO_TX_LOG_ERR(
@@ -513,6 +688,8 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
txvq->packets_sent_failed++;
break;
}
+
+ txvq->packets_sent_total++;
}
if (likely(nb_tx)) {
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index 5c84a3b847..929c6defe9 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -715,6 +715,12 @@ struct rte_cryptodev_asym_session {
uint8_t sess_private_data[];
};
+/**
+ * Helper macro to get session private data
+ */
+#define CRYPTODEV_GET_ASYM_SESS_PRIV(s) \
+ ((void *)(((struct rte_cryptodev_asym_session *)s)->sess_private_data))
+
#ifdef __cplusplus
}
#endif
--
2.25.1
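
To see the new data path end to end, a hedged usage sketch (op_mpool,
asym_session, the msg/sig buffers, dev_id and qp_id are assumed to be set up
by the application; error handling elided):

struct rte_crypto_op *op;

op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
rte_crypto_op_attach_asym_session(op, asym_session);

op->asym->rsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
op->asym->rsa.message.data = msg;
op->asym->rsa.message.length = msg_len;
op->asym->rsa.sign.data = sig;
op->asym->rsa.sign.length = sig_len;

/* reaches virtqueue_crypto_asym_enqueue_xmit() via the burst API */
rte_cryptodev_enqueue_burst(dev_id, qp_id, &op, 1);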
* [v3 2/6] crypto/virtio: refactor queue operations
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 1/6] crypto/virtio: add asymmetric RSA support Gowrishankar Muthukrishnan
@ 2025-02-21 17:41 ` Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 3/6] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
` (3 subsequent siblings)
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:41 UTC (permalink / raw)
To: dev, Jay Zhou; +Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
Move the existing control queue operations into a common place so that
they can be shared with other virtio device types.
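In outline, a session-create request now packs the request header, the
payload and the device-writable status into a single virtio_pmd_ctrl and
posts it through the shared helper; a sketch based on the interfaces added
here (cipher_key and key_len are placeholders):

struct virtio_crypto_hw *hw = dev->data->dev_private;
struct virtio_pmd_ctrl ctrl;
int dlen[1];

memset(&ctrl, 0, sizeof(ctrl));
ctrl.hdr.header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
memcpy(ctrl.data, cipher_key, key_len);
dlen[0] = key_len;
ctrl.input.status = VIRTIO_CRYPTO_ERR; /* overwritten by the device */

if (virtio_crypto_send_command(hw->cvq, &ctrl, dlen, 1) != VIRTIO_CRYPTO_OK)
	return -1;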
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/virtio/meson.build | 1 +
drivers/crypto/virtio/virtio_crypto_algs.h | 2 +-
drivers/crypto/virtio/virtio_cryptodev.c | 563 ++++++++-------------
drivers/crypto/virtio/virtio_cvq.c | 129 +++++
drivers/crypto/virtio/virtio_cvq.h | 33 ++
drivers/crypto/virtio/virtio_pci.h | 6 +-
drivers/crypto/virtio/virtio_ring.h | 12 +-
drivers/crypto/virtio/virtio_rxtx.c | 42 +-
drivers/crypto/virtio/virtio_rxtx.h | 13 +
drivers/crypto/virtio/virtqueue.c | 191 ++++++-
drivers/crypto/virtio/virtqueue.h | 89 +++-
11 files changed, 691 insertions(+), 390 deletions(-)
create mode 100644 drivers/crypto/virtio/virtio_cvq.c
create mode 100644 drivers/crypto/virtio/virtio_cvq.h
create mode 100644 drivers/crypto/virtio/virtio_rxtx.h
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index 45533c9b89..d2c3b3ad07 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -11,6 +11,7 @@ includes += include_directories('../../../lib/vhost')
deps += 'bus_pci'
sources = files(
'virtio_cryptodev.c',
+ 'virtio_cvq.c',
'virtio_pci.c',
'virtio_rxtx.c',
'virtqueue.c',
diff --git a/drivers/crypto/virtio/virtio_crypto_algs.h b/drivers/crypto/virtio/virtio_crypto_algs.h
index 4c44af3733..3824017ca5 100644
--- a/drivers/crypto/virtio/virtio_crypto_algs.h
+++ b/drivers/crypto/virtio/virtio_crypto_algs.h
@@ -22,7 +22,7 @@ struct virtio_crypto_session {
phys_addr_t phys_addr;
} aad;
- struct virtio_crypto_op_ctrl_req ctrl;
+ struct virtio_pmd_ctrl ctrl;
};
#endif /* _VIRTIO_CRYPTO_ALGS_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 6a264bc24a..6bb76ff15e 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -64,211 +64,6 @@ static const struct rte_cryptodev_capabilities virtio_capabilities[] = {
uint8_t cryptodev_virtio_driver_id;
-#define NUM_ENTRY_SYM_CREATE_SESSION 4
-
-static int
-virtio_crypto_send_command(struct virtqueue *vq,
- struct virtio_crypto_op_ctrl_req *ctrl, uint8_t *cipher_key,
- uint8_t *auth_key, struct virtio_crypto_session *session)
-{
- uint8_t idx = 0;
- uint8_t needed = 1;
- uint32_t head = 0;
- uint32_t len_cipher_key = 0;
- uint32_t len_auth_key = 0;
- uint32_t len_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
- uint32_t len_session_input = sizeof(struct virtio_crypto_session_input);
- uint32_t len_total = 0;
- uint32_t input_offset = 0;
- void *virt_addr_started = NULL;
- phys_addr_t phys_addr_started;
- struct vring_desc *desc;
- uint32_t desc_offset;
- struct virtio_crypto_session_input *input;
- int ret;
-
- PMD_INIT_FUNC_TRACE();
-
- if (session == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("session is NULL.");
- return -EINVAL;
- }
- /* cipher only is supported, it is available if auth_key is NULL */
- if (!cipher_key) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("cipher key is NULL.");
- return -EINVAL;
- }
-
- head = vq->vq_desc_head_idx;
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx = %d, vq = %p",
- head, vq);
-
- if (vq->vq_free_cnt < needed) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Not enough entry");
- return -ENOSPC;
- }
-
- /* calculate the length of cipher key */
- if (session->ctrl.header.algo == VIRTIO_CRYPTO_SERVICE_CIPHER) {
- switch (ctrl->u.sym_create_session.op_type) {
- case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- len_cipher_key = ctrl->u.sym_create_session.u.cipher.para.keylen;
- break;
- case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- len_cipher_key =
- ctrl->u.sym_create_session.u.chain.para.cipher_param.keylen;
- break;
- default:
- VIRTIO_CRYPTO_SESSION_LOG_ERR("invalid op type");
- return -EINVAL;
- }
- } else if (session->ctrl.header.algo == VIRTIO_CRYPTO_AKCIPHER_RSA) {
- len_cipher_key = ctrl->u.akcipher_create_session.para.keylen;
- } else {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid crypto service for cipher key");
- return -EINVAL;
- }
-
- /* calculate the length of auth key */
- if (auth_key) {
- len_auth_key =
- ctrl->u.sym_create_session.u.chain.para.u.mac_param
- .auth_key_len;
- }
-
- /*
- * malloc memory to store indirect vring_desc entries, including
- * ctrl request, cipher key, auth key, session input and desc vring
- */
- desc_offset = len_ctrl_req + len_cipher_key + len_auth_key
- + len_session_input;
- virt_addr_started = rte_malloc(NULL,
- desc_offset + NUM_ENTRY_SYM_CREATE_SESSION
- * sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
- if (virt_addr_started == NULL) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("not enough heap memory");
- return -ENOSPC;
- }
- phys_addr_started = rte_malloc_virt2iova(virt_addr_started);
-
- /* address to store indirect vring desc entries */
- desc = (struct vring_desc *)
- ((uint8_t *)virt_addr_started + desc_offset);
-
- /* ctrl req part */
- memcpy(virt_addr_started, ctrl, len_ctrl_req);
- desc[idx].addr = phys_addr_started;
- desc[idx].len = len_ctrl_req;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_ctrl_req;
- input_offset += len_ctrl_req;
-
- /* cipher key part */
- if (len_cipher_key > 0) {
- memcpy((uint8_t *)virt_addr_started + len_total,
- cipher_key, len_cipher_key);
-
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_cipher_key;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_cipher_key;
- input_offset += len_cipher_key;
- }
-
- /* auth key part */
- if (len_auth_key > 0) {
- memcpy((uint8_t *)virt_addr_started + len_total,
- auth_key, len_auth_key);
-
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_auth_key;
- desc[idx].flags = VRING_DESC_F_NEXT;
- desc[idx].next = idx + 1;
- idx++;
- len_total += len_auth_key;
- input_offset += len_auth_key;
- }
-
- /* input part */
- input = (struct virtio_crypto_session_input *)
- ((uint8_t *)virt_addr_started + input_offset);
- input->status = VIRTIO_CRYPTO_ERR;
- input->session_id = ~0ULL;
- desc[idx].addr = phys_addr_started + len_total;
- desc[idx].len = len_session_input;
- desc[idx].flags = VRING_DESC_F_WRITE;
- idx++;
-
- /* use a single desc entry */
- vq->vq_ring.desc[head].addr = phys_addr_started + desc_offset;
- vq->vq_ring.desc[head].len = idx * sizeof(struct vring_desc);
- vq->vq_ring.desc[head].flags = VRING_DESC_F_INDIRECT;
- vq->vq_free_cnt--;
-
- vq->vq_desc_head_idx = vq->vq_ring.desc[head].next;
-
- vq_update_avail_ring(vq, head);
- vq_update_avail_idx(vq);
-
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_queue_index = %d",
- vq->vq_queue_index);
-
- virtqueue_notify(vq);
-
- rte_rmb();
- while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
- rte_rmb();
- usleep(100);
- }
-
- while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
- uint32_t idx, desc_idx, used_idx;
- struct vring_used_elem *uep;
-
- used_idx = (uint32_t)(vq->vq_used_cons_idx
- & (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
- idx = (uint32_t) uep->id;
- desc_idx = idx;
-
- while (vq->vq_ring.desc[desc_idx].flags & VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_ring.desc[desc_idx].next;
- vq->vq_free_cnt++;
- }
-
- vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
- vq->vq_desc_head_idx = idx;
-
- vq->vq_used_cons_idx++;
- vq->vq_free_cnt++;
- }
-
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_free_cnt=%d", vq->vq_free_cnt);
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_desc_head_idx=%d", vq->vq_desc_head_idx);
-
- /* get the result */
- if (input->status != VIRTIO_CRYPTO_OK) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
- "status=%u, session_id=%" PRIu64 "",
- input->status, input->session_id);
- rte_free(virt_addr_started);
- ret = -1;
- } else {
- session->session_id = input->session_id;
-
- VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
- "session_id=%" PRIu64 "", input->session_id);
- rte_free(virt_addr_started);
- ret = 0;
- }
-
- return ret;
-}
-
void
virtio_crypto_queue_release(struct virtqueue *vq)
{
@@ -281,6 +76,7 @@ virtio_crypto_queue_release(struct virtqueue *vq)
/* Select and deactivate the queue */
VTPCI_OPS(hw)->del_queue(hw, vq);
+ hw->vqs[vq->vq_queue_index] = NULL;
rte_memzone_free(vq->mz);
rte_mempool_free(vq->mpool);
rte_free(vq);
@@ -299,8 +95,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
{
char vq_name[VIRTQUEUE_MAX_NAME_SZ];
char mpool_name[MPOOL_MAX_NAME_SZ];
- const struct rte_memzone *mz;
- unsigned int vq_size, size;
+ unsigned int vq_size;
struct virtio_crypto_hw *hw = dev->data->dev_private;
struct virtqueue *vq = NULL;
uint32_t i = 0;
@@ -339,16 +134,26 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
"dev%d_controlqueue_mpool",
dev->data->dev_id);
}
- size = RTE_ALIGN_CEIL(sizeof(*vq) +
- vq_size * sizeof(struct vq_desc_extra),
- RTE_CACHE_LINE_SIZE);
- vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
- socket_id);
+
+ /*
+ * Using part of the vring entries is permitted, but the maximum
+ * is vq_size
+ */
+ if (nb_desc == 0 || nb_desc > vq_size)
+ nb_desc = vq_size;
+
+ if (hw->vqs[vtpci_queue_idx])
+ vq = hw->vqs[vtpci_queue_idx];
+ else
+ vq = virtcrypto_queue_alloc(hw, vtpci_queue_idx, nb_desc,
+ socket_id, vq_name);
if (vq == NULL) {
VIRTIO_CRYPTO_INIT_LOG_ERR("Can not allocate virtqueue");
return -ENOMEM;
}
+ hw->vqs[vtpci_queue_idx] = vq;
+
if (queue_type == VTCRYPTO_DATAQ) {
/* pre-allocate a mempool and use it in the data plane to
* improve performance
@@ -356,7 +161,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
vq->mpool = rte_mempool_lookup(mpool_name);
if (vq->mpool == NULL)
vq->mpool = rte_mempool_create(mpool_name,
- vq_size,
+ nb_desc,
sizeof(struct virtio_crypto_op_cookie),
RTE_CACHE_LINE_SIZE, 0,
NULL, NULL, NULL, NULL, socket_id,
@@ -366,7 +171,7 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
"Cannot create mempool");
goto mpool_create_err;
}
- for (i = 0; i < vq_size; i++) {
+ for (i = 0; i < nb_desc; i++) {
vq->vq_descx[i].cookie =
rte_zmalloc("crypto PMD op cookie pointer",
sizeof(struct virtio_crypto_op_cookie),
@@ -379,67 +184,10 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
}
}
- vq->hw = hw;
- vq->dev_id = dev->data->dev_id;
- vq->vq_queue_index = vtpci_queue_idx;
- vq->vq_nentries = vq_size;
-
- /*
- * Using part of the vring entries is permitted, but the maximum
- * is vq_size
- */
- if (nb_desc == 0 || nb_desc > vq_size)
- nb_desc = vq_size;
- vq->vq_free_cnt = nb_desc;
-
- /*
- * Reserve a memzone for vring elements
- */
- size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
- vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
- VIRTIO_CRYPTO_INIT_LOG_DBG("%s vring_size: %d, rounded_vring_size: %d",
- (queue_type == VTCRYPTO_DATAQ) ? "dataq" : "ctrlq",
- size, vq->vq_ring_size);
-
- mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
- socket_id, 0, VIRTIO_PCI_VRING_ALIGN);
- if (mz == NULL) {
- if (rte_errno == EEXIST)
- mz = rte_memzone_lookup(vq_name);
- if (mz == NULL) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("not enough memory");
- goto mz_reserve_err;
- }
- }
-
- /*
- * Virtio PCI device VIRTIO_PCI_QUEUE_PF register is 32bit,
- * and only accepts 32 bit page frame number.
- * Check if the allocated physical memory exceeds 16TB.
- */
- if ((mz->iova + vq->vq_ring_size - 1)
- >> (VIRTIO_PCI_QUEUE_ADDR_SHIFT + 32)) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("vring address shouldn't be "
- "above 16TB!");
- goto vring_addr_err;
- }
-
- memset(mz->addr, 0, sizeof(mz->len));
- vq->mz = mz;
- vq->vq_ring_mem = mz->iova;
- vq->vq_ring_virt_mem = mz->addr;
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_ring_mem(physical): 0x%"PRIx64,
- (uint64_t)mz->iova);
- VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_ring_virt_mem: 0x%"PRIx64,
- (uint64_t)(uintptr_t)mz->addr);
-
*pvq = vq;
return 0;
-vring_addr_err:
- rte_memzone_free(mz);
-mz_reserve_err:
cookie_alloc_err:
rte_mempool_free(vq->mpool);
if (i != 0) {
@@ -451,31 +199,6 @@ virtio_crypto_queue_setup(struct rte_cryptodev *dev,
return -ENOMEM;
}
-static int
-virtio_crypto_ctrlq_setup(struct rte_cryptodev *dev, uint16_t queue_idx)
-{
- int ret;
- struct virtqueue *vq;
- struct virtio_crypto_hw *hw = dev->data->dev_private;
-
- /* if virtio device has started, do not touch the virtqueues */
- if (dev->data->dev_started)
- return 0;
-
- PMD_INIT_FUNC_TRACE();
-
- ret = virtio_crypto_queue_setup(dev, VTCRYPTO_CTRLQ, queue_idx,
- 0, SOCKET_ID_ANY, &vq);
- if (ret < 0) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("control vq initialization failed");
- return ret;
- }
-
- hw->cvq = vq;
-
- return 0;
-}
-
static void
virtio_crypto_free_queues(struct rte_cryptodev *dev)
{
@@ -484,10 +207,6 @@ virtio_crypto_free_queues(struct rte_cryptodev *dev)
PMD_INIT_FUNC_TRACE();
- /* control queue release */
- virtio_crypto_queue_release(hw->cvq);
- hw->cvq = NULL;
-
/* data queue release */
for (i = 0; i < hw->max_dataqueues; i++) {
virtio_crypto_queue_release(dev->data->queue_pairs[i]);
@@ -498,6 +217,15 @@ virtio_crypto_free_queues(struct rte_cryptodev *dev)
static int
virtio_crypto_dev_close(struct rte_cryptodev *dev __rte_unused)
{
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* control queue release */
+ if (hw->cvq)
+ virtio_crypto_queue_release(virtcrypto_cq_to_vq(hw->cvq));
+
+ hw->cvq = NULL;
return 0;
}
@@ -678,6 +406,99 @@ virtio_negotiate_features(struct virtio_crypto_hw *hw, uint64_t req_features)
return 0;
}
+static void
+virtio_control_queue_notify(struct virtqueue *vq, __rte_unused void *cookie)
+{
+ virtqueue_notify(vq);
+}
+
+static int
+virtio_crypto_init_queue(struct rte_cryptodev *dev, uint16_t queue_idx)
+{
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+ int queue_type = virtio_get_queue_type(hw, queue_idx);
+ int numa_node = dev->device->numa_node;
+ char vq_name[VIRTQUEUE_MAX_NAME_SZ];
+ unsigned int vq_size;
+ struct virtqueue *vq;
+ int ret;
+
+ PMD_INIT_LOG(INFO, "setting up queue: %u on NUMA node %d",
+ queue_idx, numa_node);
+
+ /*
+ * Read the virtqueue size from the Queue Size field
+ * Always a power of 2; a value of 0 means the virtqueue does not exist
+ */
+ vq_size = VTPCI_OPS(hw)->get_queue_num(hw, queue_idx);
+ PMD_INIT_LOG(DEBUG, "vq_size: %u", vq_size);
+ if (vq_size == 0) {
+ PMD_INIT_LOG(ERR, "virtqueue does not exist");
+ return -EINVAL;
+ }
+
+ if (!rte_is_power_of_2(vq_size)) {
+ PMD_INIT_LOG(ERR, "split virtqueue size is not power of 2");
+ return -EINVAL;
+ }
+
+ snprintf(vq_name, sizeof(vq_name), "dev%d_vq%d", dev->data->dev_id, queue_idx);
+
+ vq = virtcrypto_queue_alloc(hw, queue_idx, vq_size, numa_node, vq_name);
+ if (!vq) {
+ PMD_INIT_LOG(ERR, "virtqueue init failed");
+ return -ENOMEM;
+ }
+
+ hw->vqs[queue_idx] = vq;
+
+ if (queue_type == VTCRYPTO_CTRLQ) {
+ hw->cvq = &vq->cq;
+ vq->cq.notify_queue = &virtio_control_queue_notify;
+ }
+
+ if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+ PMD_INIT_LOG(ERR, "setup_queue failed");
+ ret = -EINVAL;
+ goto clean_vq;
+ }
+
+ return 0;
+
+clean_vq:
+ if (queue_type == VTCRYPTO_CTRLQ)
+ hw->cvq = NULL;
+ virtcrypto_queue_free(vq);
+ hw->vqs[queue_idx] = NULL;
+
+ return ret;
+}
+
+static int
+virtio_crypto_alloc_queues(struct rte_cryptodev *dev)
+{
+ struct virtio_crypto_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->max_dataqueues + 1;
+ uint16_t i;
+ int ret;
+
+ hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "failed to allocate vqs");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ ret = virtio_crypto_init_queue(dev, i);
+ if (ret < 0) {
+ virtio_crypto_free_queues(dev);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
/* reset device and renegotiate features if needed */
static int
virtio_crypto_init_device(struct rte_cryptodev *cryptodev,
@@ -803,8 +624,6 @@ static int
virtio_crypto_dev_configure(struct rte_cryptodev *cryptodev,
struct rte_cryptodev_config *config __rte_unused)
{
- struct virtio_crypto_hw *hw = cryptodev->data->dev_private;
-
PMD_INIT_FUNC_TRACE();
if (virtio_crypto_init_device(cryptodev,
@@ -815,10 +634,11 @@ virtio_crypto_dev_configure(struct rte_cryptodev *cryptodev,
* [0, 1, ... ,(config->max_dataqueues - 1)] are data queues
* config->max_dataqueues is the control queue
*/
- if (virtio_crypto_ctrlq_setup(cryptodev, hw->max_dataqueues) < 0) {
- VIRTIO_CRYPTO_INIT_LOG_ERR("control queue setup error");
+ if (virtio_crypto_alloc_queues(cryptodev) < 0) {
+ VIRTIO_CRYPTO_DRV_LOG_ERR("failed to create virtqueues");
return -1;
}
+
virtio_crypto_ctrlq_start(cryptodev);
return 0;
@@ -953,7 +773,7 @@ virtio_crypto_clear_session(
uint64_t session_id = ctrl->u.destroy_session.session_id;
hw = dev->data->dev_private;
- vq = hw->cvq;
+ vq = virtcrypto_cq_to_vq(hw->cvq);
VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
"vq = %p", vq->vq_desc_head_idx, vq);
@@ -988,14 +808,14 @@ virtio_crypto_clear_session(
/* use only a single desc entry */
head = vq->vq_desc_head_idx;
- vq->vq_ring.desc[head].flags = VRING_DESC_F_INDIRECT;
- vq->vq_ring.desc[head].addr = malloc_phys_addr + desc_offset;
- vq->vq_ring.desc[head].len
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_INDIRECT;
+ vq->vq_split.ring.desc[head].addr = malloc_phys_addr + desc_offset;
+ vq->vq_split.ring.desc[head].len
= NUM_ENTRY_SYM_CLEAR_SESSION
* sizeof(struct vring_desc);
vq->vq_free_cnt -= needed;
- vq->vq_desc_head_idx = vq->vq_ring.desc[head].next;
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[head].next;
vq_update_avail_ring(vq, head);
vq_update_avail_idx(vq);
@@ -1006,27 +826,27 @@ virtio_crypto_clear_session(
virtqueue_notify(vq);
rte_rmb();
- while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
+ while (vq->vq_used_cons_idx == vq->vq_split.ring.used->idx) {
rte_rmb();
usleep(100);
}
- while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
+ while (vq->vq_used_cons_idx != vq->vq_split.ring.used->idx) {
uint32_t idx, desc_idx, used_idx;
struct vring_used_elem *uep;
used_idx = (uint32_t)(vq->vq_used_cons_idx
& (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
idx = (uint32_t) uep->id;
desc_idx = idx;
- while (vq->vq_ring.desc[desc_idx].flags
+ while (vq->vq_split.ring.desc[desc_idx].flags
& VRING_DESC_F_NEXT) {
- desc_idx = vq->vq_ring.desc[desc_idx].next;
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
vq->vq_free_cnt++;
}
- vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
vq->vq_desc_head_idx = idx;
vq->vq_used_cons_idx++;
vq->vq_free_cnt++;
@@ -1377,14 +1197,16 @@ virtio_crypto_sym_configure_session(
struct rte_crypto_sym_xform *xform,
struct rte_cryptodev_sym_session *sess)
{
- int ret;
- struct virtio_crypto_session *session;
- struct virtio_crypto_op_ctrl_req *ctrl_req;
- enum virtio_crypto_cmd_id cmd_id;
uint8_t cipher_key_data[VIRTIO_CRYPTO_MAX_KEY_SIZE] = {0};
uint8_t auth_key_data[VIRTIO_CRYPTO_MAX_KEY_SIZE] = {0};
+ struct virtio_crypto_op_ctrl_req *ctrl_req;
+ struct virtio_crypto_session_input *input;
+ struct virtio_crypto_session *session;
+ enum virtio_crypto_cmd_id cmd_id;
struct virtio_crypto_hw *hw;
- struct virtqueue *control_vq;
+ struct virtio_pmd_ctrl *ctrl;
+ int dlen[2], dnum;
+ int ret;
PMD_INIT_FUNC_TRACE();
@@ -1396,13 +1218,13 @@ virtio_crypto_sym_configure_session(
}
session = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
memset(session, 0, sizeof(struct virtio_crypto_session));
- ctrl_req = &session->ctrl;
+ ctrl = &session->ctrl;
+ ctrl_req = &ctrl->hdr;
ctrl_req->header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
/* FIXME: support multiqueue */
ctrl_req->header.queue_id = 0;
hw = dev->data->dev_private;
- control_vq = hw->cvq;
cmd_id = virtio_crypto_get_chain_order(xform);
if (cmd_id == VIRTIO_CRYPTO_CMD_CIPHER_HASH)
@@ -1414,7 +1236,13 @@ virtio_crypto_sym_configure_session(
switch (cmd_id) {
case VIRTIO_CRYPTO_CMD_CIPHER_HASH:
- case VIRTIO_CRYPTO_CMD_HASH_CIPHER:
+ case VIRTIO_CRYPTO_CMD_HASH_CIPHER: {
+ struct rte_crypto_cipher_xform *cipher_xform = NULL;
+ struct rte_crypto_auth_xform *auth_xform = NULL;
+
+ cipher_xform = virtio_crypto_get_cipher_xform(xform);
+ auth_xform = virtio_crypto_get_auth_xform(xform);
+
ctrl_req->u.sym_create_session.op_type
= VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING;
@@ -1425,15 +1253,19 @@ virtio_crypto_sym_configure_session(
"padding sym op ctrl req failed");
goto error_out;
}
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- cipher_key_data, auth_key_data, session);
- if (ret < 0) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR(
- "create session failed: %d", ret);
- goto error_out;
- }
+
+ dlen[0] = cipher_xform->key.length;
+ memcpy(ctrl->data, cipher_key_data, dlen[0]);
+ dlen[1] = auth_xform->key.length;
+ memcpy(ctrl->data + dlen[0], auth_key_data, dlen[1]);
+ dnum = 2;
break;
- case VIRTIO_CRYPTO_CMD_CIPHER:
+ }
+ case VIRTIO_CRYPTO_CMD_CIPHER: {
+ struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+ cipher_xform = virtio_crypto_get_cipher_xform(xform);
+
ctrl_req->u.sym_create_session.op_type
= VIRTIO_CRYPTO_SYM_OP_CIPHER;
ret = virtio_crypto_sym_pad_op_ctrl_req(ctrl_req, xform,
@@ -1443,22 +1275,43 @@ virtio_crypto_sym_configure_session(
"padding sym op ctrl req failed");
goto error_out;
}
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- cipher_key_data, NULL, session);
- if (ret < 0) {
- VIRTIO_CRYPTO_SESSION_LOG_ERR(
- "create session failed: %d", ret);
- goto error_out;
- }
+
+ dlen[0] = cipher_xform->key.length;
+ memcpy(ctrl->data, cipher_key_data, dlen[0]);
+ dnum = 1;
break;
+ }
default:
ret = -ENOTSUP;
VIRTIO_CRYPTO_SESSION_LOG_ERR(
"Unsupported operation chain order parameter");
goto error_out;
}
- return 0;
+ input = &ctrl->input;
+ input->status = VIRTIO_CRYPTO_ERR;
+ input->session_id = ~0ULL;
+
+ ret = virtio_crypto_send_command(hw->cvq, ctrl, dlen, dnum);
+ if (ret < 0) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
+ goto error_out;
+ }
+
+ ctrl = hw->cvq->hdr_mz->addr;
+ input = &ctrl->input;
+ if (input->status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
+ "status=%u, session_id=%" PRIu64 "",
+ input->status, input->session_id);
+ goto error_out;
+ } else {
+ session->session_id = input->session_id;
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
+ "session_id=%" PRIu64 "", input->session_id);
+ }
+
+ return 0;
error_out:
return ret;
}
@@ -1583,10 +1436,11 @@ virtio_crypto_asym_configure_session(
{
struct virtio_crypto_akcipher_session_para *para;
struct virtio_crypto_op_ctrl_req *ctrl_req;
- uint8_t key[VIRTIO_CRYPTO_MAX_CTRL_DATA];
+ struct virtio_crypto_session_input *input;
struct virtio_crypto_session *session;
struct virtio_crypto_hw *hw;
- struct virtqueue *control_vq;
+ struct virtio_pmd_ctrl *ctrl;
+ int dlen[1];
int ret;
PMD_INIT_FUNC_TRACE();
@@ -1600,7 +1454,8 @@ virtio_crypto_asym_configure_session(
session = CRYPTODEV_GET_ASYM_SESS_PRIV(sess);
memset(session, 0, sizeof(struct virtio_crypto_session));
- ctrl_req = &session->ctrl;
+ ctrl = &session->ctrl;
+ ctrl_req = &ctrl->hdr;
ctrl_req->header.opcode = VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION;
ctrl_req->header.queue_id = 0;
para = &ctrl_req->u.akcipher_create_session.para;
@@ -1614,7 +1469,7 @@ virtio_crypto_asym_configure_session(
return ret;
}
- ret = virtio_crypto_asym_rsa_xform_to_der(xform, key);
+ ret = virtio_crypto_asym_rsa_xform_to_der(xform, ctrl->data);
if (ret <= 0) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("Invalid RSA primitives");
return ret;
@@ -1626,15 +1481,31 @@ virtio_crypto_asym_configure_session(
para->algo = VIRTIO_CRYPTO_NO_AKCIPHER;
}
+ dlen[0] = ret;
+ input = &ctrl->input;
+ input->status = VIRTIO_CRYPTO_ERR;
+ input->session_id = ~0ULL;
+
hw = dev->data->dev_private;
- control_vq = hw->cvq;
- ret = virtio_crypto_send_command(control_vq, ctrl_req,
- key, NULL, session);
+ ret = virtio_crypto_send_command(hw->cvq, ctrl, dlen, 1);
if (ret < 0) {
VIRTIO_CRYPTO_SESSION_LOG_ERR("create session failed: %d", ret);
goto error_out;
}
+ ctrl = hw->cvq->hdr_mz->addr;
+ input = &ctrl->input;
+ if (input->status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Something wrong on backend! "
+ "status=%u, session_id=%" PRIu64 "",
+ input->status, input->session_id);
+ goto error_out;
+ } else {
+ session->session_id = input->session_id;
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Create session successfully, "
+ "session_id=%" PRIu64 "", input->session_id);
+ }
+
return 0;
error_out:
return -1;
diff --git a/drivers/crypto/virtio/virtio_cvq.c b/drivers/crypto/virtio/virtio_cvq.c
new file mode 100644
index 0000000000..c4df4a6176
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_cvq.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#include <unistd.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_errno.h>
+
+#include "virtio_cvq.h"
+#include "virtqueue.h"
+
+static struct virtio_pmd_ctrl *
+virtio_send_command(struct virtcrypto_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ struct virtqueue *vq = virtcrypto_cq_to_vq(cvq);
+ struct virtio_pmd_ctrl *result;
+ uint32_t head, i;
+ int k, sum = 0;
+
+ head = vq->vq_desc_head_idx;
+
+ /*
+ * Format is enforced in qemu code:
+ * One TX packet for header;
+ * At least one TX packet per argument;
+ * One RX packet for ACK.
+ */
+ vq->vq_split.ring.desc[head].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[head].addr = cvq->hdr_mem;
+ vq->vq_split.ring.desc[head].len = sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_free_cnt--;
+ i = vq->vq_split.ring.desc[head].next;
+
+ for (k = 0; k < dnum; k++) {
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_NEXT;
+ vq->vq_split.ring.desc[i].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req)
+ + sizeof(ctrl->input) + sizeof(uint8_t) * sum;
+ vq->vq_split.ring.desc[i].len = dlen[k];
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ i = vq->vq_split.ring.desc[i].next;
+ }
+
+ vq->vq_split.ring.desc[i].flags = VRING_DESC_F_WRITE;
+ vq->vq_split.ring.desc[i].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_split.ring.desc[i].len = sizeof(ctrl->input);
+ vq->vq_free_cnt--;
+
+ vq->vq_desc_head_idx = vq->vq_split.ring.desc[i].next;
+
+ vq_update_avail_ring(vq, head);
+ vq_update_avail_idx(vq);
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
+
+ cvq->notify_queue(vq, cvq->notify_cookie);
+
+ while (virtqueue_nused(vq) == 0)
+ usleep(100);
+
+ while (virtqueue_nused(vq)) {
+ uint32_t idx, desc_idx, used_idx;
+ struct vring_used_elem *uep;
+
+ used_idx = (uint32_t)(vq->vq_used_cons_idx
+ & (vq->vq_nentries - 1));
+ uep = &vq->vq_split.ring.used->ring[used_idx];
+ idx = (uint32_t)uep->id;
+ desc_idx = idx;
+
+ while (vq->vq_split.ring.desc[desc_idx].flags &
+ VRING_DESC_F_NEXT) {
+ desc_idx = vq->vq_split.ring.desc[desc_idx].next;
+ vq->vq_free_cnt++;
+ }
+
+ vq->vq_split.ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+ vq->vq_desc_head_idx = idx;
+
+ vq->vq_used_cons_idx++;
+ vq->vq_free_cnt++;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d vq->vq_desc_head_idx=%d",
+ vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+ result = cvq->hdr_mz->addr;
+ return result;
+}
+
+int
+virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ struct virtio_pmd_ctrl *result;
+ struct virtqueue *vq;
+ uint8_t status = ~0;
+
+ ctrl->input.status = status;
+
+ if (!cvq) {
+ PMD_INIT_LOG(ERR, "Control queue is not supported.");
+ return -1;
+ }
+
+ rte_spinlock_lock(&cvq->lock);
+ vq = virtcrypto_cq_to_vq(cvq);
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, status = %d, "
+ "vq->hw->cvq = %p vq = %p",
+ vq->vq_desc_head_idx, status, vq->hw->cvq, vq);
+
+ if (vq->vq_free_cnt < dnum + 2 || dnum < 1) {
+ rte_spinlock_unlock(&cvq->lock);
+ return -1;
+ }
+
+ memcpy(cvq->hdr_mz->addr, ctrl, sizeof(struct virtio_pmd_ctrl));
+ result = virtio_send_command(cvq, ctrl, dlen, dnum);
+
+ rte_spinlock_unlock(&cvq->lock);
+ return result->input.status;
+}
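
For a request carrying two data buffers, the chain built above can be
pictured as follows (illustration only):

/*
 * desc[head] -> virtio_crypto_op_ctrl_req    (read-only header)
 * desc[i]    -> data[0], e.g. cipher key     (read-only)
 * desc[i+1]  -> data[1], e.g. auth key       (read-only)
 * desc[last] -> virtio_crypto_session_input  (device-writable status)
 */

which is the "one header, one buffer per argument, one writable ACK" layout
the comment in virtio_send_command() refers to.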
diff --git a/drivers/crypto/virtio/virtio_cvq.h b/drivers/crypto/virtio/virtio_cvq.h
new file mode 100644
index 0000000000..1935ce1844
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_cvq.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#ifndef _VIRTIO_CVQ_H_
+#define _VIRTIO_CVQ_H_
+
+#include <rte_spinlock.h>
+#include <virtio_crypto.h>
+
+#include "virtio_cryptodev.h"
+
+struct virtqueue;
+
+struct virtcrypto_ctl {
+ const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
+ rte_iova_t hdr_mem; /**< hdr for each xmit packet */
+ rte_spinlock_t lock; /**< spinlock for control queue. */
+ void (*notify_queue)(struct virtqueue *vq, void *cookie); /**< notify ops. */
+ void *notify_cookie; /**< cookie for notify ops */
+};
+
+struct virtio_pmd_ctrl {
+ struct virtio_crypto_op_ctrl_req hdr;
+ struct virtio_crypto_session_input input;
+ uint8_t data[VIRTIO_CRYPTO_MAX_CTRL_DATA];
+};
+
+int
+virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int pkt_num);
+
+#endif /* _VIRTIO_CVQ_H_ */
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 41949c3d13..7e94c6a3c5 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -176,8 +176,7 @@ struct virtio_pci_ops {
};
struct virtio_crypto_hw {
- /* control queue */
- struct virtqueue *cvq;
+ struct virtqueue **vqs;
uint16_t dev_id;
uint16_t max_dataqueues;
uint64_t req_guest_features;
@@ -190,6 +189,9 @@ struct virtio_crypto_hw {
struct virtio_pci_common_cfg *common_cfg;
struct virtio_crypto_config *dev_cfg;
const struct rte_cryptodev_capabilities *virtio_dev_capabilities;
+ uint8_t weak_barriers;
+ struct virtcrypto_ctl *cvq;
+ bool use_va;
};
/*
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index 55839279fd..e5b0ad74d2 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -59,6 +59,7 @@ struct vring_used {
struct vring {
unsigned int num;
+ rte_iova_t desc_iova;
struct vring_desc *desc;
struct vring_avail *avail;
struct vring_used *used;
@@ -111,17 +112,24 @@ vring_size(unsigned int num, unsigned long align)
}
static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
- unsigned long align)
+vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
{
vr->num = num;
vr->desc = (struct vring_desc *) p;
+ vr->desc_iova = iova;
vr->avail = (struct vring_avail *) (p +
num * sizeof(struct vring_desc));
vr->used = (void *)
RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
}
+static inline void
+vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
+{
+ vring_init_split(vr, p, 0, align, num);
+}
+
/*
* The following is used with VIRTIO_RING_F_EVENT_IDX.
* Assuming a given event_idx value from the other size, if we have
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index 3cf25d8c1f..68fccef84b 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -14,13 +14,13 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
struct vq_desc_extra *dxp;
uint16_t desc_idx_last = desc_idx;
- dp = &vq->vq_ring.desc[desc_idx];
+ dp = &vq->vq_split.ring.desc[desc_idx];
dxp = &vq->vq_descx[desc_idx];
vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt + dxp->ndescs);
if ((dp->flags & VRING_DESC_F_INDIRECT) == 0) {
while (dp->flags & VRING_DESC_F_NEXT) {
desc_idx_last = dp->next;
- dp = &vq->vq_ring.desc[dp->next];
+ dp = &vq->vq_split.ring.desc[dp->next];
}
}
dxp->ndescs = 0;
@@ -33,7 +33,7 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END) {
vq->vq_desc_head_idx = desc_idx;
} else {
- dp_tail = &vq->vq_ring.desc[vq->vq_desc_tail_idx];
+ dp_tail = &vq->vq_split.ring.desc[vq->vq_desc_tail_idx];
dp_tail->next = desc_idx;
}
@@ -56,7 +56,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq,
for (i = 0; i < num ; i++) {
used_idx = (uint16_t)(vq->vq_used_cons_idx
& (vq->vq_nentries - 1));
- uep = &vq->vq_ring.used->ring[used_idx];
+ uep = &vq->vq_split.ring.used->ring[used_idx];
desc_idx = (uint16_t)uep->id;
cop = (struct rte_crypto_op *)
vq->vq_descx[desc_idx].crypto_op;
@@ -115,7 +115,7 @@ virtqueue_crypto_sym_pkt_header_arrange(
{
struct rte_crypto_sym_op *sym_op = cop->sym;
struct virtio_crypto_op_data_req *req_data = data;
- struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl.hdr;
struct virtio_crypto_sym_create_session_req *sym_sess_req =
&ctrl->u.sym_create_session;
struct virtio_crypto_alg_chain_session_para *chain_para =
@@ -304,7 +304,7 @@ virtqueue_crypto_sym_enqueue_xmit(
desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
/* indirect vring: digest result */
- para = &(session->ctrl.u.sym_create_session.u.chain.para);
+ para = &(session->ctrl.hdr.u.sym_create_session.u.chain.para);
if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN)
hash_result_len = para->u.hash_param.hash_result_len;
if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)
@@ -327,7 +327,7 @@ virtqueue_crypto_sym_enqueue_xmit(
dxp->ndescs = needed;
/* use a single buffer */
- start_dp = txvq->vq_ring.desc;
+ start_dp = txvq->vq_split.ring.desc;
start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
indirect_vring_addr_offset;
start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
@@ -349,7 +349,7 @@ virtqueue_crypto_asym_pkt_header_arrange(
struct virtio_crypto_op_data_req *data,
struct virtio_crypto_session *session)
{
- struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+ struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl.hdr;
struct virtio_crypto_op_data_req *req_data = data;
struct rte_crypto_asym_op *asym_op = cop->asym;
@@ -513,7 +513,7 @@ virtqueue_crypto_asym_enqueue_xmit(
dxp->ndescs = needed;
/* use a single buffer */
- start_dp = txvq->vq_ring.desc;
+ start_dp = txvq->vq_split.ring.desc;
start_dp[head_idx].addr = indirect_op_data_req_phys_addr +
indirect_vring_addr_offset;
start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
@@ -533,25 +533,14 @@ static int
virtio_crypto_vring_start(struct virtqueue *vq)
{
struct virtio_crypto_hw *hw = vq->hw;
- int i, size = vq->vq_nentries;
- struct vring *vr = &vq->vq_ring;
uint8_t *ring_mem = vq->vq_ring_virt_mem;
PMD_INIT_FUNC_TRACE();
- vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
- vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
- vq->vq_free_cnt = vq->vq_nentries;
-
- /* Chain all the descriptors in the ring with an END */
- for (i = 0; i < size - 1; i++)
- vr->desc[i].next = (uint16_t)(i + 1);
- vr->desc[i].next = VQ_RING_DESC_CHAIN_END;
-
- /*
- * Disable device(host) interrupting guest
- */
- virtqueue_disable_intr(vq);
+ if (ring_mem == NULL) {
+ VIRTIO_CRYPTO_INIT_LOG_ERR("virtqueue ring memory is NULL");
+ return -EINVAL;
+ }
/*
* Set guest physical address of the virtqueue
@@ -572,8 +561,9 @@ virtio_crypto_ctrlq_start(struct rte_cryptodev *dev)
struct virtio_crypto_hw *hw = dev->data->dev_private;
if (hw->cvq) {
- virtio_crypto_vring_start(hw->cvq);
- VIRTQUEUE_DUMP((struct virtqueue *)hw->cvq);
+ rte_spinlock_init(&hw->cvq->lock);
+ virtio_crypto_vring_start(virtcrypto_cq_to_vq(hw->cvq));
+ VIRTQUEUE_DUMP(virtcrypto_cq_to_vq(hw->cvq));
}
}
diff --git a/drivers/crypto/virtio/virtio_rxtx.h b/drivers/crypto/virtio/virtio_rxtx.h
new file mode 100644
index 0000000000..2771062e44
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_rxtx.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell.
+ */
+
+#ifndef _VIRTIO_RXTX_H_
+#define _VIRTIO_RXTX_H_
+
+struct virtcrypto_data {
+	const struct rte_memzone *hdr_mz; /**< memzone to populate hdr. */
+	rte_iova_t hdr_mem; /**< IOVA (or VA when use_va) of the hdr memzone. */
+};
+
+#endif /* _VIRTIO_RXTX_H_ */
diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c
index 3e2db1ebd2..af7f121f67 100644
--- a/drivers/crypto/virtio/virtqueue.c
+++ b/drivers/crypto/virtio/virtqueue.c
@@ -7,7 +7,9 @@
#include <rte_mbuf.h>
#include <rte_crypto.h>
#include <rte_malloc.h>
+#include <rte_errno.h>
+#include "virtio_cryptodev.h"
#include "virtqueue.h"
void
@@ -18,7 +20,7 @@ virtqueue_disable_intr(struct virtqueue *vq)
* not to interrupt when it consumes packets
* Note: this is only considered a hint to the host
*/
- vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+ vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}
void
@@ -32,10 +34,193 @@ virtqueue_detatch_unused(struct virtqueue *vq)
for (idx = 0; idx < vq->vq_nentries; idx++) {
cop = vq->vq_descx[idx].crypto_op;
if (cop) {
- rte_pktmbuf_free(cop->sym->m_src);
- rte_pktmbuf_free(cop->sym->m_dst);
+ if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ rte_pktmbuf_free(cop->sym->m_src);
+ rte_pktmbuf_free(cop->sym->m_dst);
+ }
+
rte_crypto_op_free(cop);
vq->vq_descx[idx].crypto_op = NULL;
}
}
}
+
+static void
+virtio_init_vring(struct virtqueue *vq)
+{
+ uint8_t *ring_mem = vq->vq_ring_virt_mem;
+ struct vring *vr = &vq->vq_split.ring;
+ int size = vq->vq_nentries;
+
+ PMD_INIT_FUNC_TRACE();
+
+ memset(ring_mem, 0, vq->vq_ring_size);
+
+ vq->vq_used_cons_idx = 0;
+ vq->vq_desc_head_idx = 0;
+ vq->vq_avail_idx = 0;
+ vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+ vq->vq_free_cnt = vq->vq_nentries;
+ memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
+
+ vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_split(vr->desc, size);
+
+ /*
+ * Disable device(host) interrupting guest
+ */
+ virtqueue_disable_intr(vq);
+}
+
+static int
+virtio_alloc_queue_headers(struct virtqueue *vq, int numa_node, const char *name)
+{
+ char hdr_name[VIRTQUEUE_MAX_NAME_SZ];
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ ssize_t size;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTCRYPTO_DATAQ:
+ /*
+ * Op cookie for every ring element. This memory can be optimized
+ * based on descriptor requirements. For example, if a descriptor
+ * is indirect, then the cookie can be shared among all the
+ * descriptors in the chain.
+ */
+ size = vq->vq_nentries * sizeof(struct virtio_crypto_op_cookie);
+ hdr_mz = &vq->dq.hdr_mz;
+ hdr_mem = &vq->dq.hdr_mem;
+ break;
+ case VTCRYPTO_CTRLQ:
+ /* One control operation at a time in control queue */
+ size = sizeof(struct virtio_pmd_ctrl);
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ default:
+ return 0;
+ }
+
+ snprintf(hdr_name, sizeof(hdr_name), "%s_hdr", name);
+ *hdr_mz = rte_memzone_reserve_aligned(hdr_name, size, numa_node,
+ RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE);
+ if (*hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ *hdr_mz = rte_memzone_lookup(hdr_name);
+ if (*hdr_mz == NULL)
+ return -ENOMEM;
+ }
+
+ memset((*hdr_mz)->addr, 0, size);
+
+ if (vq->hw->use_va)
+ *hdr_mem = (uintptr_t)(*hdr_mz)->addr;
+ else
+ *hdr_mem = (uintptr_t)(*hdr_mz)->iova;
+
+ return 0;
+}
+
+static void
+virtio_free_queue_headers(struct virtqueue *vq)
+{
+ const struct rte_memzone **hdr_mz;
+ rte_iova_t *hdr_mem;
+ int queue_type;
+
+ queue_type = virtio_get_queue_type(vq->hw, vq->vq_queue_index);
+ switch (queue_type) {
+ case VTCRYPTO_DATAQ:
+ hdr_mz = &vq->dq.hdr_mz;
+ hdr_mem = &vq->dq.hdr_mem;
+ break;
+ case VTCRYPTO_CTRLQ:
+ hdr_mz = &vq->cq.hdr_mz;
+ hdr_mem = &vq->cq.hdr_mem;
+ break;
+ default:
+ return;
+ }
+
+ rte_memzone_free(*hdr_mz);
+ *hdr_mz = NULL;
+ *hdr_mem = 0;
+}
+
+struct virtqueue *
+virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index, uint16_t num,
+ int node, const char *name)
+{
+ const struct rte_memzone *mz;
+ struct virtqueue *vq;
+ unsigned int size;
+
+ size = sizeof(*vq) + num * sizeof(struct vq_desc_extra);
+ size = RTE_ALIGN_CEIL(size, RTE_CACHE_LINE_SIZE);
+
+ vq = rte_zmalloc_socket(name, size, RTE_CACHE_LINE_SIZE, node);
+ if (vq == NULL) {
+ PMD_INIT_LOG(ERR, "can not allocate vq");
+ return NULL;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq: %p", vq);
+ vq->hw = hw;
+ vq->vq_queue_index = index;
+ vq->vq_nentries = num;
+
+ /*
+ * Reserve a memzone for vring elements
+ */
+ size = vring_size(num, VIRTIO_PCI_VRING_ALIGN);
+ vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
+ PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
+
+ mz = rte_memzone_reserve_aligned(name, vq->vq_ring_size, node,
+ RTE_MEMZONE_IOVA_CONTIG, VIRTIO_PCI_VRING_ALIGN);
+ if (mz == NULL) {
+ if (rte_errno == EEXIST)
+ mz = rte_memzone_lookup(name);
+ if (mz == NULL)
+ goto free_vq;
+ }
+
+ memset(mz->addr, 0, mz->len);
+ vq->mz = mz;
+ vq->vq_ring_virt_mem = mz->addr;
+
+ if (hw->use_va)
+ vq->vq_ring_mem = (uintptr_t)mz->addr;
+ else
+ vq->vq_ring_mem = mz->iova;
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem: 0x%" PRIx64, vq->vq_ring_mem);
+ PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: %p", vq->vq_ring_virt_mem);
+
+ virtio_init_vring(vq);
+
+ if (virtio_alloc_queue_headers(vq, node, name)) {
+ PMD_INIT_LOG(ERR, "Failed to alloc queue headers");
+ goto free_mz;
+ }
+
+ return vq;
+
+free_mz:
+ rte_memzone_free(mz);
+free_vq:
+ rte_free(vq);
+
+ return NULL;
+}
+
+void
+virtcrypto_queue_free(struct virtqueue *vq)
+{
+ virtio_free_queue_headers(vq);
+ rte_memzone_free(vq->mz);
+ rte_free(vq);
+}
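A note on the allocation pattern above: both the vring memzone and the header memzone are reserved with rte_memzone_reserve_aligned() and fall back to rte_memzone_lookup() when rte_errno is EEXIST, so a queue that is torn down and re-created (or set up from a secondary process) reuses the existing zone instead of failing. A minimal sketch of the idiom, with a hypothetical helper name (only the rte_memzone/rte_errno calls are real DPDK API):

#include <rte_memzone.h>
#include <rte_errno.h>

static const struct rte_memzone *
mz_reserve_or_lookup(const char *name, size_t len, int socket_id, unsigned int align)
{
	const struct rte_memzone *mz;

	mz = rte_memzone_reserve_aligned(name, len, socket_id,
			RTE_MEMZONE_IOVA_CONTIG, align);
	if (mz == NULL && rte_errno == EEXIST)
		mz = rte_memzone_lookup(name); /* zone left over from a previous init */

	return mz; /* NULL only on a genuine allocation failure */
}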
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
index cb08bea94f..9191d1f732 100644
--- a/drivers/crypto/virtio/virtqueue.h
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -12,10 +12,12 @@
#include <rte_memzone.h>
#include <rte_mempool.h>
+#include "virtio_cvq.h"
#include "virtio_pci.h"
#include "virtio_ring.h"
#include "virtio_logs.h"
#include "virtio_crypto.h"
+#include "virtio_rxtx.h"
struct rte_mbuf;
@@ -46,11 +48,26 @@ struct vq_desc_extra {
void *crypto_op;
void *cookie;
uint16_t ndescs;
+ uint16_t next;
};
+#define virtcrypto_dq_to_vq(dvq) container_of(dvq, struct virtqueue, dq)
+#define virtcrypto_cq_to_vq(cvq) container_of(cvq, struct virtqueue, cq)
+
struct virtqueue {
/**< virtio_crypto_hw structure pointer. */
struct virtio_crypto_hw *hw;
+ union {
+ struct {
+ /**< vring keeping desc, used and avail */
+ struct vring ring;
+ } vq_split;
+ };
+ union {
+ struct virtcrypto_data dq;
+ struct virtcrypto_ctl cq;
+ };
+
/**< mem zone to populate RX ring. */
const struct rte_memzone *mz;
/**< memzone to populate hdr and request. */
@@ -62,7 +79,6 @@ struct virtqueue {
unsigned int vq_ring_size;
phys_addr_t vq_ring_mem; /**< physical address of vring */
- struct vring vq_ring; /**< vring keeping desc, used and avail */
uint16_t vq_free_cnt; /**< num of desc available */
uint16_t vq_nentries; /**< vring desc numbers */
@@ -101,6 +117,11 @@ void virtqueue_disable_intr(struct virtqueue *vq);
*/
void virtqueue_detatch_unused(struct virtqueue *vq);
+struct virtqueue *virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index,
+ uint16_t num, int node, const char *name);
+
+void virtcrypto_queue_free(struct virtqueue *vq);
+
static inline int
virtqueue_full(const struct virtqueue *vq)
{
@@ -108,13 +129,13 @@ virtqueue_full(const struct virtqueue *vq)
}
#define VIRTQUEUE_NUSED(vq) \
- ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
+ ((uint16_t)((vq)->vq_split.ring.used->idx - (vq)->vq_used_cons_idx))
static inline void
vq_update_avail_idx(struct virtqueue *vq)
{
virtio_wmb();
- vq->vq_ring.avail->idx = vq->vq_avail_idx;
+ vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
}
static inline void
@@ -129,15 +150,15 @@ vq_update_avail_ring(struct virtqueue *vq, uint16_t desc_idx)
* descriptor.
*/
avail_idx = (uint16_t)(vq->vq_avail_idx & (vq->vq_nentries - 1));
- if (unlikely(vq->vq_ring.avail->ring[avail_idx] != desc_idx))
- vq->vq_ring.avail->ring[avail_idx] = desc_idx;
+ if (unlikely(vq->vq_split.ring.avail->ring[avail_idx] != desc_idx))
+ vq->vq_split.ring.avail->ring[avail_idx] = desc_idx;
vq->vq_avail_idx++;
}
static inline int
virtqueue_kick_prepare(struct virtqueue *vq)
{
- return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
+ return !(vq->vq_split.ring.used->flags & VRING_USED_F_NO_NOTIFY);
}
static inline void
@@ -151,21 +172,69 @@ virtqueue_notify(struct virtqueue *vq)
VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq);
}
+/* Chain all the descriptors in the ring with an END */
+static inline void
+vring_desc_init_split(struct vring_desc *dp, uint16_t n)
+{
+ uint16_t i;
+
+ for (i = 0; i < n - 1; i++)
+ dp[i].next = (uint16_t)(i + 1);
+ dp[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
+static inline int
+virtio_get_queue_type(struct virtio_crypto_hw *hw, uint16_t vq_idx)
+{
+ if (vq_idx == hw->max_dataqueues)
+ return VTCRYPTO_CTRLQ;
+ else
+ return VTCRYPTO_DATAQ;
+}
+
+/* virtqueue_nused has a load-acquire or rte_io_rmb inside */
+static inline uint16_t
+virtqueue_nused(const struct virtqueue *vq)
+{
+ uint16_t idx;
+
+ if (vq->hw->weak_barriers) {
+ /**
+	 * x86 prefers using rte_smp_rmb over rte_atomic_load_explicit as it
+	 * reports slightly better perf, which comes from the branch saved
+	 * by the compiler.
+ * The if and else branches are identical with the smp and io
+ * barriers both defined as compiler barriers on x86.
+ */
+#ifdef RTE_ARCH_X86_64
+ idx = vq->vq_split.ring.used->idx;
+ virtio_rmb();
+#else
+ idx = rte_atomic_load_explicit(&(vq)->vq_split.ring.used->idx,
+ rte_memory_order_acquire);
+#endif
+ } else {
+ idx = vq->vq_split.ring.used->idx;
+ rte_io_rmb();
+ }
+ return idx - vq->vq_used_cons_idx;
+}
+
/**
* Dump virtqueue internal structures, for debug purpose only.
*/
#define VIRTQUEUE_DUMP(vq) do { \
uint16_t used_idx, nused; \
- used_idx = (vq)->vq_ring.used->idx; \
+ used_idx = (vq)->vq_split.ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
VIRTIO_CRYPTO_INIT_LOG_DBG(\
"VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
" avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
" avail.flags=0x%x; used.flags=0x%x", \
(vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
- (vq)->vq_desc_head_idx, (vq)->vq_ring.avail->idx, \
- (vq)->vq_used_cons_idx, (vq)->vq_ring.used->idx, \
- (vq)->vq_ring.avail->flags, (vq)->vq_ring.used->flags); \
+ (vq)->vq_desc_head_idx, (vq)->vq_split.ring.avail->idx, \
+ (vq)->vq_used_cons_idx, (vq)->vq_split.ring.used->idx, \
+ (vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
} while (0)
#endif /* _VIRTQUEUE_H_ */
--
2.25.1
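The dq/cq union in struct virtqueue together with the virtcrypto_dq_to_vq()/virtcrypto_cq_to_vq() macros added above lets queue-type-specific code hold a pointer to only its own state and still recover the enclosing virtqueue via container_of(). A self-contained sketch of the same pattern (the struct names here are illustrative, not the driver's):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct ctl { int dummy; };

struct queue {
	int index;
	struct ctl cq;	/* embedded control-queue state */
};

int main(void)
{
	struct queue q = { .index = 7 };
	struct ctl *c = &q.cq;
	/* map the embedded member back to its enclosing queue */
	struct queue *back = container_of(c, struct queue, cq);

	printf("%d\n", back->index);	/* prints 7 */
	return 0;
}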
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v3 3/6] crypto/virtio: add packed ring support
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 1/6] crypto/virtio: add asymmetric RSA support Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 2/6] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
@ 2025-02-21 17:41 ` Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 4/6] crypto/virtio: add vDPA backend Gowrishankar Muthukrishnan
` (2 subsequent siblings)
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:41 UTC (permalink / raw)
To: dev, Jay Zhou; +Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
Add packed virtqueue (VIRTIO_F_RING_PACKED) support to the virtio crypto PMD, covering the control queue and the data-path enqueue/dequeue routines.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/virtio/virtio_cryptodev.c | 125 +++++++
drivers/crypto/virtio/virtio_cryptodev.h | 13 +-
drivers/crypto/virtio/virtio_cvq.c | 103 +++++-
drivers/crypto/virtio/virtio_pci.h | 25 ++
drivers/crypto/virtio/virtio_ring.h | 59 ++-
drivers/crypto/virtio/virtio_rxtx.c | 444 ++++++++++++++++++++++-
drivers/crypto/virtio/virtqueue.c | 50 ++-
drivers/crypto/virtio/virtqueue.h | 134 ++++++-
8 files changed, 924 insertions(+), 29 deletions(-)
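Packed rings replace the three split-ring structures (desc/avail/used) with a single descriptor array that driver and device chase each other around; each descriptor carries AVAIL and USED flag bits that are compared against a wrap counter the driver flips on every ring wrap-around. The completion test this patch implements in desc_is_used() boils down to the following sketch (the constants mirror the VRING_PACKED_DESC_F_AVAIL/USED values defined below):

#include <stdint.h>
#include <stdbool.h>

#define F_AVAIL (1 << 7)
#define F_USED  (1 << 15)

static bool
is_used(uint16_t flags, bool used_wrap_counter)
{
	bool avail = (flags & F_AVAIL) != 0;
	bool used = (flags & F_USED) != 0;

	/* the device completes a descriptor by making USED equal AVAIL;
	 * both must also match the driver's current wrap counter */
	return avail == used && used == used_wrap_counter;
}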
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 6bb76ff15e..92fea557ab 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -869,6 +869,125 @@ virtio_crypto_clear_session(
rte_free(ctrl);
}
+static void
+virtio_crypto_clear_session_packed(
+ struct rte_cryptodev *dev,
+ struct virtio_crypto_op_ctrl_req *ctrl)
+{
+ struct virtio_crypto_hw *hw;
+ struct virtqueue *vq;
+ struct vring_packed_desc *desc;
+ uint8_t *status;
+ uint8_t needed = 1;
+ uint32_t head;
+ uint64_t malloc_phys_addr;
+ uint8_t len_inhdr = sizeof(struct virtio_crypto_inhdr);
+ uint32_t len_op_ctrl_req = sizeof(struct virtio_crypto_op_ctrl_req);
+ uint64_t session_id = ctrl->u.destroy_session.session_id;
+ uint16_t flags;
+ uint8_t nb_descs = 0;
+
+ hw = dev->data->dev_private;
+ vq = virtcrypto_cq_to_vq(hw->cvq);
+ head = vq->vq_avail_idx;
+ flags = vq->vq_packed.cached_flags;
+
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("vq->vq_desc_head_idx = %d, "
+ "vq = %p", vq->vq_desc_head_idx, vq);
+
+ if (vq->vq_free_cnt < needed) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR(
+ "vq->vq_free_cnt = %d is less than %d, "
+ "not enough", vq->vq_free_cnt, needed);
+ return;
+ }
+
+ malloc_phys_addr = rte_malloc_virt2iova(ctrl);
+
+ /* status part */
+ status = &(((struct virtio_crypto_inhdr *)
+ ((uint8_t *)ctrl + len_op_ctrl_req))->status);
+ *status = VIRTIO_CRYPTO_ERR;
+
+ /* indirect desc vring part */
+ desc = vq->vq_packed.ring.desc;
+
+ /* ctrl request part */
+ desc[head].addr = malloc_phys_addr;
+ desc[head].len = len_op_ctrl_req;
+ desc[head].flags = VRING_DESC_F_NEXT | vq->vq_packed.cached_flags;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ /* status part */
+ desc[vq->vq_avail_idx].addr = malloc_phys_addr + len_op_ctrl_req;
+ desc[vq->vq_avail_idx].len = len_inhdr;
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ virtqueue_store_flags_packed(&desc[head], VRING_DESC_F_NEXT | flags,
+ vq->hw->weak_barriers);
+
+ virtio_wmb(vq->hw->weak_barriers);
+ virtqueue_notify(vq);
+
+ /* wait for used desc in virtqueue
+ * desc_is_used has a load-acquire or rte_io_rmb inside
+ */
+ rte_rmb();
+ while (!desc_is_used(&desc[head], vq)) {
+ rte_rmb();
+ usleep(100);
+ }
+
+ /* now get used descriptors */
+ vq->vq_free_cnt += nb_descs;
+ vq->vq_used_cons_idx += nb_descs;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d "
+ "vq->vq_queue_idx=%d "
+ "vq->vq_avail_idx=%d "
+ "vq->vq_used_cons_idx=%d "
+ "vq->vq_packed.cached_flags=0x%x "
+ "vq->vq_packed.used_wrap_counter=%d",
+ vq->vq_free_cnt,
+ vq->vq_queue_index,
+ vq->vq_avail_idx,
+ vq->vq_used_cons_idx,
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
+
+ if (*status != VIRTIO_CRYPTO_OK) {
+ VIRTIO_CRYPTO_SESSION_LOG_ERR("Close session failed "
+ "status=%"PRIu32", session_id=%"PRIu64"",
+ *status, session_id);
+ rte_free(ctrl);
+ return;
+ }
+
+ VIRTIO_CRYPTO_INIT_LOG_DBG("vq->vq_free_cnt=%d "
+ "vq->vq_desc_head_idx=%d",
+ vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+ VIRTIO_CRYPTO_SESSION_LOG_INFO("Close session %"PRIu64" successfully ",
+ session_id);
+
+ rte_free(ctrl);
+}
+
static void
virtio_crypto_sym_clear_session(
struct rte_cryptodev *dev,
@@ -906,6 +1025,9 @@ virtio_crypto_sym_clear_session(
ctrl->header.queue_id = 0;
ctrl->u.destroy_session.session_id = session->session_id;
+ if (vtpci_with_packed_queue(dev->data->dev_private))
+ return virtio_crypto_clear_session_packed(dev, ctrl);
+
return virtio_crypto_clear_session(dev, ctrl);
}
@@ -943,6 +1065,9 @@ virtio_crypto_asym_clear_session(
ctrl->header.queue_id = 0;
ctrl->u.destroy_session.session_id = session->session_id;
+ if (vtpci_with_packed_queue(dev->data->dev_private))
+ return virtio_crypto_clear_session_packed(dev, ctrl);
+
return virtio_crypto_clear_session(dev, ctrl);
}
diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
index d8b1e1abdd..f8498246e2 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.h
+++ b/drivers/crypto/virtio/virtio_cryptodev.h
@@ -10,13 +10,21 @@
#include "virtio_ring.h"
/* Features desired/implemented by this driver. */
-#define VIRTIO_CRYPTO_PMD_GUEST_FEATURES (1ULL << VIRTIO_F_VERSION_1)
+#define VIRTIO_CRYPTO_PMD_GUEST_FEATURES (1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_RING_F_INDIRECT_DESC | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
#define NUM_ENTRY_VIRTIO_CRYPTO_OP 7
#define VIRTIO_CRYPTO_MAX_IV_SIZE 16
+#define VIRTIO_CRYPTO_MAX_MSG_SIZE 512
+#define VIRTIO_CRYPTO_MAX_SIGN_SIZE 512
+#define VIRTIO_CRYPTO_MAX_CIPHER_SIZE 1024
#define VIRTIO_CRYPTO_MAX_KEY_SIZE 256
@@ -36,6 +44,9 @@ struct virtio_crypto_op_cookie {
struct virtio_crypto_inhdr inhdr;
struct vring_desc desc[NUM_ENTRY_VIRTIO_CRYPTO_OP];
uint8_t iv[VIRTIO_CRYPTO_MAX_IV_SIZE];
+ uint8_t message[VIRTIO_CRYPTO_MAX_MSG_SIZE];
+ uint8_t sign[VIRTIO_CRYPTO_MAX_SIGN_SIZE];
+ uint8_t cipher[VIRTIO_CRYPTO_MAX_CIPHER_SIZE];
};
/*
diff --git a/drivers/crypto/virtio/virtio_cvq.c b/drivers/crypto/virtio/virtio_cvq.c
index c4df4a6176..b69c31b7d5 100644
--- a/drivers/crypto/virtio/virtio_cvq.c
+++ b/drivers/crypto/virtio/virtio_cvq.c
@@ -12,7 +12,102 @@
#include "virtqueue.h"
static struct virtio_pmd_ctrl *
-virtio_send_command(struct virtcrypto_ctl *cvq,
+virtio_send_command_packed(struct virtcrypto_ctl *cvq,
+ struct virtio_pmd_ctrl *ctrl,
+ int *dlen, int dnum)
+{
+ struct virtqueue *vq = virtcrypto_cq_to_vq(cvq);
+ int head;
+ struct vring_packed_desc *desc = vq->vq_packed.ring.desc;
+ struct virtio_pmd_ctrl *result;
+ uint16_t flags;
+ int sum = 0;
+ int nb_descs = 0;
+ int k;
+
+ /*
+ * Format is enforced in qemu code:
+	 * one descriptor for the request header;
+	 * at least one descriptor per argument;
+	 * one device-writable descriptor for the ACK/status.
+ */
+ head = vq->vq_avail_idx;
+ flags = vq->vq_packed.cached_flags;
+ desc[head].addr = cvq->hdr_mem;
+ desc[head].len = sizeof(struct virtio_crypto_op_ctrl_req);
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ for (k = 0; k < dnum; k++) {
+ desc[vq->vq_avail_idx].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req)
+ + sizeof(ctrl->input) + sizeof(uint8_t) * sum;
+ desc[vq->vq_avail_idx].len = dlen[k];
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
+ vq->vq_packed.cached_flags;
+ sum += dlen[k];
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^=
+ VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+ }
+
+ desc[vq->vq_avail_idx].addr = cvq->hdr_mem
+ + sizeof(struct virtio_crypto_op_ctrl_req);
+ desc[vq->vq_avail_idx].len = sizeof(ctrl->input);
+ desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
+ vq->vq_packed.cached_flags;
+ vq->vq_free_cnt--;
+ nb_descs++;
+ if (++vq->vq_avail_idx >= vq->vq_nentries) {
+ vq->vq_avail_idx -= vq->vq_nentries;
+ vq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ virtqueue_store_flags_packed(&desc[head], VRING_DESC_F_NEXT | flags,
+ vq->hw->weak_barriers);
+
+ virtio_wmb(vq->hw->weak_barriers);
+ cvq->notify_queue(vq, cvq->notify_cookie);
+
+ /* wait for used desc in virtqueue
+ * desc_is_used has a load-acquire or rte_io_rmb inside
+ */
+ while (!desc_is_used(&desc[head], vq))
+ usleep(100);
+
+ /* now get used descriptors */
+ vq->vq_free_cnt += nb_descs;
+ vq->vq_used_cons_idx += nb_descs;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+
+ PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d "
+ "vq->vq_avail_idx=%d "
+ "vq->vq_used_cons_idx=%d "
+ "vq->vq_packed.cached_flags=0x%x "
+ "vq->vq_packed.used_wrap_counter=%d",
+ vq->vq_free_cnt,
+ vq->vq_avail_idx,
+ vq->vq_used_cons_idx,
+ vq->vq_packed.cached_flags,
+ vq->vq_packed.used_wrap_counter);
+
+ result = cvq->hdr_mz->addr;
+ return result;
+}
+
+static struct virtio_pmd_ctrl *
+virtio_send_command_split(struct virtcrypto_ctl *cvq,
struct virtio_pmd_ctrl *ctrl,
int *dlen, int dnum)
{
@@ -122,7 +217,11 @@ virtio_crypto_send_command(struct virtcrypto_ctl *cvq, struct virtio_pmd_ctrl *c
}
memcpy(cvq->hdr_mz->addr, ctrl, sizeof(struct virtio_pmd_ctrl));
- result = virtio_send_command(cvq, ctrl, dlen, dnum);
+
+ if (vtpci_with_packed_queue(vq->hw))
+ result = virtio_send_command_packed(cvq, ctrl, dlen, dnum);
+ else
+ result = virtio_send_command_split(cvq, ctrl, dlen, dnum);
rte_spinlock_unlock(&cvq->lock);
return result->input.status;
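Note that both send paths wait for completion by spinning on the used state with a 100us sleep and no upper bound, so the control path hangs if the device never completes the request. A bounded variant is sketched below, assuming the driver's desc_is_used() and virtqueue types are in scope (the one-second budget is an arbitrary choice, not part of the patch):

#include <unistd.h>
#include <errno.h>

static int
wait_desc_used(struct vring_packed_desc *desc, struct virtqueue *vq)
{
	int budget = 10000;	/* 10000 * 100us = 1s */

	while (!desc_is_used(desc, vq) && --budget > 0)
		usleep(100);

	return budget > 0 ? 0 : -ETIMEDOUT;
}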
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 7e94c6a3c5..79945cb88e 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -83,6 +83,25 @@ struct virtqueue;
#define VIRTIO_F_VERSION_1 32
#define VIRTIO_F_IOMMU_PLATFORM 33
+#define VIRTIO_F_RING_PACKED 34
+
+/*
+ * Inorder feature indicates that all buffers are used by the device
+ * in the same order in which they have been made available.
+ */
+#define VIRTIO_F_IN_ORDER 35
+
+/*
+ * This feature indicates that memory accesses by the driver and the device
+ * are ordered in a way described by the platform.
+ */
+#define VIRTIO_F_ORDER_PLATFORM 36
+
+/*
+ * This feature indicates that the driver passes extra data (besides
+ * identifying the virtqueue) in its device notifications.
+ */
+#define VIRTIO_F_NOTIFICATION_DATA 38
/* The Guest publishes the used index for which it expects an interrupt
* at the end of the avail ring. Host should ignore the avail->flags field.
@@ -230,6 +249,12 @@ vtpci_with_feature(struct virtio_crypto_hw *hw, uint64_t bit)
return (hw->guest_features & (1ULL << bit)) != 0;
}
+static inline int
+vtpci_with_packed_queue(struct virtio_crypto_hw *hw)
+{
+ return vtpci_with_feature(hw, VIRTIO_F_RING_PACKED);
+}
+
/*
* Function declaration from virtio_pci.c
*/
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index e5b0ad74d2..c74d1172b7 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -16,6 +16,15 @@
/* This means the buffer contains a list of buffer descriptors. */
#define VRING_DESC_F_INDIRECT 4
+/* This flag means the descriptor was made available by the driver */
+#define VRING_PACKED_DESC_F_AVAIL (1 << 7)
+/* This flag means the descriptor was used by the device */
+#define VRING_PACKED_DESC_F_USED (1 << 15)
+
+/* Frequently used combinations */
+#define VRING_PACKED_DESC_F_AVAIL_USED (VRING_PACKED_DESC_F_AVAIL | \
+ VRING_PACKED_DESC_F_USED)
+
/* The Host uses this in used->flags to advise the Guest: don't kick me
* when you add a buffer. It's unreliable, so it's simply an
* optimization. Guest will still kick if it's out of buffers.
@@ -57,6 +66,32 @@ struct vring_used {
struct vring_used_elem ring[];
};
+/* For support of packed virtqueues in Virtio 1.1 the format of descriptors
+ * looks like this.
+ */
+struct vring_packed_desc {
+ uint64_t addr;
+ uint32_t len;
+ uint16_t id;
+ uint16_t flags;
+};
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+struct vring_packed_desc_event {
+ uint16_t desc_event_off_wrap;
+ uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+ unsigned int num;
+ rte_iova_t desc_iova;
+ struct vring_packed_desc *desc;
+ struct vring_packed_desc_event *driver;
+ struct vring_packed_desc_event *device;
+};
+
struct vring {
unsigned int num;
rte_iova_t desc_iova;
@@ -99,10 +134,18 @@ struct vring {
#define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
static inline size_t
-vring_size(unsigned int num, unsigned long align)
+vring_size(struct virtio_crypto_hw *hw, unsigned int num, unsigned long align)
{
size_t size;
+ if (vtpci_with_packed_queue(hw)) {
+ size = num * sizeof(struct vring_packed_desc);
+ size += sizeof(struct vring_packed_desc_event);
+ size = RTE_ALIGN_CEIL(size, align);
+ size += sizeof(struct vring_packed_desc_event);
+ return size;
+ }
+
size = num * sizeof(struct vring_desc);
size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
size = RTE_ALIGN_CEIL(size, align);
@@ -124,6 +167,20 @@ vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova,
RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
}
+static inline void
+vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
+ unsigned long align, unsigned int num)
+{
+ vr->num = num;
+ vr->desc = (struct vring_packed_desc *)p;
+ vr->desc_iova = iova;
+ vr->driver = (struct vring_packed_desc_event *)(p +
+ vr->num * sizeof(struct vring_packed_desc));
+ vr->device = (struct vring_packed_desc_event *)
+ RTE_ALIGN_CEIL(((uintptr_t)vr->driver +
+ sizeof(struct vring_packed_desc_event)), align);
+}
+
static inline void
vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
{
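As a worked example of the new vring_size() for num = 256 and align = 4096, assuming 16-byte descriptors, 4-byte event structures, and the usual split-ring tail (the split-path continuation is not shown in the hunk above):

#include <stdio.h>
#include <stddef.h>

#define ALIGN_CEIL(v, a) (((v) + (a) - 1) & ~((size_t)(a) - 1))

int main(void)
{
	size_t num = 256, align = 4096;
	/* packed: descs + driver event, aligned up, + device event */
	size_t packed = ALIGN_CEIL(num * 16 + 4, align) + 4;
	/* split: descs + avail (flags, idx, num ring slots), aligned up,
	 * + used (flags, idx, num 8-byte elems) */
	size_t split = ALIGN_CEIL(num * 16 + 4 + num * 2, align) + 4 + num * 8;

	printf("packed=%zu split=%zu\n", packed, split);	/* 8196, 10244 */
	return 0;
}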
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
index 68fccef84b..4490034c99 100644
--- a/drivers/crypto/virtio/virtio_rxtx.c
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -4,6 +4,7 @@
#include <cryptodev_pmd.h>
#include "virtqueue.h"
+#include "virtio_ring.h"
#include "virtio_cryptodev.h"
#include "virtio_crypto_algs.h"
@@ -107,6 +108,91 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq,
return i;
}
+static uint16_t
+virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
+ struct rte_crypto_op **rx_pkts, uint16_t num)
+{
+ struct rte_crypto_op *cop;
+ uint16_t used_idx;
+ uint16_t i;
+ struct virtio_crypto_inhdr *inhdr;
+ struct virtio_crypto_op_cookie *op_cookie;
+ struct vring_packed_desc *desc;
+
+ desc = vq->vq_packed.ring.desc;
+
+ /* Caller does the check */
+ for (i = 0; i < num ; i++) {
+ used_idx = vq->vq_used_cons_idx;
+ if (!desc_is_used(&desc[used_idx], vq))
+ break;
+
+ cop = (struct rte_crypto_op *)
+ vq->vq_descx[used_idx].crypto_op;
+ if (unlikely(cop == NULL)) {
+ VIRTIO_CRYPTO_RX_LOG_DBG("vring descriptor with no "
+ "mbuf cookie at %u",
+ vq->vq_used_cons_idx);
+ break;
+ }
+
+ op_cookie = (struct virtio_crypto_op_cookie *)
+ vq->vq_descx[used_idx].cookie;
+ inhdr = &(op_cookie->inhdr);
+ switch (inhdr->status) {
+ case VIRTIO_CRYPTO_OK:
+ cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+ break;
+ case VIRTIO_CRYPTO_ERR:
+ cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_BADMSG:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_NOTSUPP:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+ vq->packets_received_failed++;
+ break;
+ case VIRTIO_CRYPTO_INVSESS:
+ cop->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+ vq->packets_received_failed++;
+ break;
+ default:
+ break;
+ }
+
+ vq->packets_received_total++;
+
+		if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+			if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN)
+				memcpy(cop->asym->rsa.sign.data, op_cookie->sign,
+						cop->asym->rsa.sign.length);
+			else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY)
+				memcpy(cop->asym->rsa.message.data, op_cookie->message,
+						cop->asym->rsa.message.length);
+			else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT)
+				memcpy(cop->asym->rsa.cipher.data, op_cookie->cipher,
+						cop->asym->rsa.cipher.length);
+			else if (cop->asym->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT)
+				memcpy(cop->asym->rsa.message.data, op_cookie->message,
+						cop->asym->rsa.message.length);
+		}
+
+ rx_pkts[i] = cop;
+ rte_mempool_put(vq->mpool, op_cookie);
+
+ vq->vq_free_cnt += 4;
+ vq->vq_used_cons_idx += 4;
+ vq->vq_descx[used_idx].crypto_op = NULL;
+ if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+ vq->vq_used_cons_idx -= vq->vq_nentries;
+ vq->vq_packed.used_wrap_counter ^= 1;
+ }
+ }
+
+ return i;
+}
+
static inline int
virtqueue_crypto_sym_pkt_header_arrange(
struct rte_crypto_op *cop,
@@ -188,7 +274,7 @@ virtqueue_crypto_sym_pkt_header_arrange(
}
static inline int
-virtqueue_crypto_sym_enqueue_xmit(
+virtqueue_crypto_sym_enqueue_xmit_split(
struct virtqueue *txvq,
struct rte_crypto_op *cop)
{
@@ -343,6 +429,160 @@ virtqueue_crypto_sym_enqueue_xmit(
return 0;
}
+static inline int
+virtqueue_crypto_sym_enqueue_xmit_packed(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_packed_desc *start_dp;
+ struct vring_packed_desc *desc;
+ uint64_t op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ uint32_t iv_addr_offset =
+ offsetof(struct virtio_crypto_op_cookie, iv);
+ struct rte_crypto_sym_op *sym_op = cop->sym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_SYM_SESS_PRIV(cop->sym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ uint32_t hash_result_len = 0;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+ struct virtio_crypto_alg_chain_session_para *para;
+ uint16_t flags = VRING_DESC_F_NEXT;
+
+ if (unlikely(sym_op->m_src->nb_segs != 1))
+ return -EMSGSIZE;
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+ if (unlikely(session == NULL))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ op_data_req_phys_addr = rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+
+ if (virtqueue_crypto_sym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ desc = &txvq->vq_packed.ring.desc[txvq->vq_desc_head_idx];
+ needed = 4;
+ flags |= txvq->vq_packed.cached_flags;
+
+ start_dp = desc;
+ idx = 0;
+
+ /* packed vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = op_data_req_phys_addr;
+ desc[idx].len = req_data_len;
+ desc[idx++].flags = flags;
+
+ /* packed vring: iv of cipher */
+ if (session->iv.length) {
+ if (cop->phys_addr)
+ desc[idx].addr = cop->phys_addr + session->iv.offset;
+ else {
+ if (session->iv.length > VIRTIO_CRYPTO_MAX_IV_SIZE)
+ return -ENOMEM;
+
+ rte_memcpy(crypto_op_cookie->iv,
+ rte_crypto_op_ctod_offset(cop,
+ uint8_t *, session->iv.offset),
+ session->iv.length);
+ desc[idx].addr = op_data_req_phys_addr + iv_addr_offset;
+ }
+
+ desc[idx].len = session->iv.length;
+ desc[idx++].flags = flags;
+ }
+
+ /* packed vring: additional auth data */
+ if (session->aad.length) {
+ desc[idx].addr = session->aad.phys_addr;
+ desc[idx].len = session->aad.length;
+ desc[idx++].flags = flags;
+ }
+
+ /* packed vring: src data */
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_src, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (sym_op->m_dst) {
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_dst, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ } else {
+ desc[idx].addr = rte_pktmbuf_iova_offset(sym_op->m_src, 0);
+ desc[idx].len = (sym_op->cipher.data.offset
+ + sym_op->cipher.data.length);
+ }
+ desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
+
+ /* packed vring: digest result */
+ para = &(session->ctrl.hdr.u.sym_create_session.u.chain.para);
+ if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN)
+ hash_result_len = para->u.hash_param.hash_result_len;
+ if (para->hash_mode == VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)
+ hash_result_len = para->u.mac_param.hash_result_len;
+ if (hash_result_len > 0) {
+ desc[idx].addr = sym_op->auth.digest.phys_addr;
+ desc[idx].len = hash_result_len;
+ desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
+ }
+
+ /* packed vring: last part, status returned */
+ desc[idx].addr = op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+	desc[idx++].flags = txvq->vq_packed.cached_flags | VRING_DESC_F_WRITE;
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+	txvq->vq_desc_head_idx = (txvq->vq_desc_head_idx + idx) & (txvq->vq_nentries - 1);
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ virtqueue_store_flags_packed(&start_dp[0],
+ start_dp[0].flags | flags,
+ txvq->hw->weak_barriers);
+ virtio_wmb(txvq->hw->weak_barriers);
+
+ return 0;
+}
+
+static inline int
+virtqueue_crypto_sym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ if (vtpci_with_packed_queue(txvq->hw))
+ return virtqueue_crypto_sym_enqueue_xmit_packed(txvq, cop);
+ else
+ return virtqueue_crypto_sym_enqueue_xmit_split(txvq, cop);
+}
+
static inline int
virtqueue_crypto_asym_pkt_header_arrange(
struct rte_crypto_op *cop,
@@ -395,7 +635,7 @@ virtqueue_crypto_asym_pkt_header_arrange(
}
static inline int
-virtqueue_crypto_asym_enqueue_xmit(
+virtqueue_crypto_asym_enqueue_xmit_split(
struct virtqueue *txvq,
struct rte_crypto_op *cop)
{
@@ -529,6 +769,179 @@ virtqueue_crypto_asym_enqueue_xmit(
return 0;
}
+static inline int
+virtqueue_crypto_asym_enqueue_xmit_packed(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ uint16_t idx = 0;
+ uint16_t num_entry;
+ uint16_t needed = 1;
+ uint16_t head_idx;
+ struct vq_desc_extra *dxp;
+ struct vring_packed_desc *start_dp;
+ struct vring_packed_desc *desc;
+ uint64_t op_data_req_phys_addr;
+ uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+ struct rte_crypto_asym_op *asym_op = cop->asym;
+ struct virtio_crypto_session *session =
+ CRYPTODEV_GET_ASYM_SESS_PRIV(cop->asym->session);
+ struct virtio_crypto_op_data_req *op_data_req;
+ struct virtio_crypto_op_cookie *crypto_op_cookie;
+ uint16_t flags = VRING_DESC_F_NEXT;
+
+ if (unlikely(txvq->vq_free_cnt == 0))
+ return -ENOSPC;
+ if (unlikely(txvq->vq_free_cnt < needed))
+ return -EMSGSIZE;
+ head_idx = txvq->vq_desc_head_idx;
+ if (unlikely(head_idx >= txvq->vq_nentries))
+ return -EFAULT;
+
+ dxp = &txvq->vq_descx[head_idx];
+
+ if (rte_mempool_get(txvq->mpool, &dxp->cookie)) {
+ VIRTIO_CRYPTO_TX_LOG_ERR("can not get cookie");
+ return -EFAULT;
+ }
+ crypto_op_cookie = dxp->cookie;
+ op_data_req_phys_addr = rte_mempool_virt2iova(crypto_op_cookie);
+ op_data_req = (struct virtio_crypto_op_data_req *)crypto_op_cookie;
+ if (virtqueue_crypto_asym_pkt_header_arrange(cop, op_data_req, session))
+ return -EFAULT;
+
+ /* status is initialized to VIRTIO_CRYPTO_ERR */
+ ((struct virtio_crypto_inhdr *)
+ ((uint8_t *)op_data_req + req_data_len))->status =
+ VIRTIO_CRYPTO_ERR;
+
+ desc = &txvq->vq_packed.ring.desc[txvq->vq_desc_head_idx];
+ needed = 4;
+ flags |= txvq->vq_packed.cached_flags;
+
+ start_dp = desc;
+ idx = 0;
+
+ /* packed vring: first part, virtio_crypto_op_data_req */
+ desc[idx].addr = op_data_req_phys_addr;
+ desc[idx].len = sizeof(struct virtio_crypto_op_data_req);
+ desc[idx++].flags = flags;
+
+ if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_SIGN) {
+ /* packed vring: src data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->message, asym_op->rsa.message.data,
+ asym_op->rsa.message.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.sign.length > VIRTIO_CRYPTO_MAX_SIGN_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, sign);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_VERIFY) {
+ /* packed vring: src data */
+ if (asym_op->rsa.sign.length > VIRTIO_CRYPTO_MAX_SIGN_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->sign, asym_op->rsa.sign.data,
+ asym_op->rsa.sign.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, sign);
+ desc[idx].len = asym_op->rsa.sign.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_ENCRYPT) {
+ /* packed vring: src data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->message, asym_op->rsa.message.data,
+ asym_op->rsa.message.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.cipher.length > VIRTIO_CRYPTO_MAX_CIPHER_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, cipher);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else if (asym_op->rsa.op_type == RTE_CRYPTO_ASYM_OP_DECRYPT) {
+ /* packed vring: src data */
+ if (asym_op->rsa.cipher.length > VIRTIO_CRYPTO_MAX_CIPHER_SIZE)
+ return -ENOMEM;
+ memcpy(crypto_op_cookie->cipher, asym_op->rsa.cipher.data,
+ asym_op->rsa.cipher.length);
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, cipher);
+ desc[idx].len = asym_op->rsa.cipher.length;
+ desc[idx++].flags = flags;
+
+ /* packed vring: dst data */
+ if (asym_op->rsa.message.length > VIRTIO_CRYPTO_MAX_MSG_SIZE)
+ return -ENOMEM;
+ desc[idx].addr = op_data_req_phys_addr +
+ offsetof(struct virtio_crypto_op_cookie, message);
+ desc[idx].len = asym_op->rsa.message.length;
+ desc[idx++].flags = flags | VRING_DESC_F_WRITE;
+ } else {
+ VIRTIO_CRYPTO_TX_LOG_ERR("Invalid asym op");
+ return -EINVAL;
+ }
+
+ /* packed vring: last part, status returned */
+ desc[idx].addr = op_data_req_phys_addr + req_data_len;
+ desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+ desc[idx++].flags = txvq->vq_packed.cached_flags | VRING_DESC_F_WRITE;
+
+ num_entry = idx;
+ txvq->vq_avail_idx += num_entry;
+ if (txvq->vq_avail_idx >= txvq->vq_nentries) {
+ txvq->vq_avail_idx -= txvq->vq_nentries;
+ txvq->vq_packed.cached_flags ^= VRING_PACKED_DESC_F_AVAIL_USED;
+ }
+
+ /* save the infos to use when receiving packets */
+ dxp->crypto_op = (void *)cop;
+ dxp->ndescs = needed;
+
+ txvq->vq_desc_head_idx = (txvq->vq_desc_head_idx + idx) & (txvq->vq_nentries - 1);
+ if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+ txvq->vq_desc_tail_idx = idx;
+ txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+ virtqueue_store_flags_packed(&start_dp[0],
+ start_dp[0].flags | flags,
+ txvq->hw->weak_barriers);
+ virtio_wmb(txvq->hw->weak_barriers);
+ return 0;
+}
+
+static inline int
+virtqueue_crypto_asym_enqueue_xmit(
+ struct virtqueue *txvq,
+ struct rte_crypto_op *cop)
+{
+ if (vtpci_with_packed_queue(txvq->hw))
+ return virtqueue_crypto_asym_enqueue_xmit_packed(txvq, cop);
+ else
+ return virtqueue_crypto_asym_enqueue_xmit_split(txvq, cop);
+}
+
static int
virtio_crypto_vring_start(struct virtqueue *vq)
{
@@ -595,19 +1008,22 @@ virtio_crypto_pkt_rx_burst(void *tx_queue, struct rte_crypto_op **rx_pkts,
struct virtqueue *txvq = tx_queue;
uint16_t nb_used, num, nb_rx;
- nb_used = VIRTQUEUE_NUSED(txvq);
-
- virtio_rmb();
-
- num = (uint16_t)(likely(nb_used <= nb_pkts) ? nb_used : nb_pkts);
- num = (uint16_t)(likely(num <= VIRTIO_MBUF_BURST_SZ)
- ? num : VIRTIO_MBUF_BURST_SZ);
+ virtio_rmb(0);
+ num = (uint16_t)(likely(nb_pkts <= VIRTIO_MBUF_BURST_SZ)
+ ? nb_pkts : VIRTIO_MBUF_BURST_SZ);
if (num == 0)
return 0;
- nb_rx = virtqueue_dequeue_burst_rx(txvq, rx_pkts, num);
- VIRTIO_CRYPTO_RX_LOG_DBG("used:%d dequeue:%d", nb_used, num);
+ if (likely(vtpci_with_packed_queue(txvq->hw))) {
+ nb_rx = virtqueue_dequeue_burst_rx_packed(txvq, rx_pkts, num);
+ } else {
+ nb_used = VIRTQUEUE_NUSED(txvq);
+ num = (uint16_t)(likely(num <= nb_used) ? num : nb_used);
+ nb_rx = virtqueue_dequeue_burst_rx(txvq, rx_pkts, num);
+ }
+
+	VIRTIO_CRYPTO_RX_LOG_DBG("dequeued:%d requested:%d", nb_rx, num);
return nb_rx;
}
@@ -683,6 +1099,12 @@ virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
}
if (likely(nb_tx)) {
+ if (vtpci_with_packed_queue(txvq->hw)) {
+ virtqueue_notify(txvq);
+ VIRTIO_CRYPTO_TX_LOG_DBG("Notified backend after xmit");
+ return nb_tx;
+ }
+
vq_update_avail_idx(txvq);
if (unlikely(virtqueue_kick_prepare(txvq))) {
diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c
index af7f121f67..061aa09dbe 100644
--- a/drivers/crypto/virtio/virtqueue.c
+++ b/drivers/crypto/virtio/virtqueue.c
@@ -12,8 +12,23 @@
#include "virtio_cryptodev.h"
#include "virtqueue.h"
-void
-virtqueue_disable_intr(struct virtqueue *vq)
+static inline void
+virtqueue_disable_intr_packed(struct virtqueue *vq)
+{
+ /*
+ * Set RING_EVENT_FLAGS_DISABLE to hint host
+ * not to interrupt when it consumes packets
+ * Note: this is only considered a hint to the host
+ */
+ if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) {
+ vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE;
+ vq->vq_packed.ring.driver->desc_event_flags =
+ vq->vq_packed.event_flags_shadow;
+ }
+}
+
+static inline void
+virtqueue_disable_intr_split(struct virtqueue *vq)
{
/*
* Set VRING_AVAIL_F_NO_INTERRUPT to hint host
@@ -23,6 +38,15 @@ virtqueue_disable_intr(struct virtqueue *vq)
vq->vq_split.ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
}
+void
+virtqueue_disable_intr(struct virtqueue *vq)
+{
+ if (vtpci_with_packed_queue(vq->hw))
+ virtqueue_disable_intr_packed(vq);
+ else
+ virtqueue_disable_intr_split(vq);
+}
+
void
virtqueue_detatch_unused(struct virtqueue *vq)
{
@@ -49,7 +73,6 @@ static void
virtio_init_vring(struct virtqueue *vq)
{
uint8_t *ring_mem = vq->vq_ring_virt_mem;
- struct vring *vr = &vq->vq_split.ring;
int size = vq->vq_nentries;
PMD_INIT_FUNC_TRACE();
@@ -62,10 +85,16 @@ virtio_init_vring(struct virtqueue *vq)
vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
vq->vq_free_cnt = vq->vq_nentries;
memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
-
- vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
- vring_desc_init_split(vr->desc, size);
-
+ if (vtpci_with_packed_queue(vq->hw)) {
+ vring_init_packed(&vq->vq_packed.ring, ring_mem, vq->vq_ring_mem,
+ VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_packed(vq, size);
+ } else {
+ struct vring *vr = &vq->vq_split.ring;
+
+ vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+ vring_desc_init_split(vr->desc, size);
+ }
/*
* Disable device(host) interrupting guest
*/
@@ -171,11 +200,16 @@ virtcrypto_queue_alloc(struct virtio_crypto_hw *hw, uint16_t index, uint16_t num
vq->hw = hw;
vq->vq_queue_index = index;
vq->vq_nentries = num;
+ if (vtpci_with_packed_queue(hw)) {
+ vq->vq_packed.used_wrap_counter = 1;
+ vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
+ vq->vq_packed.event_flags_shadow = 0;
+ }
/*
* Reserve a memzone for vring elements
*/
- size = vring_size(num, VIRTIO_PCI_VRING_ALIGN);
+ size = vring_size(hw, num, VIRTIO_PCI_VRING_ALIGN);
vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
index 9191d1f732..97a3ace48c 100644
--- a/drivers/crypto/virtio/virtqueue.h
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -28,9 +28,78 @@ struct rte_mbuf;
* sufficient.
*
*/
-#define virtio_mb() rte_smp_mb()
-#define virtio_rmb() rte_smp_rmb()
-#define virtio_wmb() rte_smp_wmb()
+static inline void
+virtio_mb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_seq_cst);
+ else
+ rte_mb();
+}
+
+static inline void
+virtio_rmb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_acquire);
+ else
+ rte_io_rmb();
+}
+
+static inline void
+virtio_wmb(uint8_t weak_barriers)
+{
+ if (weak_barriers)
+ rte_atomic_thread_fence(rte_memory_order_release);
+ else
+ rte_io_wmb();
+}
+
+static inline uint16_t
+virtqueue_fetch_flags_packed(struct vring_packed_desc *dp,
+ uint8_t weak_barriers)
+{
+ uint16_t flags;
+
+ if (weak_barriers) {
+/* x86 prefers using rte_io_rmb over rte_atomic_load_explicit as it reports
+ * slightly better perf (~1.5%), from the branch saved by the compiler.
+ * The if and else branches are identical on all platforms except Arm.
+ */
+#ifdef RTE_ARCH_ARM
+ flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire);
+#else
+ flags = dp->flags;
+ rte_io_rmb();
+#endif
+ } else {
+ flags = dp->flags;
+ rte_io_rmb();
+ }
+
+ return flags;
+}
+
+static inline void
+virtqueue_store_flags_packed(struct vring_packed_desc *dp,
+ uint16_t flags, uint8_t weak_barriers)
+{
+ if (weak_barriers) {
+/* x86 prefers using rte_io_wmb over rte_atomic_store_explicit as it reports
+ * slightly better perf (~1.5%), from the branch saved by the compiler.
+ * The if and else branches are identical on all platforms except Arm.
+ */
+#ifdef RTE_ARCH_ARM
+ rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release);
+#else
+ rte_io_wmb();
+ dp->flags = flags;
+#endif
+ } else {
+ rte_io_wmb();
+ dp->flags = flags;
+ }
+}
#define VIRTQUEUE_MAX_NAME_SZ 32
@@ -62,7 +131,16 @@ struct virtqueue {
/**< vring keeping desc, used and avail */
struct vring ring;
} vq_split;
+
+ struct {
+ /**< vring keeping descs and events */
+ struct vring_packed ring;
+ bool used_wrap_counter;
+ uint16_t cached_flags; /**< cached flags for descs */
+ uint16_t event_flags_shadow;
+ } vq_packed;
};
+
union {
struct virtcrypto_data dq;
struct virtcrypto_ctl cq;
@@ -134,7 +212,7 @@ virtqueue_full(const struct virtqueue *vq)
static inline void
vq_update_avail_idx(struct virtqueue *vq)
{
- virtio_wmb();
+ virtio_wmb(0);
vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
}
@@ -172,6 +250,30 @@ virtqueue_notify(struct virtqueue *vq)
VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq);
}
+static inline int
+desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
+{
+ uint16_t used, avail, flags;
+
+ flags = virtqueue_fetch_flags_packed(desc, vq->hw->weak_barriers);
+ used = !!(flags & VRING_PACKED_DESC_F_USED);
+ avail = !!(flags & VRING_PACKED_DESC_F_AVAIL);
+
+ return avail == used && used == vq->vq_packed.used_wrap_counter;
+}
+
+static inline void
+vring_desc_init_packed(struct virtqueue *vq, int n)
+{
+ int i;
+ for (i = 0; i < n - 1; i++) {
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = i + 1;
+ }
+ vq->vq_packed.ring.desc[i].id = i;
+ vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
/* Chain all the descriptors in the ring with an END */
static inline void
vring_desc_init_split(struct vring_desc *dp, uint16_t n)
@@ -208,7 +310,7 @@ virtqueue_nused(const struct virtqueue *vq)
*/
#ifdef RTE_ARCH_X86_64
idx = vq->vq_split.ring.used->idx;
- virtio_rmb();
+ virtio_rmb(0);
#else
idx = rte_atomic_load_explicit(&(vq)->vq_split.ring.used->idx,
rte_memory_order_acquire);
@@ -223,7 +325,7 @@ virtqueue_nused(const struct virtqueue *vq)
/**
* Dump virtqueue internal structures, for debug purpose only.
*/
-#define VIRTQUEUE_DUMP(vq) do { \
+#define VIRTQUEUE_SPLIT_DUMP(vq) do { \
uint16_t used_idx, nused; \
used_idx = (vq)->vq_split.ring.used->idx; \
nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
@@ -237,4 +339,24 @@ virtqueue_nused(const struct virtqueue *vq)
(vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
} while (0)
+#define VIRTQUEUE_PACKED_DUMP(vq) do { \
+ uint16_t nused; \
+ nused = (vq)->vq_nentries - (vq)->vq_free_cnt; \
+ VIRTIO_CRYPTO_INIT_LOG_DBG(\
+ "VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
+ " avail_idx=%d; used_cons_idx=%d;" \
+		" cached_flags=0x%x; used_wrap_counter=%d", \
+ (vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
+ (vq)->vq_desc_head_idx, (vq)->vq_avail_idx, \
+ (vq)->vq_used_cons_idx, (vq)->vq_packed.cached_flags, \
+ (vq)->vq_packed.used_wrap_counter); \
+} while (0)
+
+#define VIRTQUEUE_DUMP(vq) do { \
+ if (vtpci_with_packed_queue((vq)->hw)) \
+ VIRTQUEUE_PACKED_DUMP(vq); \
+ else \
+ VIRTQUEUE_SPLIT_DUMP(vq); \
+} while (0)
+
#endif /* _VIRTQUEUE_H_ */
--
2.25.1
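One ordering detail worth calling out in this patch's enqueue paths: every descriptor in a chain is filled first, and only then is the head descriptor's flags word stored through virtqueue_store_flags_packed() with release semantics, so the device can never observe the head as AVAIL before the rest of the chain is visible. A self-contained sketch of that publish order (struct and helper names are stand-ins, not the driver's):

#include <stdint.h>

struct pdesc { uint64_t addr; uint32_t len; uint16_t id; uint16_t flags; };

/* stand-in for virtqueue_store_flags_packed() on weak-barrier CPUs */
static void
store_flags_release(struct pdesc *d, uint16_t flags)
{
	__atomic_store_n(&d->flags, flags, __ATOMIC_RELEASE);
}

static void
publish_chain(struct pdesc *desc, uint16_t head, uint16_t n, uint16_t avail_flags)
{
	uint16_t i;

	for (i = 1; i < n; i++)			/* body descriptors first */
		desc[head + i].flags = avail_flags;

	/* head flags last: the release store keeps the device from
	 * seeing AVAIL on the head before the chain contents */
	store_flags_release(&desc[head], avail_flags);
}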
^ permalink raw reply [flat|nested] 58+ messages in thread
* [v3 4/6] crypto/virtio: add vDPA backend
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
` (2 preceding siblings ...)
2025-02-21 17:41 ` [v3 3/6] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
@ 2025-02-21 17:41 ` Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 5/6] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 6/6] test/crypto: add tests for virtio user PMD Gowrishankar Muthukrishnan
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:41 UTC (permalink / raw)
To: dev, Jay Zhou; +Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
Add a vDPA backend to virtio_user crypto, so the PMD can drive vhost-vDPA crypto devices from userspace.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
drivers/crypto/virtio/meson.build | 7 +
drivers/crypto/virtio/virtio_cryptodev.c | 57 +-
drivers/crypto/virtio/virtio_cryptodev.h | 3 +
drivers/crypto/virtio/virtio_logs.h | 6 +-
drivers/crypto/virtio/virtio_pci.h | 7 +
drivers/crypto/virtio/virtio_ring.h | 6 -
drivers/crypto/virtio/virtio_user/vhost.h | 90 ++
.../crypto/virtio/virtio_user/vhost_vdpa.c | 710 ++++++++++++++++
.../virtio/virtio_user/virtio_user_dev.c | 767 ++++++++++++++++++
.../virtio/virtio_user/virtio_user_dev.h | 85 ++
drivers/crypto/virtio/virtio_user_cryptodev.c | 575 +++++++++++++
11 files changed, 2283 insertions(+), 30 deletions(-)
create mode 100644 drivers/crypto/virtio/virtio_user/vhost.h
create mode 100644 drivers/crypto/virtio/virtio_user/vhost_vdpa.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.c
create mode 100644 drivers/crypto/virtio/virtio_user/virtio_user_dev.h
create mode 100644 drivers/crypto/virtio/virtio_user_cryptodev.c
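The patch abstracts the kernel interface behind the virtio_user_backend_ops table declared in vhost.h below, so the vhost-vDPA implementation in vhost_vdpa.c is just one provider and other backends can be slotted in later. Dispatch through such a table typically looks like the following sketch (the sequencing and error handling here are illustrative, not lifted from the patch):

static int
backend_start(struct virtio_user_dev *dev,
		const struct virtio_user_backend_ops *ops, uint64_t features)
{
	if (ops->set_owner(dev) < 0)
		return -1;
	if (ops->set_features(dev, features) < 0)
		return -1;
	return ops->set_status(dev, 0x04 /* VIRTIO_CONFIG_STATUS_DRIVER_OK */);
}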
diff --git a/drivers/crypto/virtio/meson.build b/drivers/crypto/virtio/meson.build
index d2c3b3ad07..3763e86746 100644
--- a/drivers/crypto/virtio/meson.build
+++ b/drivers/crypto/virtio/meson.build
@@ -16,3 +16,10 @@ sources = files(
'virtio_rxtx.c',
'virtqueue.c',
)
+
+if is_linux
+ sources += files('virtio_user_cryptodev.c',
+ 'virtio_user/vhost_vdpa.c',
+ 'virtio_user/virtio_user_dev.c')
+ deps += ['bus_vdev']
+endif
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 92fea557ab..bc737f1e68 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -544,24 +544,12 @@ virtio_crypto_init_device(struct rte_cryptodev *cryptodev,
return 0;
}
-/*
- * This function is based on probe() function
- * It returns 0 on success.
- */
-static int
-crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
- struct rte_cryptodev_pmd_init_params *init_params)
+int
+crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
+ struct rte_pci_device *pci_dev)
{
- struct rte_cryptodev *cryptodev;
struct virtio_crypto_hw *hw;
- PMD_INIT_FUNC_TRACE();
-
- cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
- init_params);
- if (cryptodev == NULL)
- return -ENODEV;
-
cryptodev->driver_id = cryptodev_virtio_driver_id;
cryptodev->dev_ops = &virtio_crypto_dev_ops;
@@ -578,16 +566,41 @@ crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
hw->dev_id = cryptodev->data->dev_id;
hw->virtio_dev_capabilities = virtio_capabilities;
- VIRTIO_CRYPTO_INIT_LOG_DBG("dev %d vendorID=0x%x deviceID=0x%x",
- cryptodev->data->dev_id, pci_dev->id.vendor_id,
- pci_dev->id.device_id);
+ if (pci_dev) {
+ /* pci device init */
+ VIRTIO_CRYPTO_INIT_LOG_DBG("dev %d vendorID=0x%x deviceID=0x%x",
+ cryptodev->data->dev_id, pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
- /* pci device init */
- if (vtpci_cryptodev_init(pci_dev, hw))
+ if (vtpci_cryptodev_init(pci_dev, hw))
+ return -1;
+ }
+
+ if (virtio_crypto_init_device(cryptodev, features) < 0)
return -1;
- if (virtio_crypto_init_device(cryptodev,
- VIRTIO_CRYPTO_PMD_GUEST_FEATURES) < 0)
+ return 0;
+}
+
+/*
+ * This function is based on probe() function
+ * It returns 0 on success.
+ */
+static int
+crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
+ struct rte_cryptodev_pmd_init_params *init_params)
+{
+ struct rte_cryptodev *cryptodev;
+
+ PMD_INIT_FUNC_TRACE();
+
+ cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
+ init_params);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ if (crypto_virtio_dev_init(cryptodev, VIRTIO_CRYPTO_PMD_GUEST_FEATURES,
+ pci_dev) < 0)
return -1;
rte_cryptodev_pmd_probing_finish(cryptodev);
diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
index f8498246e2..fad73d54a8 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.h
+++ b/drivers/crypto/virtio/virtio_cryptodev.h
@@ -76,4 +76,7 @@ uint16_t virtio_crypto_pkt_rx_burst(void *tx_queue,
struct rte_crypto_op **tx_pkts,
uint16_t nb_pkts);
+int crypto_virtio_dev_init(struct rte_cryptodev *cryptodev, uint64_t features,
+ struct rte_pci_device *pci_dev);
+
#endif /* _VIRTIO_CRYPTODEV_H_ */
diff --git a/drivers/crypto/virtio/virtio_logs.h b/drivers/crypto/virtio/virtio_logs.h
index 988514919f..1cc51f7990 100644
--- a/drivers/crypto/virtio/virtio_logs.h
+++ b/drivers/crypto/virtio/virtio_logs.h
@@ -15,8 +15,10 @@ extern int virtio_crypto_logtype_init;
#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
-extern int virtio_crypto_logtype_init;
-#define RTE_LOGTYPE_VIRTIO_CRYPTO_INIT virtio_crypto_logtype_init
+extern int virtio_crypto_logtype_driver;
+#define RTE_LOGTYPE_VIRTIO_CRYPTO_DRIVER virtio_crypto_logtype_driver
+#define PMD_DRV_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, VIRTIO_CRYPTO_DRIVER, "%s(): ", __func__, __VA_ARGS__)
#define VIRTIO_CRYPTO_INIT_LOG_IMPL(level, ...) \
RTE_LOG_LINE_PREFIX(level, VIRTIO_CRYPTO_INIT, "%s(): ", __func__, __VA_ARGS__)
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
index 79945cb88e..c75777e005 100644
--- a/drivers/crypto/virtio/virtio_pci.h
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -20,6 +20,9 @@ struct virtqueue;
#define VIRTIO_CRYPTO_PCI_VENDORID 0x1AF4
#define VIRTIO_CRYPTO_PCI_DEVICEID 0x1054
+/* VirtIO device IDs. */
+#define VIRTIO_ID_CRYPTO 20
+
/* VirtIO ABI version, this must match exactly. */
#define VIRTIO_PCI_ABI_VERSION 0
@@ -56,8 +59,12 @@ struct virtqueue;
#define VIRTIO_CONFIG_STATUS_DRIVER 0x02
#define VIRTIO_CONFIG_STATUS_DRIVER_OK 0x04
#define VIRTIO_CONFIG_STATUS_FEATURES_OK 0x08
+#define VIRTIO_CONFIG_STATUS_DEV_NEED_RESET 0x40
#define VIRTIO_CONFIG_STATUS_FAILED 0x80
+/* The alignment to use between consumer and producer parts of vring. */
+#define VIRTIO_VRING_ALIGN 4096
+
/*
* Each virtqueue indirect descriptor list must be physically contiguous.
* To allow us to malloc(9) each list individually, limit the number
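Not part of the patch: a sketch of how the VIRTIO_VRING_ALIGN constant added
above is typically applied when sizing a split virtqueue, where the used ring
must start on the next 4 KiB boundary. The helper name is illustrative; the
vring structure definitions come from virtio_ring.h:

#include <stdint.h>
#include <rte_common.h>

static inline size_t
example_split_ring_bytes(uint16_t vq_size)
{
	size_t sz;

	/* descriptor table followed by the avail ring */
	sz = vq_size * sizeof(struct vring_desc);
	sz += sizeof(struct vring_avail) + vq_size * sizeof(uint16_t);
	/* the used ring starts on the next VIRTIO_VRING_ALIGN boundary */
	sz = RTE_ALIGN_CEIL(sz, VIRTIO_VRING_ALIGN);
	sz += sizeof(struct vring_used) + vq_size * sizeof(struct vring_used_elem);
	return sz;
}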
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
index c74d1172b7..4b418f6e60 100644
--- a/drivers/crypto/virtio/virtio_ring.h
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -181,12 +181,6 @@ vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova,
sizeof(struct vring_packed_desc_event)), align);
}
-static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p, unsigned long align)
-{
- vring_init_split(vr, p, 0, align, num);
-}
-
/*
* The following is used with VIRTIO_RING_F_EVENT_IDX.
* Assuming a given event_idx value from the other size, if we have
diff --git a/drivers/crypto/virtio/virtio_user/vhost.h b/drivers/crypto/virtio/virtio_user/vhost.h
new file mode 100644
index 0000000000..29cc1a14d4
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/vhost.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#ifndef _VIRTIO_USER_VHOST_H
+#define _VIRTIO_USER_VHOST_H
+
+#include <stdint.h>
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#include <rte_errno.h>
+
+#include "../virtio_logs.h"
+
+struct vhost_vring_state {
+ unsigned int index;
+ unsigned int num;
+};
+
+struct vhost_vring_file {
+ unsigned int index;
+ int fd;
+};
+
+struct vhost_vring_addr {
+ unsigned int index;
+ /* Option flags. */
+ unsigned int flags;
+ /* Flag values: */
+ /* Whether log address is valid. If set enables logging. */
+#define VHOST_VRING_F_LOG 0
+
+ /* Start of array of descriptors (virtually contiguous) */
+ uint64_t desc_user_addr;
+ /* Used structure address. Must be 32 bit aligned */
+ uint64_t used_user_addr;
+ /* Available structure address. Must be 16 bit aligned */
+ uint64_t avail_user_addr;
+ /* Logging support. */
+ /* Log writes to used structure, at offset calculated from specified
+ * address. Address must be 32 bit aligned.
+ */
+ uint64_t log_guest_addr;
+};
+
+#ifndef VHOST_BACKEND_F_IOTLB_MSG_V2
+#define VHOST_BACKEND_F_IOTLB_MSG_V2 1
+#endif
+
+#ifndef VHOST_BACKEND_F_IOTLB_BATCH
+#define VHOST_BACKEND_F_IOTLB_BATCH 2
+#endif
+
+struct virtio_user_dev;
+
+struct virtio_user_backend_ops {
+ int (*setup)(struct virtio_user_dev *dev);
+ int (*destroy)(struct virtio_user_dev *dev);
+ int (*get_backend_features)(uint64_t *features);
+ int (*set_owner)(struct virtio_user_dev *dev);
+ int (*get_features)(struct virtio_user_dev *dev, uint64_t *features);
+ int (*set_features)(struct virtio_user_dev *dev, uint64_t features);
+ int (*set_memory_table)(struct virtio_user_dev *dev);
+ int (*set_vring_num)(struct virtio_user_dev *dev, struct vhost_vring_state *state);
+ int (*set_vring_base)(struct virtio_user_dev *dev, struct vhost_vring_state *state);
+ int (*get_vring_base)(struct virtio_user_dev *dev, struct vhost_vring_state *state);
+ int (*set_vring_call)(struct virtio_user_dev *dev, struct vhost_vring_file *file);
+ int (*set_vring_kick)(struct virtio_user_dev *dev, struct vhost_vring_file *file);
+ int (*set_vring_addr)(struct virtio_user_dev *dev, struct vhost_vring_addr *addr);
+ int (*get_status)(struct virtio_user_dev *dev, uint8_t *status);
+ int (*set_status)(struct virtio_user_dev *dev, uint8_t status);
+ int (*get_config)(struct virtio_user_dev *dev, uint8_t *data, uint32_t off, uint32_t len);
+ int (*set_config)(struct virtio_user_dev *dev, const uint8_t *data, uint32_t off,
+ uint32_t len);
+ int (*cvq_enable)(struct virtio_user_dev *dev, int enable);
+ int (*enable_qp)(struct virtio_user_dev *dev, uint16_t pair_idx, int enable);
+ int (*dma_map)(struct virtio_user_dev *dev, void *addr, uint64_t iova, size_t len);
+ int (*dma_unmap)(struct virtio_user_dev *dev, void *addr, uint64_t iova, size_t len);
+ int (*update_link_state)(struct virtio_user_dev *dev);
+ int (*server_disconnect)(struct virtio_user_dev *dev);
+ int (*server_reconnect)(struct virtio_user_dev *dev);
+ int (*get_intr_fd)(struct virtio_user_dev *dev);
+ int (*map_notification_area)(struct virtio_user_dev *dev);
+ int (*unmap_notification_area)(struct virtio_user_dev *dev);
+};
+
+extern struct virtio_user_backend_ops virtio_crypto_ops_vdpa;
+
+#endif
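Not part of the patch: the ops structure above acts as a backend vtable, so
the virtio-user frontend never calls a vhost-vDPA function directly. A
minimal dispatch sketch (the helper is hypothetical):

static int
example_enable_all_qps(struct virtio_user_dev *dev, uint16_t nb_qps)
{
	uint16_t i;

	for (i = 0; i < nb_qps; i++) {
		/* resolves to the backend implementation, e.g. the
		 * vhost-vDPA VHOST_VDPA_SET_VRING_ENABLE ioctl
		 */
		if (dev->ops->enable_qp(dev, i, 1) < 0)
			return -1;
	}
	return 0;
}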
diff --git a/drivers/crypto/virtio/virtio_user/vhost_vdpa.c b/drivers/crypto/virtio/virtio_user/vhost_vdpa.c
new file mode 100644
index 0000000000..b5839875e6
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/vhost_vdpa.c
@@ -0,0 +1,710 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#include <sys/ioctl.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <fcntl.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#include <rte_memory.h>
+
+#include "vhost.h"
+#include "virtio_user_dev.h"
+#include "../virtio_pci.h"
+
+struct vhost_vdpa_data {
+ int vhostfd;
+ uint64_t protocol_features;
+};
+
+#define VHOST_VDPA_SUPPORTED_BACKEND_FEATURES \
+ (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 | \
+ 1ULL << VHOST_BACKEND_F_IOTLB_BATCH)
+
+/* vhost kernel & vdpa ioctls */
+#define VHOST_VIRTIO 0xAF
+#define VHOST_GET_FEATURES _IOR(VHOST_VIRTIO, 0x00, __u64)
+#define VHOST_SET_FEATURES _IOW(VHOST_VIRTIO, 0x00, __u64)
+#define VHOST_SET_OWNER _IO(VHOST_VIRTIO, 0x01)
+#define VHOST_RESET_OWNER _IO(VHOST_VIRTIO, 0x02)
+#define VHOST_SET_LOG_BASE _IOW(VHOST_VIRTIO, 0x04, __u64)
+#define VHOST_SET_LOG_FD _IOW(VHOST_VIRTIO, 0x07, int)
+#define VHOST_SET_VRING_NUM _IOW(VHOST_VIRTIO, 0x10, struct vhost_vring_state)
+#define VHOST_SET_VRING_ADDR _IOW(VHOST_VIRTIO, 0x11, struct vhost_vring_addr)
+#define VHOST_SET_VRING_BASE _IOW(VHOST_VIRTIO, 0x12, struct vhost_vring_state)
+#define VHOST_GET_VRING_BASE _IOWR(VHOST_VIRTIO, 0x12, struct vhost_vring_state)
+#define VHOST_SET_VRING_KICK _IOW(VHOST_VIRTIO, 0x20, struct vhost_vring_file)
+#define VHOST_SET_VRING_CALL _IOW(VHOST_VIRTIO, 0x21, struct vhost_vring_file)
+#define VHOST_SET_VRING_ERR _IOW(VHOST_VIRTIO, 0x22, struct vhost_vring_file)
+#define VHOST_NET_SET_BACKEND _IOW(VHOST_VIRTIO, 0x30, struct vhost_vring_file)
+#define VHOST_VDPA_GET_DEVICE_ID _IOR(VHOST_VIRTIO, 0x70, __u32)
+#define VHOST_VDPA_GET_STATUS _IOR(VHOST_VIRTIO, 0x71, __u8)
+#define VHOST_VDPA_SET_STATUS _IOW(VHOST_VIRTIO, 0x72, __u8)
+#define VHOST_VDPA_GET_CONFIG _IOR(VHOST_VIRTIO, 0x73, struct vhost_vdpa_config)
+#define VHOST_VDPA_SET_CONFIG _IOW(VHOST_VIRTIO, 0x74, struct vhost_vdpa_config)
+#define VHOST_VDPA_SET_VRING_ENABLE _IOW(VHOST_VIRTIO, 0x75, struct vhost_vring_state)
+#define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
+#define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
+
+/* no alignment requirement */
+struct vhost_iotlb_msg {
+ uint64_t iova;
+ uint64_t size;
+ uint64_t uaddr;
+#define VHOST_ACCESS_RO 0x1
+#define VHOST_ACCESS_WO 0x2
+#define VHOST_ACCESS_RW 0x3
+ uint8_t perm;
+#define VHOST_IOTLB_MISS 1
+#define VHOST_IOTLB_UPDATE 2
+#define VHOST_IOTLB_INVALIDATE 3
+#define VHOST_IOTLB_ACCESS_FAIL 4
+#define VHOST_IOTLB_BATCH_BEGIN 5
+#define VHOST_IOTLB_BATCH_END 6
+ uint8_t type;
+};
+
+#define VHOST_IOTLB_MSG_V2 0x2
+
+struct vhost_vdpa_config {
+ uint32_t off;
+ uint32_t len;
+ uint8_t buf[];
+};
+
+struct vhost_msg {
+ uint32_t type;
+ uint32_t reserved;
+ union {
+ struct vhost_iotlb_msg iotlb;
+ uint8_t padding[64];
+ };
+};
+
+static int
+vhost_vdpa_ioctl(int fd, uint64_t request, void *arg)
+{
+ int ret;
+
+ ret = ioctl(fd, request, arg);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Vhost-vDPA ioctl %"PRIu64" failed (%s)",
+ request, strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_set_owner(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_OWNER, NULL);
+}
+
+static int
+vhost_vdpa_get_protocol_features(struct virtio_user_dev *dev, uint64_t *features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_BACKEND_FEATURES, features);
+}
+
+static int
+vhost_vdpa_set_protocol_features(struct virtio_user_dev *dev, uint64_t features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_BACKEND_FEATURES, &features);
+}
+
+static int
+vhost_vdpa_get_features(struct virtio_user_dev *dev, uint64_t *features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ int ret;
+
+ ret = vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_FEATURES, features);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to get features");
+ return -1;
+ }
+
+ /* Negotiated vDPA backend features */
+ ret = vhost_vdpa_get_protocol_features(dev, &data->protocol_features);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to get backend features");
+ return -1;
+ }
+
+ data->protocol_features &= VHOST_VDPA_SUPPORTED_BACKEND_FEATURES;
+
+ ret = vhost_vdpa_set_protocol_features(dev, data->protocol_features);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to set backend features");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_set_features(struct virtio_user_dev *dev, uint64_t features)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+	/* WORKAROUND: vhost-vDPA expects VIRTIO_F_IOMMU_PLATFORM to be negotiated */
+ features |= 1ULL << VIRTIO_F_IOMMU_PLATFORM;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_FEATURES, &features);
+}
+
+static int
+vhost_vdpa_iotlb_batch_begin(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ struct vhost_msg msg = {};
+
+ if (!(data->protocol_features & (1ULL << VHOST_BACKEND_F_IOTLB_BATCH)))
+ return 0;
+
+ if (!(data->protocol_features & (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2))) {
+ PMD_DRV_LOG(ERR, "IOTLB_MSG_V2 not supported by the backend.");
+ return -1;
+ }
+
+ msg.type = VHOST_IOTLB_MSG_V2;
+ msg.iotlb.type = VHOST_IOTLB_BATCH_BEGIN;
+
+ if (write(data->vhostfd, &msg, sizeof(msg)) != sizeof(msg)) {
+ PMD_DRV_LOG(ERR, "Failed to send IOTLB batch begin (%s)",
+ strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_iotlb_batch_end(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ struct vhost_msg msg = {};
+
+ if (!(data->protocol_features & (1ULL << VHOST_BACKEND_F_IOTLB_BATCH)))
+ return 0;
+
+ if (!(data->protocol_features & (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2))) {
+ PMD_DRV_LOG(ERR, "IOTLB_MSG_V2 not supported by the backend.");
+ return -1;
+ }
+
+ msg.type = VHOST_IOTLB_MSG_V2;
+ msg.iotlb.type = VHOST_IOTLB_BATCH_END;
+
+ if (write(data->vhostfd, &msg, sizeof(msg)) != sizeof(msg)) {
+ PMD_DRV_LOG(ERR, "Failed to send IOTLB batch end (%s)",
+ strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_dma_map(struct virtio_user_dev *dev, void *addr,
+ uint64_t iova, size_t len)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ struct vhost_msg msg = {};
+
+ if (!(data->protocol_features & (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2))) {
+ PMD_DRV_LOG(ERR, "IOTLB_MSG_V2 not supported by the backend.");
+ return -1;
+ }
+
+ msg.type = VHOST_IOTLB_MSG_V2;
+ msg.iotlb.type = VHOST_IOTLB_UPDATE;
+ msg.iotlb.iova = iova;
+ msg.iotlb.uaddr = (uint64_t)(uintptr_t)addr;
+ msg.iotlb.size = len;
+ msg.iotlb.perm = VHOST_ACCESS_RW;
+
+ PMD_DRV_LOG(DEBUG, "%s: iova: 0x%" PRIx64 ", addr: %p, len: 0x%zx",
+ __func__, iova, addr, len);
+
+ if (write(data->vhostfd, &msg, sizeof(msg)) != sizeof(msg)) {
+ PMD_DRV_LOG(ERR, "Failed to send IOTLB update (%s)",
+ strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_dma_unmap(struct virtio_user_dev *dev, __rte_unused void *addr,
+ uint64_t iova, size_t len)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ struct vhost_msg msg = {};
+
+ if (!(data->protocol_features & (1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2))) {
+ PMD_DRV_LOG(ERR, "IOTLB_MSG_V2 not supported by the backend.");
+ return -1;
+ }
+
+ msg.type = VHOST_IOTLB_MSG_V2;
+ msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
+ msg.iotlb.iova = iova;
+ msg.iotlb.size = len;
+
+ PMD_DRV_LOG(DEBUG, "%s: iova: 0x%" PRIx64 ", len: 0x%zx",
+ __func__, iova, len);
+
+ if (write(data->vhostfd, &msg, sizeof(msg)) != sizeof(msg)) {
+ PMD_DRV_LOG(ERR, "Failed to send IOTLB invalidate (%s)",
+ strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+vhost_vdpa_dma_map_batch(struct virtio_user_dev *dev, void *addr,
+ uint64_t iova, size_t len)
+{
+ int ret;
+
+ if (vhost_vdpa_iotlb_batch_begin(dev) < 0)
+ return -1;
+
+ ret = vhost_vdpa_dma_map(dev, addr, iova, len);
+
+ if (vhost_vdpa_iotlb_batch_end(dev) < 0)
+ return -1;
+
+ return ret;
+}
+
+static int
+vhost_vdpa_dma_unmap_batch(struct virtio_user_dev *dev, void *addr,
+ uint64_t iova, size_t len)
+{
+ int ret;
+
+ if (vhost_vdpa_iotlb_batch_begin(dev) < 0)
+ return -1;
+
+ ret = vhost_vdpa_dma_unmap(dev, addr, iova, len);
+
+ if (vhost_vdpa_iotlb_batch_end(dev) < 0)
+ return -1;
+
+ return ret;
+}
+
+static int
+vhost_vdpa_map_contig(const struct rte_memseg_list *msl,
+ const struct rte_memseg *ms, size_t len, void *arg)
+{
+ struct virtio_user_dev *dev = arg;
+
+ if (msl->external)
+ return 0;
+
+ return vhost_vdpa_dma_map(dev, ms->addr, ms->iova, len);
+}
+
+static int
+vhost_vdpa_map(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
+ void *arg)
+{
+ struct virtio_user_dev *dev = arg;
+
+ /* skip external memory that isn't a heap */
+ if (msl->external && !msl->heap)
+ return 0;
+
+ /* skip any segments with invalid IOVA addresses */
+ if (ms->iova == RTE_BAD_IOVA)
+ return 0;
+
+ /* if IOVA mode is VA, we've already mapped the internal segments */
+ if (!msl->external && rte_eal_iova_mode() == RTE_IOVA_VA)
+ return 0;
+
+ return vhost_vdpa_dma_map(dev, ms->addr, ms->iova, ms->len);
+}
+
+static int
+vhost_vdpa_set_memory_table(struct virtio_user_dev *dev)
+{
+ int ret;
+
+ if (vhost_vdpa_iotlb_batch_begin(dev) < 0)
+ return -1;
+
+ vhost_vdpa_dma_unmap(dev, NULL, 0, SIZE_MAX);
+
+ if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+ /* with IOVA as VA mode, we can get away with mapping contiguous
+ * chunks rather than going page-by-page.
+ */
+ ret = rte_memseg_contig_walk_thread_unsafe(
+ vhost_vdpa_map_contig, dev);
+ if (ret)
+ goto batch_end;
+ /* we have to continue the walk because we've skipped the
+	 * external segments during the contig walk above.
+ */
+ }
+ ret = rte_memseg_walk_thread_unsafe(vhost_vdpa_map, dev);
+
+batch_end:
+ if (vhost_vdpa_iotlb_batch_end(dev) < 0)
+ return -1;
+
+ return ret;
+}
+
+static int
+vhost_vdpa_set_vring_enable(struct virtio_user_dev *dev, struct vhost_vring_state *state)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_VDPA_SET_VRING_ENABLE, state);
+}
+
+static int
+vhost_vdpa_set_vring_num(struct virtio_user_dev *dev, struct vhost_vring_state *state)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_VRING_NUM, state);
+}
+
+static int
+vhost_vdpa_set_vring_base(struct virtio_user_dev *dev, struct vhost_vring_state *state)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_VRING_BASE, state);
+}
+
+static int
+vhost_vdpa_get_vring_base(struct virtio_user_dev *dev, struct vhost_vring_state *state)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_GET_VRING_BASE, state);
+}
+
+static int
+vhost_vdpa_set_vring_call(struct virtio_user_dev *dev, struct vhost_vring_file *file)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_VRING_CALL, file);
+}
+
+static int
+vhost_vdpa_set_vring_kick(struct virtio_user_dev *dev, struct vhost_vring_file *file)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_VRING_KICK, file);
+}
+
+static int
+vhost_vdpa_set_vring_addr(struct virtio_user_dev *dev, struct vhost_vring_addr *addr)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_SET_VRING_ADDR, addr);
+}
+
+static int
+vhost_vdpa_get_status(struct virtio_user_dev *dev, uint8_t *status)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_VDPA_GET_STATUS, status);
+}
+
+static int
+vhost_vdpa_set_status(struct virtio_user_dev *dev, uint8_t status)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ return vhost_vdpa_ioctl(data->vhostfd, VHOST_VDPA_SET_STATUS, &status);
+}
+
+static int
+vhost_vdpa_get_config(struct virtio_user_dev *dev, uint8_t *data, uint32_t off, uint32_t len)
+{
+ struct vhost_vdpa_data *vdpa_data = dev->backend_data;
+ struct vhost_vdpa_config *config;
+ int ret = 0;
+
+ config = malloc(sizeof(*config) + len);
+ if (!config) {
+ PMD_DRV_LOG(ERR, "Failed to allocate vDPA config data");
+ return -1;
+ }
+
+ config->off = off;
+ config->len = len;
+
+ ret = vhost_vdpa_ioctl(vdpa_data->vhostfd, VHOST_VDPA_GET_CONFIG, config);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to get vDPA config (offset 0x%x, len 0x%x)", off, len);
+ ret = -1;
+ goto out;
+ }
+
+ memcpy(data, config->buf, len);
+out:
+ free(config);
+
+ return ret;
+}
+
+static int
+vhost_vdpa_set_config(struct virtio_user_dev *dev, const uint8_t *data, uint32_t off, uint32_t len)
+{
+ struct vhost_vdpa_data *vdpa_data = dev->backend_data;
+ struct vhost_vdpa_config *config;
+ int ret = 0;
+
+ config = malloc(sizeof(*config) + len);
+ if (!config) {
+ PMD_DRV_LOG(ERR, "Failed to allocate vDPA config data");
+ return -1;
+ }
+
+ config->off = off;
+ config->len = len;
+
+ memcpy(config->buf, data, len);
+
+ ret = vhost_vdpa_ioctl(vdpa_data->vhostfd, VHOST_VDPA_SET_CONFIG, config);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to set vDPA config (offset 0x%x, len 0x%x)", off, len);
+ ret = -1;
+ }
+
+ free(config);
+
+ return ret;
+}
+
+/**
+ * Set up environment to talk with a vhost vdpa backend.
+ *
+ * @return
+ * - (-1) if fail to set up;
+ *   - (-1) on failure;
+ *   - 0 on success.
+static int
+vhost_vdpa_setup(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data;
+ uint32_t did = (uint32_t)-1;
+
+ data = malloc(sizeof(*data));
+ if (!data) {
+		PMD_DRV_LOG(ERR, "(%s) Failed to allocate backend data", dev->path);
+ return -1;
+ }
+
+ data->vhostfd = open(dev->path, O_RDWR);
+ if (data->vhostfd < 0) {
+ PMD_DRV_LOG(ERR, "Failed to open %s: %s",
+ dev->path, strerror(errno));
+ free(data);
+ return -1;
+ }
+
+ if (ioctl(data->vhostfd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 ||
+ did != VIRTIO_ID_CRYPTO) {
+ PMD_DRV_LOG(ERR, "Invalid vdpa device ID: %u", did);
+ close(data->vhostfd);
+ free(data);
+ return -1;
+ }
+
+ dev->backend_data = data;
+
+ return 0;
+}
+
+static int
+vhost_vdpa_destroy(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+
+ if (!data)
+ return 0;
+
+ close(data->vhostfd);
+
+ free(data);
+ dev->backend_data = NULL;
+
+ return 0;
+}
+
+static int
+vhost_vdpa_cvq_enable(struct virtio_user_dev *dev, int enable)
+{
+ struct vhost_vring_state state = {
+ .index = dev->max_queue_pairs,
+ .num = enable,
+ };
+
+ return vhost_vdpa_set_vring_enable(dev, &state);
+}
+
+static int
+vhost_vdpa_enable_queue_pair(struct virtio_user_dev *dev,
+ uint16_t pair_idx,
+ int enable)
+{
+ struct vhost_vring_state state = {
+ .index = pair_idx,
+ .num = enable,
+ };
+
+ if (dev->qp_enabled[pair_idx] == enable)
+ return 0;
+
+ if (vhost_vdpa_set_vring_enable(dev, &state))
+ return -1;
+
+ dev->qp_enabled[pair_idx] = enable;
+ return 0;
+}
+
+static int
+vhost_vdpa_get_backend_features(uint64_t *features)
+{
+ *features = 0;
+
+ return 0;
+}
+
+static int
+vhost_vdpa_update_link_state(struct virtio_user_dev *dev)
+{
+	/* TODO: workaround until there is a cleaner way to query the crypto device status */
+ dev->crypto_status = VIRTIO_CRYPTO_S_HW_READY;
+ return 0;
+}
+
+static int
+vhost_vdpa_get_intr_fd(struct virtio_user_dev *dev __rte_unused)
+{
+ /* No link state interrupt with Vhost-vDPA */
+ return -1;
+}
+
+static int
+vhost_vdpa_get_nr_vrings(struct virtio_user_dev *dev)
+{
+ int nr_vrings = dev->max_queue_pairs;
+
+ return nr_vrings;
+}
+
+static int
+vhost_vdpa_unmap_notification_area(struct virtio_user_dev *dev)
+{
+ int i, nr_vrings;
+
+	nr_vrings = vhost_vdpa_get_nr_vrings(dev);
+
+	/* CQ is another vring */
+	nr_vrings++;
+
+ for (i = 0; i < nr_vrings; i++) {
+ if (dev->notify_area[i])
+ munmap(dev->notify_area[i], getpagesize());
+ }
+ free(dev->notify_area);
+ dev->notify_area = NULL;
+
+ return 0;
+}
+
+static int
+vhost_vdpa_map_notification_area(struct virtio_user_dev *dev)
+{
+ struct vhost_vdpa_data *data = dev->backend_data;
+ int nr_vrings, i, page_size = getpagesize();
+ uint16_t **notify_area;
+
+ nr_vrings = vhost_vdpa_get_nr_vrings(dev);
+
+ /* CQ is another vring */
+ nr_vrings++;
+
+ notify_area = malloc(nr_vrings * sizeof(*notify_area));
+ if (!notify_area) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to allocate notify area array", dev->path);
+ return -1;
+ }
+
+ for (i = 0; i < nr_vrings; i++) {
+ notify_area[i] = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED | MAP_FILE,
+ data->vhostfd, i * page_size);
+ if (notify_area[i] == MAP_FAILED) {
+ PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d",
+ dev->path, i);
+ i--;
+ goto map_err;
+ }
+ }
+ dev->notify_area = notify_area;
+
+ return 0;
+
+map_err:
+ for (; i >= 0; i--)
+ munmap(notify_area[i], page_size);
+ free(notify_area);
+
+ return -1;
+}
+
+struct virtio_user_backend_ops virtio_crypto_ops_vdpa = {
+ .setup = vhost_vdpa_setup,
+ .destroy = vhost_vdpa_destroy,
+ .get_backend_features = vhost_vdpa_get_backend_features,
+ .set_owner = vhost_vdpa_set_owner,
+ .get_features = vhost_vdpa_get_features,
+ .set_features = vhost_vdpa_set_features,
+ .set_memory_table = vhost_vdpa_set_memory_table,
+ .set_vring_num = vhost_vdpa_set_vring_num,
+ .set_vring_base = vhost_vdpa_set_vring_base,
+ .get_vring_base = vhost_vdpa_get_vring_base,
+ .set_vring_call = vhost_vdpa_set_vring_call,
+ .set_vring_kick = vhost_vdpa_set_vring_kick,
+ .set_vring_addr = vhost_vdpa_set_vring_addr,
+ .get_status = vhost_vdpa_get_status,
+ .set_status = vhost_vdpa_set_status,
+ .get_config = vhost_vdpa_get_config,
+ .set_config = vhost_vdpa_set_config,
+ .cvq_enable = vhost_vdpa_cvq_enable,
+ .enable_qp = vhost_vdpa_enable_queue_pair,
+ .dma_map = vhost_vdpa_dma_map_batch,
+ .dma_unmap = vhost_vdpa_dma_unmap_batch,
+ .update_link_state = vhost_vdpa_update_link_state,
+ .get_intr_fd = vhost_vdpa_get_intr_fd,
+ .map_notification_area = vhost_vdpa_map_notification_area,
+ .unmap_notification_area = vhost_vdpa_unmap_notification_area,
+};
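Not part of the patch: a standalone sketch, using the ioctl definitions
above, of the initial handshake vhost_vdpa_setup() performs against a
vhost-vDPA character device. The function name and device path are
illustrative and error logging is elided:

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int
example_vdpa_open(const char *path)	/* e.g. "/dev/vhost-vdpa-0" */
{
	uint32_t did = 0;
	int fd;

	fd = open(path, O_RDWR);
	if (fd < 0)
		return -1;

	/* the backend must expose a virtio-crypto device (ID 20) */
	if (ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &did) < 0 ||
			did != VIRTIO_ID_CRYPTO) {
		close(fd);
		return -1;
	}

	/* claim exclusive ownership before any further setup */
	if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0) {
		close(fd);
		return -1;
	}

	return fd;
}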
diff --git a/drivers/crypto/virtio/virtio_user/virtio_user_dev.c b/drivers/crypto/virtio/virtio_user/virtio_user_dev.c
new file mode 100644
index 0000000000..248df11ccc
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/virtio_user_dev.c
@@ -0,0 +1,767 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <fcntl.h>
+#include <string.h>
+#include <errno.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <sys/eventfd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <pthread.h>
+
+#include <rte_alarm.h>
+#include <rte_string_fns.h>
+#include <rte_eal_memconfig.h>
+#include <rte_malloc.h>
+#include <rte_io.h>
+
+#include "vhost.h"
+#include "virtio_logs.h"
+#include "cryptodev_pmd.h"
+#include "virtio_crypto.h"
+#include "virtio_cvq.h"
+#include "virtio_user_dev.h"
+#include "virtqueue.h"
+
+#define VIRTIO_USER_MEM_EVENT_CLB_NAME "virtio_user_mem_event_clb"
+
+const char * const crypto_virtio_user_backend_strings[] = {
+ [VIRTIO_USER_BACKEND_UNKNOWN] = "VIRTIO_USER_BACKEND_UNKNOWN",
+ [VIRTIO_USER_BACKEND_VHOST_VDPA] = "VHOST_VDPA",
+};
+
+static int
+virtio_user_uninit_notify_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ if (dev->kickfds[queue_sel] >= 0) {
+ close(dev->kickfds[queue_sel]);
+ dev->kickfds[queue_sel] = -1;
+ }
+
+ if (dev->callfds[queue_sel] >= 0) {
+ close(dev->callfds[queue_sel]);
+ dev->callfds[queue_sel] = -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_init_notify_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+	/* An invalid fd could be used here, but some backends use the kickfd
+	 * and callfd as criteria to judge whether the device is alive, so
+	 * real eventfds are used.
+	 */
+ dev->callfds[queue_sel] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
+ if (dev->callfds[queue_sel] < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to setup callfd for queue %u: %s",
+ dev->path, queue_sel, strerror(errno));
+ return -1;
+ }
+ dev->kickfds[queue_sel] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
+ if (dev->kickfds[queue_sel] < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to setup kickfd for queue %u: %s",
+ dev->path, queue_sel, strerror(errno));
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_destroy_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ struct vhost_vring_state state;
+ int ret;
+
+ state.index = queue_sel;
+ ret = dev->ops->get_vring_base(dev, &state);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to destroy queue %u", dev->path, queue_sel);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_create_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+	/* Of all per-virtqueue messages, make sure VHOST_SET_VRING_CALL comes
+	 * first, because vhost depends on this message to allocate the
+	 * virtqueue pair.
+	 */
+ struct vhost_vring_file file;
+ int ret;
+
+ file.index = queue_sel;
+ file.fd = dev->callfds[queue_sel];
+ ret = dev->ops->set_vring_call(dev, &file);
+ if (ret < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to create queue %u", dev->path, queue_sel);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
+{
+ int ret;
+ struct vhost_vring_file file;
+ struct vhost_vring_state state;
+ struct vring *vring = &dev->vrings.split[queue_sel];
+ struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel];
+ uint64_t desc_addr, avail_addr, used_addr;
+ struct vhost_vring_addr addr = {
+ .index = queue_sel,
+ .log_guest_addr = 0,
+ .flags = 0, /* disable log */
+ };
+
+ if (queue_sel == dev->max_queue_pairs) {
+ if (!dev->scvq) {
+ PMD_INIT_LOG(ERR, "(%s) Shadow control queue expected but missing",
+ dev->path);
+ goto err;
+ }
+
+ /* Use shadow control queue information */
+ vring = &dev->scvq->vq_split.ring;
+ pq_vring = &dev->scvq->vq_packed.ring;
+ }
+
+ if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) {
+ desc_addr = pq_vring->desc_iova;
+ avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event),
+ VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
+ } else {
+ desc_addr = vring->desc_iova;
+ avail_addr = desc_addr + vring->num * sizeof(struct vring_desc);
+		used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+				ring[vring->num]), VIRTIO_VRING_ALIGN);
+
+ addr.desc_user_addr = desc_addr;
+ addr.avail_user_addr = avail_addr;
+ addr.used_user_addr = used_addr;
+ }
+
+ state.index = queue_sel;
+ state.num = vring->num;
+ ret = dev->ops->set_vring_num(dev, &state);
+ if (ret < 0)
+ goto err;
+
+ state.index = queue_sel;
+ state.num = 0; /* no reservation */
+ if (dev->features & (1ULL << VIRTIO_F_RING_PACKED))
+ state.num |= (1 << 15);
+ ret = dev->ops->set_vring_base(dev, &state);
+ if (ret < 0)
+ goto err;
+
+ ret = dev->ops->set_vring_addr(dev, &addr);
+ if (ret < 0)
+ goto err;
+
+	/* Of all per-virtqueue messages, make sure VHOST_USER_SET_VRING_KICK
+	 * comes last, because vhost depends on this message to judge whether
+	 * virtio is ready.
+	 */
+ file.index = queue_sel;
+ file.fd = dev->kickfds[queue_sel];
+ ret = dev->ops->set_vring_kick(dev, &file);
+ if (ret < 0)
+ goto err;
+
+ return 0;
+err:
+ PMD_INIT_LOG(ERR, "(%s) Failed to kick queue %u", dev->path, queue_sel);
+
+ return -1;
+}
+
+static int
+virtio_user_foreach_queue(struct virtio_user_dev *dev,
+ int (*fn)(struct virtio_user_dev *, uint32_t))
+{
+ uint32_t i, nr_vq;
+
+ nr_vq = dev->max_queue_pairs;
+
+ for (i = 0; i < nr_vq; i++)
+ if (fn(dev, i) < 0)
+ return -1;
+
+ return 0;
+}
+
+int
+crypto_virtio_user_dev_set_features(struct virtio_user_dev *dev)
+{
+ uint64_t features;
+ int ret = -1;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ /* Step 0: tell vhost to create queues */
+ if (virtio_user_foreach_queue(dev, virtio_user_create_queue) < 0)
+ goto error;
+
+ features = dev->features;
+
+ ret = dev->ops->set_features(dev, features);
+ if (ret < 0)
+ goto error;
+ PMD_DRV_LOG(INFO, "(%s) set features: 0x%" PRIx64, dev->path, features);
+error:
+ pthread_mutex_unlock(&dev->mutex);
+
+ return ret;
+}
+
+int
+crypto_virtio_user_start_device(struct virtio_user_dev *dev)
+{
+ int ret;
+
+ /*
+ * XXX workaround!
+ *
+ * We need to make sure that the locks will be
+ * taken in the correct order to avoid deadlocks.
+ *
+ * Before releasing this lock, this thread should
+ * not trigger any memory hotplug events.
+ *
+ * This is a temporary workaround, and should be
+ * replaced when we get proper supports from the
+ * memory subsystem in the future.
+ */
+ rte_mcfg_mem_read_lock();
+ pthread_mutex_lock(&dev->mutex);
+
+ /* Step 2: share memory regions */
+ ret = dev->ops->set_memory_table(dev);
+ if (ret < 0)
+ goto error;
+
+ /* Step 3: kick queues */
+ ret = virtio_user_foreach_queue(dev, virtio_user_kick_queue);
+ if (ret < 0)
+ goto error;
+
+ ret = virtio_user_kick_queue(dev, dev->max_queue_pairs);
+ if (ret < 0)
+ goto error;
+
+ /* Step 4: enable queues */
+ for (int i = 0; i < dev->max_queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 1);
+ if (ret < 0)
+ goto error;
+ }
+
+ dev->started = true;
+
+ pthread_mutex_unlock(&dev->mutex);
+ rte_mcfg_mem_read_unlock();
+
+ return 0;
+error:
+ pthread_mutex_unlock(&dev->mutex);
+ rte_mcfg_mem_read_unlock();
+
+ PMD_INIT_LOG(ERR, "(%s) Failed to start device", dev->path);
+
+ /* TODO: free resource here or caller to check */
+ return -1;
+	/* TODO: free resources here, or have the caller check */
+
+int crypto_virtio_user_stop_device(struct virtio_user_dev *dev)
+{
+ uint32_t i;
+ int ret;
+
+ pthread_mutex_lock(&dev->mutex);
+ if (!dev->started)
+ goto out;
+
+ for (i = 0; i < dev->max_queue_pairs; ++i) {
+ ret = dev->ops->enable_qp(dev, i, 0);
+ if (ret < 0)
+ goto err;
+ }
+
+ if (dev->scvq) {
+ ret = dev->ops->cvq_enable(dev, 0);
+ if (ret < 0)
+ goto err;
+ }
+
+ /* Stop the backend. */
+ if (virtio_user_foreach_queue(dev, virtio_user_destroy_queue) < 0)
+ goto err;
+
+ dev->started = false;
+
+out:
+ pthread_mutex_unlock(&dev->mutex);
+
+ return 0;
+err:
+ pthread_mutex_unlock(&dev->mutex);
+
+ PMD_INIT_LOG(ERR, "(%s) Failed to stop device", dev->path);
+
+ return -1;
+}
+
+static int
+virtio_user_dev_init_max_queue_pairs(struct virtio_user_dev *dev, uint32_t user_max_qp)
+{
+ int ret;
+
+ if (!dev->ops->get_config) {
+ dev->max_queue_pairs = user_max_qp;
+ return 0;
+ }
+
+ ret = dev->ops->get_config(dev, (uint8_t *)&dev->max_queue_pairs,
+ offsetof(struct virtio_crypto_config, max_dataqueues),
+ sizeof(uint16_t));
+ if (ret) {
+ /*
+ * We need to know the max queue pair from the device so that
+ * the control queue gets the right index.
+ */
+ dev->max_queue_pairs = 1;
+ PMD_DRV_LOG(ERR, "(%s) Failed to get max queue pairs from device", dev->path);
+
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_dev_init_cipher_services(struct virtio_user_dev *dev)
+{
+ struct virtio_crypto_config config;
+ int ret;
+
+ dev->crypto_services = RTE_BIT32(VIRTIO_CRYPTO_SERVICE_CIPHER);
+ dev->cipher_algo = 0;
+ dev->auth_algo = 0;
+ dev->akcipher_algo = 0;
+
+ if (!dev->ops->get_config)
+ return 0;
+
+ ret = dev->ops->get_config(dev, (uint8_t *)&config, 0, sizeof(config));
+ if (ret) {
+ PMD_DRV_LOG(ERR, "(%s) Failed to get crypto config from device", dev->path);
+ return ret;
+ }
+
+ dev->crypto_services = config.crypto_services;
+ dev->cipher_algo = ((uint64_t)config.cipher_algo_h << 32) |
+ config.cipher_algo_l;
+ dev->hash_algo = config.hash_algo;
+ dev->auth_algo = ((uint64_t)config.mac_algo_h << 32) |
+ config.mac_algo_l;
+ dev->aead_algo = config.aead_algo;
+ dev->akcipher_algo = config.akcipher_algo;
+ return 0;
+}
+
+static int
+virtio_user_dev_init_notify(struct virtio_user_dev *dev)
+{
+ if (virtio_user_foreach_queue(dev, virtio_user_init_notify_queue) < 0)
+ goto err;
+
+ if (dev->device_features & (1ULL << VIRTIO_F_NOTIFICATION_DATA))
+ if (dev->ops->map_notification_area &&
+ dev->ops->map_notification_area(dev))
+ goto err;
+
+ return 0;
+err:
+ virtio_user_foreach_queue(dev, virtio_user_uninit_notify_queue);
+
+ return -1;
+}
+
+static void
+virtio_user_dev_uninit_notify(struct virtio_user_dev *dev)
+{
+ virtio_user_foreach_queue(dev, virtio_user_uninit_notify_queue);
+
+ if (dev->ops->unmap_notification_area && dev->notify_area)
+ dev->ops->unmap_notification_area(dev);
+}
+
+static void
+virtio_user_mem_event_cb(enum rte_mem_event type __rte_unused,
+ const void *addr,
+ size_t len __rte_unused,
+ void *arg)
+{
+ struct virtio_user_dev *dev = arg;
+ struct rte_memseg_list *msl;
+ uint16_t i;
+ int ret = 0;
+
+ /* ignore externally allocated memory */
+ msl = rte_mem_virt2memseg_list(addr);
+ if (msl->external)
+ return;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ if (dev->started == false)
+ goto exit;
+
+ /* Step 1: pause the active queues */
+ for (i = 0; i < dev->queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 0);
+ if (ret < 0)
+ goto exit;
+ }
+
+ /* Step 2: update memory regions */
+ ret = dev->ops->set_memory_table(dev);
+ if (ret < 0)
+ goto exit;
+
+ /* Step 3: resume the active queues */
+ for (i = 0; i < dev->queue_pairs; i++) {
+ ret = dev->ops->enable_qp(dev, i, 1);
+ if (ret < 0)
+ goto exit;
+ }
+
+exit:
+ pthread_mutex_unlock(&dev->mutex);
+
+ if (ret < 0)
+ PMD_DRV_LOG(ERR, "(%s) Failed to update memory table", dev->path);
+}
+
+static int
+virtio_user_dev_setup(struct virtio_user_dev *dev)
+{
+ if (dev->is_server) {
+ if (dev->backend_type != VIRTIO_USER_BACKEND_VHOST_USER) {
+ PMD_DRV_LOG(ERR, "Server mode only supports vhost-user!");
+ return -1;
+ }
+ }
+
+ switch (dev->backend_type) {
+ case VIRTIO_USER_BACKEND_VHOST_VDPA:
+ dev->ops = &virtio_crypto_ops_vdpa;
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "(%s) Unknown backend type", dev->path);
+ return -1;
+ }
+
+ if (dev->ops->setup(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to setup backend", dev->path);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+virtio_user_alloc_vrings(struct virtio_user_dev *dev)
+{
+ int i, size, nr_vrings;
+ bool packed_ring = !!(dev->device_features & (1ull << VIRTIO_F_RING_PACKED));
+
+ nr_vrings = dev->max_queue_pairs + 1;
+
+ dev->callfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->callfds), 0);
+ if (!dev->callfds) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc callfds", dev->path);
+ return -1;
+ }
+
+ dev->kickfds = rte_zmalloc("virtio_user_dev", nr_vrings * sizeof(*dev->kickfds), 0);
+ if (!dev->kickfds) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc kickfds", dev->path);
+ goto free_callfds;
+ }
+
+ for (i = 0; i < nr_vrings; i++) {
+ dev->callfds[i] = -1;
+ dev->kickfds[i] = -1;
+ }
+
+ if (packed_ring)
+ size = sizeof(*dev->vrings.packed);
+ else
+ size = sizeof(*dev->vrings.split);
+ dev->vrings.ptr = rte_zmalloc("virtio_user_dev", nr_vrings * size, 0);
+ if (!dev->vrings.ptr) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc vrings metadata", dev->path);
+ goto free_kickfds;
+ }
+
+ if (packed_ring) {
+ dev->packed_queues = rte_zmalloc("virtio_user_dev",
+ nr_vrings * sizeof(*dev->packed_queues), 0);
+ if (!dev->packed_queues) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc packed queues metadata",
+ dev->path);
+ goto free_vrings;
+ }
+ }
+
+ dev->qp_enabled = rte_zmalloc("virtio_user_dev",
+ nr_vrings * sizeof(*dev->qp_enabled), 0);
+ if (!dev->qp_enabled) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc QP enable states", dev->path);
+ goto free_packed_queues;
+ }
+
+ return 0;
+
+free_packed_queues:
+ rte_free(dev->packed_queues);
+ dev->packed_queues = NULL;
+free_vrings:
+ rte_free(dev->vrings.ptr);
+ dev->vrings.ptr = NULL;
+free_kickfds:
+ rte_free(dev->kickfds);
+ dev->kickfds = NULL;
+free_callfds:
+ rte_free(dev->callfds);
+ dev->callfds = NULL;
+
+ return -1;
+}
+
+static void
+virtio_user_free_vrings(struct virtio_user_dev *dev)
+{
+ rte_free(dev->qp_enabled);
+ dev->qp_enabled = NULL;
+ rte_free(dev->packed_queues);
+ dev->packed_queues = NULL;
+ rte_free(dev->vrings.ptr);
+ dev->vrings.ptr = NULL;
+ rte_free(dev->kickfds);
+ dev->kickfds = NULL;
+ rte_free(dev->callfds);
+ dev->callfds = NULL;
+}
+
+#define VIRTIO_USER_SUPPORTED_FEATURES \
+ (1ULL << VIRTIO_CRYPTO_SERVICE_CIPHER | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_HASH | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_AKCIPHER | \
+ 1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
+
+int
+crypto_virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
+ int queue_size, int server)
+{
+ uint64_t backend_features;
+
+ pthread_mutex_init(&dev->mutex, NULL);
+ strlcpy(dev->path, path, PATH_MAX);
+
+ dev->started = 0;
+ dev->queue_pairs = 1; /* mq disabled by default */
+	dev->max_queue_pairs = queues; /* may be overridden from device config below */
+ dev->queue_size = queue_size;
+ dev->is_server = server;
+ dev->frontend_features = 0;
+ dev->unsupported_features = 0;
+ dev->backend_type = VIRTIO_USER_BACKEND_VHOST_VDPA;
+ dev->hw.modern = 1;
+
+ if (virtio_user_dev_setup(dev) < 0) {
+		PMD_INIT_LOG(ERR, "(%s) Failed to set up backend", dev->path);
+ return -1;
+ }
+
+ if (dev->ops->set_owner(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to set backend owner", dev->path);
+ goto destroy;
+ }
+
+ if (dev->ops->get_backend_features(&backend_features) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get backend features", dev->path);
+ goto destroy;
+ }
+
+ dev->unsupported_features = ~(VIRTIO_USER_SUPPORTED_FEATURES | backend_features);
+
+ if (dev->ops->get_features(dev, &dev->device_features) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get device features", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_max_queue_pairs(dev, queues)) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get max queue pairs", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_cipher_services(dev)) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get cipher services", dev->path);
+ goto destroy;
+ }
+
+ dev->frontend_features &= ~dev->unsupported_features;
+ dev->device_features &= ~dev->unsupported_features;
+
+ if (virtio_user_alloc_vrings(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to allocate vring metadata", dev->path);
+ goto destroy;
+ }
+
+ if (virtio_user_dev_init_notify(dev) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to init notifiers", dev->path);
+ goto free_vrings;
+ }
+
+ if (rte_mem_event_callback_register(VIRTIO_USER_MEM_EVENT_CLB_NAME,
+ virtio_user_mem_event_cb, dev)) {
+ if (rte_errno != ENOTSUP) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to register mem event callback",
+ dev->path);
+ goto notify_uninit;
+ }
+ }
+
+ return 0;
+
+notify_uninit:
+ virtio_user_dev_uninit_notify(dev);
+free_vrings:
+ virtio_user_free_vrings(dev);
+destroy:
+ dev->ops->destroy(dev);
+
+ return -1;
+}
+
+void
+crypto_virtio_user_dev_uninit(struct virtio_user_dev *dev)
+{
+ crypto_virtio_user_stop_device(dev);
+
+ rte_mem_event_callback_unregister(VIRTIO_USER_MEM_EVENT_CLB_NAME, dev);
+
+ virtio_user_dev_uninit_notify(dev);
+
+ virtio_user_free_vrings(dev);
+
+ if (dev->is_server)
+ unlink(dev->path);
+
+ dev->ops->destroy(dev);
+}
+
+#define CVQ_MAX_DATA_DESCS 32
+
+static inline void *
+virtio_user_iova2virt(struct virtio_user_dev *dev __rte_unused, rte_iova_t iova)
+{
+ if (rte_eal_iova_mode() == RTE_IOVA_VA)
+ return (void *)(uintptr_t)iova;
+ else
+ return rte_mem_iova2virt(iova);
+}
+
+static inline int
+desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
+{
+ uint16_t flags = rte_atomic_load_explicit(&desc->flags, rte_memory_order_acquire);
+
+ return wrap_counter == !!(flags & VRING_PACKED_DESC_F_AVAIL) &&
+ wrap_counter != !!(flags & VRING_PACKED_DESC_F_USED);
+}
+
+int
+crypto_virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status)
+{
+ int ret;
+
+ pthread_mutex_lock(&dev->mutex);
+ dev->status = status;
+ ret = dev->ops->set_status(dev, status);
+ if (ret && ret != -ENOTSUP)
+ PMD_INIT_LOG(ERR, "(%s) Failed to set backend status", dev->path);
+
+ pthread_mutex_unlock(&dev->mutex);
+ return ret;
+}
+
+int
+crypto_virtio_user_dev_update_status(struct virtio_user_dev *dev)
+{
+ int ret;
+ uint8_t status;
+
+ pthread_mutex_lock(&dev->mutex);
+
+ ret = dev->ops->get_status(dev, &status);
+ if (!ret) {
+ dev->status = status;
+ PMD_INIT_LOG(DEBUG, "Updated Device Status(0x%08x):"
+ "\t-RESET: %u "
+ "\t-ACKNOWLEDGE: %u "
+ "\t-DRIVER: %u "
+ "\t-DRIVER_OK: %u "
+ "\t-FEATURES_OK: %u "
+ "\t-DEVICE_NEED_RESET: %u "
+ "\t-FAILED: %u",
+ dev->status,
+ (dev->status == VIRTIO_CONFIG_STATUS_RESET),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_ACK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DRIVER),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DRIVER_OK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_FEATURES_OK),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_DEV_NEED_RESET),
+ !!(dev->status & VIRTIO_CONFIG_STATUS_FAILED));
+ } else if (ret != -ENOTSUP) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to get backend status", dev->path);
+ }
+
+ pthread_mutex_unlock(&dev->mutex);
+ return ret;
+}
+
+int
+crypto_virtio_user_dev_update_link_state(struct virtio_user_dev *dev)
+{
+ if (dev->ops->update_link_state)
+ return dev->ops->update_link_state(dev);
+
+ return 0;
+}
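Not part of the patch: the expected call order of the helpers above, as
driven by the virtio-user frontend during probe. A sketch with illustrative
arguments (one queue pair, 256 descriptors, client mode) and error handling
elided:

static int
example_dev_lifecycle(struct virtio_user_dev *dev, char *path)
{
	/* open the backend, negotiate backend features, read config */
	if (crypto_virtio_user_dev_init(dev, path, 1, 256, 0) < 0)
		return -1;

	/* FEATURES_OK stage: push the negotiated features to the backend */
	if (crypto_virtio_user_dev_set_features(dev) < 0)
		return -1;

	/* DRIVER_OK stage: share the memory table, kick and enable queues */
	if (crypto_virtio_user_start_device(dev) < 0)
		return -1;

	/* ... datapath runs ... */

	crypto_virtio_user_stop_device(dev);
	crypto_virtio_user_dev_uninit(dev);
	return 0;
}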
diff --git a/drivers/crypto/virtio/virtio_user/virtio_user_dev.h b/drivers/crypto/virtio/virtio_user/virtio_user_dev.h
new file mode 100644
index 0000000000..9cd9856e5d
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user/virtio_user_dev.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell.
+ */
+
+#ifndef _VIRTIO_USER_DEV_H
+#define _VIRTIO_USER_DEV_H
+
+#include <limits.h>
+#include <stdbool.h>
+
+#include "../virtio_pci.h"
+#include "../virtio_ring.h"
+
+extern struct virtio_user_backend_ops virtio_crypto_ops_vdpa;
+
+enum virtio_user_backend_type {
+ VIRTIO_USER_BACKEND_UNKNOWN,
+ VIRTIO_USER_BACKEND_VHOST_USER,
+ VIRTIO_USER_BACKEND_VHOST_VDPA,
+};
+
+struct virtio_user_queue {
+ uint16_t used_idx;
+ bool avail_wrap_counter;
+ bool used_wrap_counter;
+};
+
+struct virtio_user_dev {
+ struct virtio_crypto_hw hw;
+ enum virtio_user_backend_type backend_type;
+ bool is_server; /* server or client mode */
+
+ int *callfds;
+ int *kickfds;
+ uint16_t max_queue_pairs;
+ uint16_t queue_pairs;
+ uint32_t queue_size;
+ uint64_t features; /* the negotiated features with driver,
+ * and will be sync with device
+ */
+ uint64_t device_features; /* supported features by device */
+ uint64_t frontend_features; /* enabled frontend features */
+ uint64_t unsupported_features; /* unsupported features mask */
+ uint8_t status;
+ uint32_t crypto_status;
+ uint32_t crypto_services;
+ uint64_t cipher_algo;
+ uint32_t hash_algo;
+ uint64_t auth_algo;
+ uint32_t aead_algo;
+ uint32_t akcipher_algo;
+ char path[PATH_MAX];
+
+ union {
+ void *ptr;
+ struct vring *split;
+ struct vring_packed *packed;
+ } vrings;
+
+ struct virtio_user_queue *packed_queues;
+ bool *qp_enabled;
+
+ struct virtio_user_backend_ops *ops;
+ pthread_mutex_t mutex;
+ bool started;
+
+ bool hw_cvq;
+ struct virtqueue *scvq;
+
+ void *backend_data;
+
+ uint16_t **notify_area;
+};
+
+int crypto_virtio_user_dev_set_features(struct virtio_user_dev *dev);
+int crypto_virtio_user_start_device(struct virtio_user_dev *dev);
+int crypto_virtio_user_stop_device(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_init(struct virtio_user_dev *dev, char *path, uint16_t queues,
+ int queue_size, int server);
+void crypto_virtio_user_dev_uninit(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_set_status(struct virtio_user_dev *dev, uint8_t status);
+int crypto_virtio_user_dev_update_status(struct virtio_user_dev *dev);
+int crypto_virtio_user_dev_update_link_state(struct virtio_user_dev *dev);
+extern const char * const crypto_virtio_user_backend_strings[];
+#endif
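Not part of the patch: which member of the vrings union above is valid
depends on whether VIRTIO_F_RING_PACKED was negotiated, as in this sketch
(helper name is illustrative):

static inline uint16_t
example_ring_num(struct virtio_user_dev *dev, uint16_t qidx)
{
	if (dev->features & (1ULL << VIRTIO_F_RING_PACKED))
		return dev->vrings.packed[qidx].num;

	return dev->vrings.split[qidx].num;
}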
diff --git a/drivers/crypto/virtio/virtio_user_cryptodev.c b/drivers/crypto/virtio/virtio_user_cryptodev.c
new file mode 100644
index 0000000000..6dfdb76268
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_user_cryptodev.c
@@ -0,0 +1,575 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Marvell
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <unistd.h>
+#include <fcntl.h>
+
+#include <rte_malloc.h>
+#include <rte_kvargs.h>
+#include <bus_vdev_driver.h>
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include <rte_alarm.h>
+#include <rte_cycles.h>
+#include <rte_io.h>
+
+#include "virtio_user/virtio_user_dev.h"
+#include "virtio_user/vhost.h"
+#include "virtio_cryptodev.h"
+#include "virtio_logs.h"
+#include "virtio_pci.h"
+#include "virtqueue.h"
+
+#define virtio_user_get_dev(hwp) container_of(hwp, struct virtio_user_dev, hw)
+
+static void
+virtio_user_read_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+ void *dst, int length __rte_unused)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (offset == offsetof(struct virtio_crypto_config, status)) {
+ crypto_virtio_user_dev_update_link_state(dev);
+ *(uint32_t *)dst = dev->crypto_status;
+ } else if (offset == offsetof(struct virtio_crypto_config, max_dataqueues))
+ *(uint16_t *)dst = dev->max_queue_pairs;
+ else if (offset == offsetof(struct virtio_crypto_config, crypto_services))
+ *(uint32_t *)dst = dev->crypto_services;
+ else if (offset == offsetof(struct virtio_crypto_config, cipher_algo_l))
+		*(uint32_t *)dst = dev->cipher_algo & 0xFFFFFFFF;
+ else if (offset == offsetof(struct virtio_crypto_config, cipher_algo_h))
+ *(uint32_t *)dst = dev->cipher_algo >> 32;
+ else if (offset == offsetof(struct virtio_crypto_config, hash_algo))
+ *(uint32_t *)dst = dev->hash_algo;
+ else if (offset == offsetof(struct virtio_crypto_config, mac_algo_l))
+		*(uint32_t *)dst = dev->auth_algo & 0xFFFFFFFF;
+ else if (offset == offsetof(struct virtio_crypto_config, mac_algo_h))
+ *(uint32_t *)dst = dev->auth_algo >> 32;
+ else if (offset == offsetof(struct virtio_crypto_config, aead_algo))
+ *(uint32_t *)dst = dev->aead_algo;
+ else if (offset == offsetof(struct virtio_crypto_config, akcipher_algo))
+ *(uint32_t *)dst = dev->akcipher_algo;
+}
+
+static void
+virtio_user_write_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+ const void *src, int length)
+{
+ RTE_SET_USED(hw);
+ RTE_SET_USED(src);
+
+ PMD_DRV_LOG(ERR, "not supported offset=%zu, len=%d",
+ offset, length);
+}
+
+static void
+virtio_user_reset(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (dev->status & VIRTIO_CONFIG_STATUS_DRIVER_OK)
+ crypto_virtio_user_stop_device(dev);
+}
+
+static void
+virtio_user_set_status(struct virtio_crypto_hw *hw, uint8_t status)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+ uint8_t old_status = dev->status;
+
+ if (status & VIRTIO_CONFIG_STATUS_FEATURES_OK &&
+ ~old_status & VIRTIO_CONFIG_STATUS_FEATURES_OK) {
+ crypto_virtio_user_dev_set_features(dev);
+ /* Feature negotiation should be only done in probe time.
+ * So we skip any more request here.
+ */
+ dev->status |= VIRTIO_CONFIG_STATUS_FEATURES_OK;
+ }
+
+ if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK) {
+ if (crypto_virtio_user_start_device(dev)) {
+ crypto_virtio_user_dev_update_status(dev);
+ return;
+ }
+ } else if (status == VIRTIO_CONFIG_STATUS_RESET) {
+ virtio_user_reset(hw);
+ }
+
+ crypto_virtio_user_dev_set_status(dev, status);
+ if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK && dev->scvq) {
+ if (dev->ops->cvq_enable(dev, 1) < 0) {
+ PMD_INIT_LOG(ERR, "(%s) Failed to start ctrlq", dev->path);
+ crypto_virtio_user_dev_update_status(dev);
+ return;
+ }
+ }
+}
+
+static uint8_t
+virtio_user_get_status(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ crypto_virtio_user_dev_update_status(dev);
+
+ return dev->status;
+}
+
+#define VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES \
+ (1ULL << VIRTIO_CRYPTO_SERVICE_CIPHER | \
+ 1ULL << VIRTIO_CRYPTO_SERVICE_AKCIPHER | \
+ 1ULL << VIRTIO_F_VERSION_1 | \
+ 1ULL << VIRTIO_F_IN_ORDER | \
+ 1ULL << VIRTIO_F_RING_PACKED | \
+ 1ULL << VIRTIO_F_NOTIFICATION_DATA | \
+ 1ULL << VIRTIO_RING_F_INDIRECT_DESC | \
+ 1ULL << VIRTIO_F_ORDER_PLATFORM)
+
+static uint64_t
+virtio_user_get_features(struct virtio_crypto_hw *hw)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+	/* mask out feature bits not supported by the frontend */
+ return (dev->device_features | dev->frontend_features) &
+ VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES;
+}
+
+static void
+virtio_user_set_features(struct virtio_crypto_hw *hw, uint64_t features)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ dev->features = features & (dev->device_features | dev->frontend_features);
+}
+
+static uint8_t
+virtio_user_get_isr(struct virtio_crypto_hw *hw __rte_unused)
+{
+ /* rxq interrupts and config interrupt are separated in virtio-user,
+ * here we only report config change.
+ */
+ return VIRTIO_PCI_CAP_ISR_CFG;
+}
+
+static uint16_t
+virtio_user_set_config_irq(struct virtio_crypto_hw *hw __rte_unused,
+ uint16_t vec __rte_unused)
+{
+ return 0;
+}
+
+static uint16_t
+virtio_user_set_queue_irq(struct virtio_crypto_hw *hw __rte_unused,
+ struct virtqueue *vq __rte_unused,
+ uint16_t vec)
+{
+ /* pretend we have done that */
+ return vec;
+}
+
+/* This function returns the queue size, i.e. the number of descriptors, of a
+ * specified queue. It differs from VHOST_USER_GET_QUEUE_NUM, which returns the
+ * maximum number of supported queues.
+ */
+static uint16_t
+virtio_user_get_queue_num(struct virtio_crypto_hw *hw, uint16_t queue_id __rte_unused)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+	/* Currently, all queues share the same size */
+ return dev->queue_size;
+}
+
+static void
+virtio_user_setup_queue_packed(struct virtqueue *vq,
+ struct virtio_user_dev *dev)
+{
+ uint16_t queue_idx = vq->vq_queue_index;
+ struct vring_packed *vring;
+ uint64_t desc_addr;
+ uint64_t avail_addr;
+ uint64_t used_addr;
+ uint16_t i;
+
+ vring = &dev->vrings.packed[queue_idx];
+ desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
+ avail_addr = desc_addr + vq->vq_nentries *
+ sizeof(struct vring_packed_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr +
+ sizeof(struct vring_packed_desc_event),
+ VIRTIO_VRING_ALIGN);
+ vring->num = vq->vq_nentries;
+ vring->desc_iova = vq->vq_ring_mem;
+ vring->desc = (void *)(uintptr_t)desc_addr;
+ vring->driver = (void *)(uintptr_t)avail_addr;
+ vring->device = (void *)(uintptr_t)used_addr;
+ dev->packed_queues[queue_idx].avail_wrap_counter = true;
+ dev->packed_queues[queue_idx].used_wrap_counter = true;
+ dev->packed_queues[queue_idx].used_idx = 0;
+
+ for (i = 0; i < vring->num; i++)
+ vring->desc[i].flags = 0;
+}
+
+static void
+virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev)
+{
+ uint16_t queue_idx = vq->vq_queue_index;
+ uint64_t desc_addr, avail_addr, used_addr;
+
+ desc_addr = (uintptr_t)vq->vq_ring_virt_mem;
+ avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
+ used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+ ring[vq->vq_nentries]),
+ VIRTIO_VRING_ALIGN);
+
+ dev->vrings.split[queue_idx].num = vq->vq_nentries;
+ dev->vrings.split[queue_idx].desc_iova = vq->vq_ring_mem;
+ dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr;
+ dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr;
+ dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr;
+}
+
+static int
+virtio_user_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+
+ if (vtpci_with_packed_queue(hw))
+ virtio_user_setup_queue_packed(vq, dev);
+ else
+ virtio_user_setup_queue_split(vq, dev);
+
+ if (dev->notify_area)
+ vq->notify_addr = dev->notify_area[vq->vq_queue_index];
+
+ if (virtcrypto_cq_to_vq(hw->cvq) == vq)
+ dev->scvq = virtcrypto_cq_to_vq(hw->cvq);
+
+ return 0;
+}
+
+static void
+virtio_user_del_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ RTE_SET_USED(hw);
+ RTE_SET_USED(vq);
+}
+
+static void
+virtio_user_notify_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+ struct virtio_user_dev *dev = virtio_user_get_dev(hw);
+ uint64_t notify_data = 1;
+
+ if (!dev->notify_area) {
+		if (write(dev->kickfds[vq->vq_queue_index], &notify_data,
+ sizeof(notify_data)) < 0)
+ PMD_DRV_LOG(ERR, "failed to kick backend: %s",
+ strerror(errno));
+ return;
+ } else if (!vtpci_with_feature(hw, VIRTIO_F_NOTIFICATION_DATA)) {
+ rte_write16(vq->vq_queue_index, vq->notify_addr);
+ return;
+ }
+
+ if (vtpci_with_packed_queue(hw)) {
+ /* Bit[0:15]: vq queue index
+ * Bit[16:30]: avail index
+ * Bit[31]: avail wrap counter
+ */
+ notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags &
+ VRING_PACKED_DESC_F_AVAIL)) << 31) |
+ ((uint32_t)vq->vq_avail_idx << 16) |
+ vq->vq_queue_index;
+ } else {
+ /* Bit[0:15]: vq queue index
+ * Bit[16:31]: avail index
+ */
+ notify_data = ((uint32_t)vq->vq_avail_idx << 16) |
+ vq->vq_queue_index;
+ }
+ rte_write32(notify_data, vq->notify_addr);
+}
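+/* Worked example (illustrative): a packed-ring kick of queue 1 with avail
+ * index 5 and the avail wrap counter set writes
+ * (1u << 31) | (5u << 16) | 1 = 0x80050001.
+ */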
+
+const struct virtio_pci_ops crypto_virtio_user_ops = {
+ .read_dev_cfg = virtio_user_read_dev_config,
+ .write_dev_cfg = virtio_user_write_dev_config,
+ .reset = virtio_user_reset,
+ .get_status = virtio_user_get_status,
+ .set_status = virtio_user_set_status,
+ .get_features = virtio_user_get_features,
+ .set_features = virtio_user_set_features,
+ .get_isr = virtio_user_get_isr,
+ .set_config_irq = virtio_user_set_config_irq,
+ .set_queue_irq = virtio_user_set_queue_irq,
+ .get_queue_num = virtio_user_get_queue_num,
+ .setup_queue = virtio_user_setup_queue,
+ .del_queue = virtio_user_del_queue,
+ .notify_queue = virtio_user_notify_queue,
+};
+
+static const char * const valid_args[] = {
+#define VIRTIO_USER_ARG_QUEUES_NUM "queues"
+ VIRTIO_USER_ARG_QUEUES_NUM,
+#define VIRTIO_USER_ARG_QUEUE_SIZE "queue_size"
+ VIRTIO_USER_ARG_QUEUE_SIZE,
+#define VIRTIO_USER_ARG_PATH "path"
+ VIRTIO_USER_ARG_PATH,
+ NULL
+};
+
+#define VIRTIO_USER_DEF_Q_NUM 1
+#define VIRTIO_USER_DEF_Q_SZ 256
+#define VIRTIO_USER_DEF_SERVER_MODE 0
+
+static int
+get_string_arg(const char *key __rte_unused,
+ const char *value, void *extra_args)
+{
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ *(char **)extra_args = strdup(value);
+
+ if (!*(char **)extra_args)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static int
+get_integer_arg(const char *key __rte_unused,
+ const char *value, void *extra_args)
+{
+ uint64_t integer = 0;
+ if (!value || !extra_args)
+ return -EINVAL;
+ errno = 0;
+ integer = strtoull(value, NULL, 0);
+ /* extra_args keeps default value, it should be replaced
+ * only in case of successful parsing of the 'value' arg
+ */
+ if (errno == 0)
+ *(uint64_t *)extra_args = integer;
+ return -errno;
+}
+
+static struct rte_cryptodev *
+virtio_user_cryptodev_alloc(struct rte_vdev_device *vdev)
+{
+ struct rte_cryptodev_pmd_init_params init_params = {
+ .name = "",
+ .private_data_size = sizeof(struct virtio_user_dev),
+ };
+ struct rte_cryptodev_data *data;
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ struct virtio_crypto_hw *hw;
+
+ init_params.socket_id = vdev->device.numa_node;
+ cryptodev = rte_cryptodev_pmd_create(vdev->device.name, &vdev->device, &init_params);
+ if (cryptodev == NULL) {
+ PMD_INIT_LOG(ERR, "failed to create cryptodev vdev");
+ return NULL;
+ }
+
+ data = cryptodev->data;
+ dev = data->dev_private;
+ hw = &dev->hw;
+
+ hw->dev_id = data->dev_id;
+ VTPCI_OPS(hw) = &crypto_virtio_user_ops;
+
+ return cryptodev;
+}
+
+static void
+virtio_user_cryptodev_free(struct rte_cryptodev *cryptodev)
+{
+ rte_cryptodev_pmd_destroy(cryptodev);
+}
+
+static int
+virtio_user_pmd_probe(struct rte_vdev_device *vdev)
+{
+ uint64_t server_mode = VIRTIO_USER_DEF_SERVER_MODE;
+ uint64_t queue_size = VIRTIO_USER_DEF_Q_SZ;
+ uint64_t queues = VIRTIO_USER_DEF_Q_NUM;
+ struct rte_cryptodev *cryptodev = NULL;
+ struct rte_kvargs *kvlist = NULL;
+ struct virtio_user_dev *dev;
+ char *path = NULL;
+	int ret = -1;
+
+ kvlist = rte_kvargs_parse(rte_vdev_device_args(vdev), valid_args);
+
+ if (!kvlist) {
+		PMD_INIT_LOG(ERR, "Failed to parse device arguments");
+ goto end;
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_PATH) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_PATH,
+ &get_string_arg, &path) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_PATH);
+ goto end;
+ }
+ } else {
+ PMD_INIT_LOG(ERR, "arg %s is mandatory for virtio_user",
+ VIRTIO_USER_ARG_PATH);
+ goto end;
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUES_NUM) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUES_NUM,
+ &get_integer_arg, &queues) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_QUEUES_NUM);
+ goto end;
+ }
+ }
+
+ if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE) == 1) {
+ if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_QUEUE_SIZE,
+ &get_integer_arg, &queue_size) < 0) {
+ PMD_INIT_LOG(ERR, "error to parse %s",
+ VIRTIO_USER_ARG_QUEUE_SIZE);
+ goto end;
+ }
+ }
+
+ cryptodev = virtio_user_cryptodev_alloc(vdev);
+ if (!cryptodev) {
+ PMD_INIT_LOG(ERR, "virtio_user fails to alloc device");
+ goto end;
+ }
+
+ dev = cryptodev->data->dev_private;
+ if (crypto_virtio_user_dev_init(dev, path, queues, queue_size,
+ server_mode) < 0) {
+ PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
+ virtio_user_cryptodev_free(cryptodev);
+ goto end;
+ }
+
+ if (crypto_virtio_dev_init(cryptodev, VIRTIO_USER_CRYPTO_PMD_GUEST_FEATURES,
+ NULL) < 0) {
+ PMD_INIT_LOG(ERR, "crypto_virtio_dev_init fails");
+ crypto_virtio_user_dev_uninit(dev);
+ virtio_user_cryptodev_free(cryptodev);
+ goto end;
+ }
+
+ rte_cryptodev_pmd_probing_finish(cryptodev);
+
+ ret = 0;
+end:
+ rte_kvargs_free(kvlist);
+ free(path);
+ return ret;
+}
+
+static int
+virtio_user_pmd_remove(struct rte_vdev_device *vdev)
+{
+ struct rte_cryptodev *cryptodev;
+ const char *name;
+ int devid;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ PMD_DRV_LOG(INFO, "Removing %s", name);
+
+ devid = rte_cryptodev_get_dev_id(name);
+ if (devid < 0)
+ return -EINVAL;
+
+ rte_cryptodev_stop(devid);
+
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -ENODEV;
+
+ if (rte_cryptodev_pmd_destroy(cryptodev) < 0) {
+ PMD_DRV_LOG(ERR, "Failed to remove %s", name);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int virtio_user_pmd_dma_map(struct rte_vdev_device *vdev, void *addr,
+ uint64_t iova, size_t len)
+{
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ const char *name;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -EINVAL;
+
+ dev = cryptodev->data->dev_private;
+
+ if (dev->ops->dma_map)
+ return dev->ops->dma_map(dev, addr, iova, len);
+
+ return 0;
+}
+
+static int virtio_user_pmd_dma_unmap(struct rte_vdev_device *vdev, void *addr,
+ uint64_t iova, size_t len)
+{
+ struct rte_cryptodev *cryptodev;
+ struct virtio_user_dev *dev;
+ const char *name;
+
+ if (!vdev)
+ return -EINVAL;
+
+ name = rte_vdev_device_name(vdev);
+ cryptodev = rte_cryptodev_pmd_get_named_dev(name);
+ if (cryptodev == NULL)
+ return -EINVAL;
+
+ dev = cryptodev->data->dev_private;
+
+ if (dev->ops->dma_unmap)
+ return dev->ops->dma_unmap(dev, addr, iova, len);
+
+ return 0;
+}
+
+static struct rte_vdev_driver virtio_user_driver = {
+ .probe = virtio_user_pmd_probe,
+ .remove = virtio_user_pmd_remove,
+ .dma_map = virtio_user_pmd_dma_map,
+ .dma_unmap = virtio_user_pmd_dma_unmap,
+};
+
+static struct cryptodev_driver virtio_crypto_drv;
+
+uint8_t cryptodev_virtio_user_driver_id;
+
+RTE_PMD_REGISTER_VDEV(crypto_virtio_user, virtio_user_driver);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv,
+ virtio_user_driver.driver,
+ cryptodev_virtio_user_driver_id);
+RTE_PMD_REGISTER_PARAM_STRING(crypto_virtio_user,
+ "path=<path> "
+ "queues=<int> "
+ "queue_size=<int>");
--
2.25.1
* [v3 5/6] test/crypto: add asymmetric tests for virtio PMD
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
` (3 preceding siblings ...)
2025-02-21 17:41 ` [v3 4/6] crypto/virtio: add vDPA backend Gowrishankar Muthukrishnan
@ 2025-02-21 17:41 ` Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 6/6] test/crypto: add tests for virtio user PMD Gowrishankar Muthukrishnan
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:41 UTC (permalink / raw)
To: dev, Akhil Goyal, Fan Zhang; +Cc: anoobj, Gowrishankar Muthukrishnan
Add asymmetric tests for Virtio PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
app/test/test_cryptodev_asym.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9b5f3c545e..ac47be724f 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -4023,6 +4023,19 @@ static struct unit_test_suite cryptodev_octeontx_asym_testsuite = {
}
};
+static struct unit_test_suite cryptodev_virtio_asym_testsuite = {
+ .suite_name = "Crypto Device VIRTIO ASYM Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_capability),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym,
+ test_rsa_sign_verify_crt),
+ TEST_CASE_ST(ut_setup_asym, ut_teardown_asym, test_rsa_enc_dec_crt),
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
static int
test_cryptodev_openssl_asym(void)
{
@@ -4091,8 +4104,23 @@ test_cryptodev_cn10k_asym(void)
return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
}
+static int
+test_cryptodev_virtio_asym(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_VIRTIO_PMD));
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "virtio PMD must be loaded.\n");
+ return TEST_FAILED;
+ }
+
+ /* Use test suite registered for crypto_virtio PMD */
+ return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
+}
+
REGISTER_DRIVER_TEST(cryptodev_openssl_asym_autotest, test_cryptodev_openssl_asym);
REGISTER_DRIVER_TEST(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_DRIVER_TEST(cryptodev_octeontx_asym_autotest, test_cryptodev_octeontx_asym);
REGISTER_DRIVER_TEST(cryptodev_cn9k_asym_autotest, test_cryptodev_cn9k_asym);
REGISTER_DRIVER_TEST(cryptodev_cn10k_asym_autotest, test_cryptodev_cn10k_asym);
+REGISTER_DRIVER_TEST(cryptodev_virtio_asym_autotest, test_cryptodev_virtio_asym);
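Once registered, the suite can be invoked from the dpdk-test binary by
its autotest name (a sketch; the PCI address of the virtio-crypto
device is a placeholder for a local setup):
  ./dpdk-test -l 0-1 -a 0000:00:04.0
  RTE>> cryptodev_virtio_asym_autotest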
--
2.25.1
* [v3 6/6] test/crypto: add tests for virtio user PMD
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
` (4 preceding siblings ...)
2025-02-21 17:41 ` [v3 5/6] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
@ 2025-02-21 17:41 ` Gowrishankar Muthukrishnan
5 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-21 17:41 UTC (permalink / raw)
To: dev, Akhil Goyal, Fan Zhang; +Cc: anoobj, Gowrishankar Muthukrishnan
Reuse the virtio_crypto test suite for testing the virtio_crypto_user PMD.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
app/test/test_cryptodev.c | 7 +++++++
app/test/test_cryptodev.h | 1 +
app/test/test_cryptodev_asym.c | 15 +++++++++++++++
3 files changed, 23 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 441ecc6ad5..60aacdc155 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -19737,6 +19737,12 @@ test_cryptodev_virtio(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_VIRTIO_PMD));
}
+static int
+test_cryptodev_virtio_user(void)
+{
+ return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_VIRTIO_USER_PMD));
+}
+
static int
test_cryptodev_aesni_mb(void)
{
@@ -20074,6 +20080,7 @@ REGISTER_DRIVER_TEST(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
REGISTER_DRIVER_TEST(cryptodev_ccp_autotest, test_cryptodev_ccp);
REGISTER_DRIVER_TEST(cryptodev_uadk_autotest, test_cryptodev_uadk);
REGISTER_DRIVER_TEST(cryptodev_virtio_autotest, test_cryptodev_virtio);
+REGISTER_DRIVER_TEST(cryptodev_virtio_user_autotest, test_cryptodev_virtio_user);
REGISTER_DRIVER_TEST(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
REGISTER_DRIVER_TEST(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_DRIVER_TEST(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index bb54a33d62..f6c7478f19 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -64,6 +64,7 @@
#define CRYPTODEV_NAME_MVSAM_PMD crypto_mvsam
#define CRYPTODEV_NAME_CCP_PMD crypto_ccp
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
+#define CRYPTODEV_NAME_VIRTIO_USER_PMD crypto_virtio_user
#define CRYPTODEV_NAME_OCTEONTX_SYM_PMD crypto_octeontx
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index ac47be724f..a98e3dc824 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -4118,9 +4118,24 @@ test_cryptodev_virtio_asym(void)
return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
}
+static int
+test_cryptodev_virtio_user_asym(void)
+{
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_VIRTIO_USER_PMD));
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "virtio user PMD must be loaded.\n");
+ return TEST_FAILED;
+ }
+
+ /* Use test suite registered for crypto_virtio_user PMD */
+ return unit_test_suite_runner(&cryptodev_virtio_asym_testsuite);
+}
+
REGISTER_DRIVER_TEST(cryptodev_openssl_asym_autotest, test_cryptodev_openssl_asym);
REGISTER_DRIVER_TEST(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_DRIVER_TEST(cryptodev_octeontx_asym_autotest, test_cryptodev_octeontx_asym);
REGISTER_DRIVER_TEST(cryptodev_cn9k_asym_autotest, test_cryptodev_cn9k_asym);
REGISTER_DRIVER_TEST(cryptodev_cn10k_asym_autotest, test_cryptodev_cn10k_asym);
REGISTER_DRIVER_TEST(cryptodev_virtio_asym_autotest, test_cryptodev_virtio_asym);
+REGISTER_DRIVER_TEST(cryptodev_virtio_user_asym_autotest, test_cryptodev_virtio_user_asym);
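As with the other autotests, the virtio-user variant can be run against
a vhost-vdpa backend (a sketch; the device path is a placeholder):
  ./dpdk-test -l 0-1 --vdev="crypto_virtio_user,path=/dev/vhost-vdpa-0"
  RTE>> cryptodev_virtio_user_asym_autotest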
--
2.25.1
* [v4 0/5] vhost: add RSA support
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
` (4 preceding siblings ...)
2025-02-21 17:30 ` [v3 5/5] examples/vhost_crypto: support asymmetric crypto Gowrishankar Muthukrishnan
@ 2025-02-22 8:38 ` Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 1/5] vhost: skip crypto op fetch before vring init Gowrishankar Muthukrishnan
` (4 more replies)
5 siblings, 5 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-22 8:38 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
This patch series adds asymmetric RSA support to the vhost crypto
library. It also includes changes to improve the library:
* support newer QEMU versions.
* fix broken vhost_crypto example application.
* stabilize crypto fastpath operations.
v4:
- fixed CI issues.
Gowrishankar Muthukrishnan (5):
vhost: skip crypto op fetch before vring init
vhost: update vhost_user crypto session parameters
examples/vhost_crypto: fix user callbacks
vhost: support asymmetric RSA crypto ops
examples/vhost_crypto: support asymmetric crypto
examples/vhost_crypto/main.c | 54 +++-
lib/vhost/vhost_crypto.c | 508 ++++++++++++++++++++++++++++++++---
lib/vhost/vhost_user.h | 33 ++-
lib/vhost/virtio_crypto.h | 67 +++++
4 files changed, 603 insertions(+), 59 deletions(-)
--
2.25.1
* [v4 1/5] vhost: skip crypto op fetch before vring init
2025-02-22 8:38 ` [v4 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
@ 2025-02-22 8:38 ` Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 2/5] vhost: update vhost_user crypto session parameters Gowrishankar Muthukrishnan
` (3 subsequent siblings)
4 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-22 8:38 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan, stable
Until the virtio avail ring is initialized (by VHOST_USER_SET_VRING_ADDR),
the worker thread should not try to fetch crypto ops: dereferencing the
uninitialized ring would lead to a memory fault. Guard against both an
unallocated virtqueue and an uninitialized avail ring.
Fixes: 939066d96563 ("vhost/crypto: add public function implementation")
Cc: stable@dpdk.org
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
lib/vhost/vhost_crypto.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 3dc41a3bd5..55ea24710e 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -1580,6 +1580,16 @@ rte_vhost_crypto_fetch_requests(int vid, uint32_t qid,
vq = dev->virtqueue[qid];
+ if (unlikely(vq == NULL)) {
+ VC_LOG_ERR("Invalid virtqueue %u", qid);
+ return 0;
+ }
+
+ if (unlikely(vq->avail == NULL)) {
+ VC_LOG_DBG("Virtqueue ring not yet initialized %u", qid);
+ return 0;
+ }
+
avail_idx = *((volatile uint16_t *)&vq->avail->idx);
start_idx = vq->last_used_idx;
count = avail_idx - start_idx;
--
2.25.1
* [v4 2/5] vhost: update vhost_user crypto session parameters
2025-02-22 8:38 ` [v4 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 1/5] vhost: skip crypto op fetch before vring init Gowrishankar Muthukrishnan
@ 2025-02-22 8:38 ` Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 3/5] examples/vhost_crypto: fix user callbacks Gowrishankar Muthukrishnan
` (2 subsequent siblings)
4 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-22 8:38 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
As per the vhost-user spec, the session id should be located at the
end of the session parameter structure.
Update the VhostUserCryptoSessionParam structure to support newer QEMU
versions.
Due to the additional parameters added in QEMU, the payload received
from QEMU is larger than the existing payload, which breaks parsing of
the vhost-user message.
This patch addresses both of the above problems.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
lib/vhost/vhost_crypto.c | 12 ++++++------
lib/vhost/vhost_user.h | 33 +++++++++++++++++++++++++++++----
2 files changed, 35 insertions(+), 10 deletions(-)
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 55ea24710e..05f3c85884 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -237,7 +237,7 @@ struct vhost_crypto_data_req {
static int
transform_cipher_param(struct rte_crypto_sym_xform *xform,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
int ret;
@@ -273,7 +273,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform,
static int
transform_chain_param(struct rte_crypto_sym_xform *xforms,
- VhostUserCryptoSessionParam *param)
+ VhostUserCryptoSymSessionParam *param)
{
struct rte_crypto_sym_xform *xform_cipher, *xform_auth;
int ret;
@@ -341,10 +341,10 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
struct rte_cryptodev_sym_session *session;
int ret;
- switch (sess_param->op_type) {
+ switch (sess_param->u.sym_sess.op_type) {
case VIRTIO_CRYPTO_SYM_OP_NONE:
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- ret = transform_cipher_param(&xform1, sess_param);
+ ret = transform_cipher_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session msg (%i)", ret);
sess_param->session_id = ret;
@@ -352,7 +352,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
}
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- if (unlikely(sess_param->hash_mode !=
+ if (unlikely(sess_param->u.sym_sess.hash_mode !=
VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)) {
sess_param->session_id = -VIRTIO_CRYPTO_NOTSUPP;
VC_LOG_ERR("Error transform session message (%i)",
@@ -362,7 +362,7 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
xform1.next = &xform2;
- ret = transform_chain_param(&xform1, sess_param);
+ ret = transform_chain_param(&xform1, &sess_param->u.sym_sess);
if (unlikely(ret)) {
VC_LOG_ERR("Error transform session message (%i)", ret);
sess_param->session_id = ret;
diff --git a/lib/vhost/vhost_user.h b/lib/vhost/vhost_user.h
index 9a905ee5f4..ef486545ba 100644
--- a/lib/vhost/vhost_user.h
+++ b/lib/vhost/vhost_user.h
@@ -99,11 +99,10 @@ typedef struct VhostUserLog {
/* Comply with Cryptodev-Linux */
#define VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH 512
#define VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH 64
+#define VHOST_USER_CRYPTO_MAX_KEY_LENGTH 1024
/* Same structure as vhost-user backend session info */
-typedef struct VhostUserCryptoSessionParam {
- int64_t session_id;
- uint32_t op_code;
+typedef struct VhostUserCryptoSymSessionParam {
uint32_t cipher_algo;
uint32_t cipher_key_len;
uint32_t hash_algo;
@@ -114,10 +113,36 @@ typedef struct VhostUserCryptoSessionParam {
uint8_t dir;
uint8_t hash_mode;
uint8_t chaining_dir;
- uint8_t *ciphe_key;
+ uint8_t *cipher_key;
uint8_t *auth_key;
uint8_t cipher_key_buf[VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH];
uint8_t auth_key_buf[VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH];
+} VhostUserCryptoSymSessionParam;
+
+
+typedef struct VhostUserCryptoAsymRsaParam {
+ uint32_t padding_algo;
+ uint32_t hash_algo;
+} VhostUserCryptoAsymRsaParam;
+
+typedef struct VhostUserCryptoAsymSessionParam {
+ uint32_t algo;
+ uint32_t key_type;
+ uint32_t key_len;
+ uint8_t *key;
+ union {
+ VhostUserCryptoAsymRsaParam rsa;
+ } u;
+ uint8_t key_buf[VHOST_USER_CRYPTO_MAX_KEY_LENGTH];
+} VhostUserCryptoAsymSessionParam;
+
+typedef struct VhostUserCryptoSessionParam {
+ uint32_t op_code;
+ union {
+ VhostUserCryptoSymSessionParam sym_sess;
+ VhostUserCryptoAsymSessionParam asym_sess;
+ } u;
+ int64_t session_id;
} VhostUserCryptoSessionParam;
typedef struct VhostUserVringArea {
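A quick way to see the benefit of the relocated session_id: as the last
member, it sits at a fixed offset from the end of the payload even if
QEMU later grows the union. A minimal compile-time check (a sketch
only, assuming the patched lib/vhost/vhost_user.h is on the include
path):
  #include <stddef.h>
  #include <stdint.h>
  #include "vhost_user.h"

  /* session_id must remain the trailing field of the session payload */
  _Static_assert(offsetof(VhostUserCryptoSessionParam, session_id) +
          sizeof(int64_t) == sizeof(VhostUserCryptoSessionParam),
          "session_id is not at the tail of VhostUserCryptoSessionParam");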
--
2.25.1
* [v4 3/5] examples/vhost_crypto: fix user callbacks
2025-02-22 8:38 ` [v4 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 1/5] vhost: skip crypto op fetch before vring init Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 2/5] vhost: update vhost_user crypto session parameters Gowrishankar Muthukrishnan
@ 2025-02-22 8:38 ` Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 4/5] vhost: support asymmetric RSA crypto ops Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 5/5] examples/vhost_crypto: support asymmetric crypto Gowrishankar Muthukrishnan
4 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-22 8:38 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia, Fan Zhang, Jay Zhou
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan, stable
In order to handle new vhost-user connections, use the new_connection
and destroy_connection callbacks instead of new_device and
destroy_device.
Fixes: f5188211c721 ("examples/vhost_crypto: add sample application")
Cc: stable@dpdk.org
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
examples/vhost_crypto/main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index 558c09a60f..b1fe4120b9 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -362,8 +362,8 @@ destroy_device(int vid)
}
static const struct rte_vhost_device_ops virtio_crypto_device_ops = {
- .new_device = new_device,
- .destroy_device = destroy_device,
+ .new_connection = new_device,
+ .destroy_connection = destroy_device,
};
static int
--
2.25.1
* [v4 4/5] vhost: support asymmetric RSA crypto ops
2025-02-22 8:38 ` [v4 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
` (2 preceding siblings ...)
2025-02-22 8:38 ` [v4 3/5] examples/vhost_crypto: fix user callbacks Gowrishankar Muthukrishnan
@ 2025-02-22 8:38 ` Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 5/5] examples/vhost_crypto: support asymmetric crypto Gowrishankar Muthukrishnan
4 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-22 8:38 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
Support asymmetric RSA crypto operations in vhost-user.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
v4:
- fixed CI issue.
---
lib/vhost/vhost_crypto.c | 486 +++++++++++++++++++++++++++++++++++---
lib/vhost/virtio_crypto.h | 67 ++++++
2 files changed, 518 insertions(+), 35 deletions(-)
diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index 05f3c85884..ba577605c2 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -54,6 +54,14 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO);
*/
#define vhost_crypto_desc vring_desc
+struct vhost_crypto_session {
+ union {
+ struct rte_cryptodev_asym_session *asym;
+ struct rte_cryptodev_sym_session *sym;
+ };
+ enum rte_crypto_op_type type;
+};
+
static int
cipher_algo_transform(uint32_t virtio_cipher_algo,
enum rte_crypto_cipher_algorithm *algo)
@@ -206,8 +214,10 @@ struct __rte_cache_aligned vhost_crypto {
uint64_t last_session_id;
- uint64_t cache_session_id;
- struct rte_cryptodev_sym_session *cache_session;
+ uint64_t cache_sym_session_id;
+ struct rte_cryptodev_sym_session *cache_sym_session;
+ uint64_t cache_asym_session_id;
+ struct rte_cryptodev_asym_session *cache_asym_session;
/** socket id for the device */
int socket_id;
@@ -334,10 +344,11 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms,
}
static void
-vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+vhost_crypto_create_sym_sess(struct vhost_crypto *vcrypto,
VhostUserCryptoSessionParam *sess_param)
{
struct rte_crypto_sym_xform xform1 = {0}, xform2 = {0};
+ struct vhost_crypto_session *vhost_session;
struct rte_cryptodev_sym_session *session;
int ret;
@@ -384,42 +395,277 @@ vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
return;
}
- /* insert hash to map */
- if (rte_hash_add_key_data(vcrypto->session_map,
- &vcrypto->last_session_id, session) < 0) {
+ vhost_session = rte_zmalloc(NULL, sizeof(*vhost_session), 0);
+ if (vhost_session == NULL) {
+ VC_LOG_ERR("Failed to alloc session memory");
+ goto error_exit;
+ }
+
+ vhost_session->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ vhost_session->sym = session;
+
+ /* insert session to map */
+ if ((rte_hash_add_key_data(vcrypto->session_map,
+ &vcrypto->last_session_id, vhost_session) < 0)) {
VC_LOG_ERR("Failed to insert session to hash table");
+ goto error_exit;
+ }
+
+ VC_LOG_INFO("Session %"PRIu64" created for vdev %i.",
+ vcrypto->last_session_id, vcrypto->dev->vid);
+
+ sess_param->session_id = vcrypto->last_session_id;
+ vcrypto->last_session_id++;
+ return;
+
+error_exit:
+ if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0)
+ VC_LOG_ERR("Failed to free session");
+
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ rte_free(vhost_session);
+}
+
+static int
+tlv_decode(uint8_t *tlv, uint8_t type, uint8_t **data, size_t *data_len)
+{
+ size_t tlen = -EINVAL, len;
+
+ if (tlv[0] != type)
+ return -EINVAL;
- if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0)
- VC_LOG_ERR("Failed to free session");
+ if (tlv[1] == 0x82) {
+ len = (tlv[2] << 8) | tlv[3];
+ *data = &tlv[4];
+ tlen = len + 4;
+ } else if (tlv[1] == 0x81) {
+ len = tlv[2];
+ *data = &tlv[3];
+ tlen = len + 3;
+ } else {
+ len = tlv[1];
+ *data = &tlv[2];
+ tlen = len + 2;
+ }
+
+ *data_len = len;
+ return tlen;
+}
+
+static int
+virtio_crypto_asym_rsa_der_to_xform(uint8_t *der, size_t der_len,
+ struct rte_crypto_asym_xform *xform)
+{
+ uint8_t *n = NULL, *e = NULL, *d = NULL, *p = NULL, *q = NULL, *dp = NULL,
+ *dq = NULL, *qinv = NULL, *v = NULL, *tlv;
+ size_t nlen, elen, dlen, plen, qlen, dplen, dqlen, qinvlen, vlen;
+ int len;
+
+ RTE_SET_USED(der_len);
+
+ if (der[0] != 0x30)
+ return -EINVAL;
+
+ if (der[1] == 0x82)
+ tlv = &der[4];
+ else if (der[1] == 0x81)
+ tlv = &der[3];
+ else
+ return -EINVAL;
+
+ len = tlv_decode(tlv, 0x02, &v, &vlen);
+ if (len < 0 || v[0] != 0x0 || vlen != 1)
+ return -EINVAL;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &n, &nlen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &e, &elen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &d, &dlen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &p, &plen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &q, &qlen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dp, &dplen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &dq, &dqlen);
+ if (len < 0)
+ return len;
+
+ tlv = tlv + len;
+ len = tlv_decode(tlv, 0x02, &qinv, &qinvlen);
+ if (len < 0)
+ return len;
+
+ xform->rsa.n.data = n;
+ xform->rsa.n.length = nlen;
+ xform->rsa.e.data = e;
+ xform->rsa.e.length = elen;
+ xform->rsa.d.data = d;
+ xform->rsa.d.length = dlen;
+ xform->rsa.qt.p.data = p;
+ xform->rsa.qt.p.length = plen;
+ xform->rsa.qt.q.data = q;
+ xform->rsa.qt.q.length = qlen;
+ xform->rsa.qt.dP.data = dp;
+ xform->rsa.qt.dP.length = dplen;
+ xform->rsa.qt.dQ.data = dq;
+ xform->rsa.qt.dQ.length = dqlen;
+ xform->rsa.qt.qInv.data = qinv;
+ xform->rsa.qt.qInv.length = qinvlen;
+
+ RTE_ASSERT((tlv + len - &der[0]) == der_len);
+ return 0;
+}
+
+static int
+rsa_param_transform(struct rte_crypto_asym_xform *xform,
+ VhostUserCryptoAsymSessionParam *param)
+{
+ int ret;
+
+ ret = virtio_crypto_asym_rsa_der_to_xform(param->key_buf, param->key_len, xform);
+ if (ret < 0)
+ return ret;
+
+ switch (param->u.rsa.padding_algo) {
+ case VIRTIO_CRYPTO_RSA_RAW_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_NONE;
+ break;
+ case VIRTIO_CRYPTO_RSA_PKCS1_PADDING:
+ xform->rsa.padding.type = RTE_CRYPTO_RSA_PADDING_PKCS1_5;
+ break;
+ default:
+ VC_LOG_ERR("Unknown padding type");
+ return -EINVAL;
+ }
+
+ xform->rsa.key_type = RTE_RSA_KEY_TYPE_QT;
+ xform->xform_type = RTE_CRYPTO_ASYM_XFORM_RSA;
+ return 0;
+}
+
+static void
+vhost_crypto_create_asym_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ struct rte_cryptodev_asym_session *session = NULL;
+ struct vhost_crypto_session *vhost_session;
+ struct rte_crypto_asym_xform xform = {0};
+ int ret;
+
+ switch (sess_param->u.asym_sess.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ ret = rsa_param_transform(&xform, &sess_param->u.asym_sess);
+ if (unlikely(ret < 0)) {
+ VC_LOG_ERR("Error transform session msg (%i)", ret);
+ sess_param->session_id = ret;
+ return;
+ }
+ break;
+ default:
+ VC_LOG_ERR("Invalid op algo");
sess_param->session_id = -VIRTIO_CRYPTO_ERR;
return;
}
+ ret = rte_cryptodev_asym_session_create(vcrypto->cid, &xform,
+ vcrypto->sess_pool, (void *)&session);
+ if (session == NULL) {
+ VC_LOG_ERR("Failed to create session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ return;
+ }
+
+ vhost_session = rte_zmalloc(NULL, sizeof(*vhost_session), 0);
+ if (vhost_session == NULL) {
+ VC_LOG_ERR("Failed to alloc session memory");
+ goto error_exit;
+ }
+
+ vhost_session->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ vhost_session->asym = session;
+
+ /* insert session to map */
+ if ((rte_hash_add_key_data(vcrypto->session_map,
+ &vcrypto->last_session_id, vhost_session) < 0)) {
+ VC_LOG_ERR("Failed to insert session to hash table");
+ goto error_exit;
+ }
+
VC_LOG_INFO("Session %"PRIu64" created for vdev %i.",
vcrypto->last_session_id, vcrypto->dev->vid);
sess_param->session_id = vcrypto->last_session_id;
vcrypto->last_session_id++;
+ return;
+
+error_exit:
+ if (rte_cryptodev_asym_session_free(vcrypto->cid, session) < 0)
+ VC_LOG_ERR("Failed to free session");
+ sess_param->session_id = -VIRTIO_CRYPTO_ERR;
+ rte_free(vhost_session);
+}
+
+static void
+vhost_crypto_create_sess(struct vhost_crypto *vcrypto,
+ VhostUserCryptoSessionParam *sess_param)
+{
+ if (sess_param->op_code == VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION)
+ vhost_crypto_create_asym_sess(vcrypto, sess_param);
+ else
+ vhost_crypto_create_sym_sess(vcrypto, sess_param);
}
static int
vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
{
- struct rte_cryptodev_sym_session *session;
+ struct vhost_crypto_session *vhost_session = NULL;
uint64_t sess_id = session_id;
int ret;
ret = rte_hash_lookup_data(vcrypto->session_map, &sess_id,
- (void **)&session);
-
+ (void **)&vhost_session);
if (unlikely(ret < 0)) {
- VC_LOG_ERR("Failed to delete session %"PRIu64".", session_id);
+ VC_LOG_ERR("Failed to find session for id %"PRIu64".", session_id);
return -VIRTIO_CRYPTO_INVSESS;
}
- if (rte_cryptodev_sym_session_free(vcrypto->cid, session) < 0) {
- VC_LOG_DBG("Failed to free session");
- return -VIRTIO_CRYPTO_ERR;
+ if (vhost_session->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ if (rte_cryptodev_sym_session_free(vcrypto->cid,
+ vhost_session->sym) < 0) {
+ VC_LOG_DBG("Failed to free session");
+ return -VIRTIO_CRYPTO_ERR;
+ }
+ } else if (vhost_session->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ if (rte_cryptodev_asym_session_free(vcrypto->cid,
+ vhost_session->asym) < 0) {
+ VC_LOG_DBG("Failed to free session");
+ return -VIRTIO_CRYPTO_ERR;
+ }
+ } else {
+ VC_LOG_ERR("Invalid session for id %"PRIu64".", session_id);
+ return -VIRTIO_CRYPTO_INVSESS;
}
if (rte_hash_del_key(vcrypto->session_map, &sess_id) < 0) {
@@ -430,6 +676,7 @@ vhost_crypto_close_sess(struct vhost_crypto *vcrypto, uint64_t session_id)
VC_LOG_INFO("Session %"PRIu64" deleted for vdev %i.", sess_id,
vcrypto->dev->vid);
+ rte_free(vhost_session);
return 0;
}
@@ -1123,6 +1370,109 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
return ret;
}
+static __rte_always_inline uint8_t
+prepare_asym_rsa_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
+ struct vhost_crypto_data_req *vc_req,
+ struct virtio_crypto_op_data_req *req,
+ struct vhost_crypto_desc *head,
+ uint32_t max_n_descs)
+ __rte_requires_shared_capability(&vc_req->vq->iotlb_lock)
+{
+ struct rte_crypto_rsa_op_param *rsa = &op->asym->rsa;
+ struct vhost_crypto_desc *desc = head;
+ uint8_t ret = VIRTIO_CRYPTO_ERR;
+ uint16_t wlen = 0;
+
+ /* prepare */
+ switch (vcrypto->option) {
+ case RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE:
+ vc_req->wb_pool = vcrypto->wb_pool;
+ if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_SIGN) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_SIGN;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->sign.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->sign.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->sign.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_VERIFY;
+ rsa->sign.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->sign.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_ENCRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.src_data_len;
+ rsa->cipher.length = req->u.akcipher_req.para.dst_data_len;
+ wlen = rsa->cipher.length;
+ desc = find_write_desc(head, desc, max_n_descs);
+ if (unlikely(!desc)) {
+ VC_LOG_ERR("Cannot find write location");
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RW);
+ if (unlikely(rsa->cipher.data == NULL)) {
+ ret = VIRTIO_CRYPTO_ERR;
+ goto error_exit;
+ }
+
+ desc += 1;
+ } else if (req->header.opcode == VIRTIO_CRYPTO_AKCIPHER_DECRYPT) {
+ rsa->op_type = RTE_CRYPTO_ASYM_OP_DECRYPT;
+ rsa->cipher.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->cipher.length = req->u.akcipher_req.para.src_data_len;
+ desc += 1;
+ rsa->message.data = get_data_ptr(vc_req, desc, VHOST_ACCESS_RO);
+ rsa->message.length = req->u.akcipher_req.para.dst_data_len;
+ desc += 1;
+ } else {
+ goto error_exit;
+ }
+ break;
+ case RTE_VHOST_CRYPTO_ZERO_COPY_ENABLE:
+ default:
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+ vc_req->inhdr = get_data_ptr(vc_req, desc, VHOST_ACCESS_WO);
+ if (unlikely(vc_req->inhdr == NULL)) {
+ ret = VIRTIO_CRYPTO_BADMSG;
+ goto error_exit;
+ }
+
+ vc_req->inhdr->status = VIRTIO_CRYPTO_OK;
+ vc_req->len = wlen + INHDR_LEN;
+ return 0;
+error_exit:
+ if (vc_req->wb)
+ free_wb_data(vc_req->wb, vc_req->wb_pool);
+
+ vc_req->len = INHDR_LEN;
+ return ret;
+}
+
/**
* Process on descriptor
*/
@@ -1133,17 +1483,21 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
uint16_t desc_idx)
__rte_no_thread_safety_analysis /* FIXME: requires iotlb_lock? */
{
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(op->sym->m_src);
- struct rte_cryptodev_sym_session *session;
+ struct vhost_crypto_data_req *vc_req, *vc_req_out;
+ struct rte_cryptodev_asym_session *asym_session;
+ struct rte_cryptodev_sym_session *sym_session;
+ struct vhost_crypto_session *vhost_session;
+ struct vhost_crypto_desc *desc = descs;
+ uint32_t nb_descs = 0, max_n_descs, i;
+ struct vhost_crypto_data_req data_req;
struct virtio_crypto_op_data_req req;
struct virtio_crypto_inhdr *inhdr;
- struct vhost_crypto_desc *desc = descs;
struct vring_desc *src_desc;
uint64_t session_id;
uint64_t dlen;
- uint32_t nb_descs = 0, max_n_descs, i;
int err;
+ vc_req = &data_req;
vc_req->desc_idx = desc_idx;
vc_req->dev = vcrypto->dev;
vc_req->vq = vq;
@@ -1226,12 +1580,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
switch (req.header.opcode) {
case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
case VIRTIO_CRYPTO_CIPHER_DECRYPT:
+ vc_req_out = rte_mbuf_to_priv(op->sym->m_src);
+ memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
session_id = req.header.session_id;
/* one branch to avoid unnecessary table lookup */
- if (vcrypto->cache_session_id != session_id) {
+ if (vcrypto->cache_sym_session_id != session_id) {
err = rte_hash_lookup_data(vcrypto->session_map,
- &session_id, (void **)&session);
+ &session_id, (void **)&vhost_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to find session %"PRIu64,
@@ -1239,13 +1595,14 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
goto error_exit;
}
- vcrypto->cache_session = session;
- vcrypto->cache_session_id = session_id;
+ vcrypto->cache_sym_session = vhost_session->sym;
+ vcrypto->cache_sym_session_id = session_id;
}
- session = vcrypto->cache_session;
+ sym_session = vcrypto->cache_sym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
- err = rte_crypto_op_attach_sym_session(op, session);
+ err = rte_crypto_op_attach_sym_session(op, sym_session);
if (unlikely(err < 0)) {
err = VIRTIO_CRYPTO_ERR;
VC_LOG_ERR("Failed to attach session to op");
@@ -1257,12 +1614,12 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
err = VIRTIO_CRYPTO_NOTSUPP;
break;
case VIRTIO_CRYPTO_SYM_OP_CIPHER:
- err = prepare_sym_cipher_op(vcrypto, op, vc_req,
+ err = prepare_sym_cipher_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.cipher, desc,
max_n_descs);
break;
case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
- err = prepare_sym_chain_op(vcrypto, op, vc_req,
+ err = prepare_sym_chain_op(vcrypto, op, vc_req_out,
&req.u.sym_req.u.chain, desc,
max_n_descs);
break;
@@ -1271,6 +1628,53 @@ vhost_crypto_process_one_req(struct vhost_crypto *vcrypto,
VC_LOG_ERR("Failed to process sym request");
goto error_exit;
}
+ break;
+ case VIRTIO_CRYPTO_AKCIPHER_SIGN:
+ case VIRTIO_CRYPTO_AKCIPHER_VERIFY:
+ case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
+ case VIRTIO_CRYPTO_AKCIPHER_DECRYPT:
+ session_id = req.header.session_id;
+
+ /* one branch to avoid unnecessary table lookup */
+ if (vcrypto->cache_asym_session_id != session_id) {
+ err = rte_hash_lookup_data(vcrypto->session_map,
+ &session_id, (void **)&vhost_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to find asym session %"PRIu64,
+ session_id);
+ goto error_exit;
+ }
+
+ vcrypto->cache_asym_session = vhost_session->asym;
+ vcrypto->cache_asym_session_id = session_id;
+ }
+
+ asym_session = vcrypto->cache_asym_session;
+ op->type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
+ err = rte_crypto_op_attach_asym_session(op, asym_session);
+ if (unlikely(err < 0)) {
+ err = VIRTIO_CRYPTO_ERR;
+ VC_LOG_ERR("Failed to attach asym session to op");
+ goto error_exit;
+ }
+
+ vc_req_out = rte_cryptodev_asym_session_get_user_data(asym_session);
+ rte_memcpy(vc_req_out, vc_req, sizeof(struct vhost_crypto_data_req));
+ vc_req_out->wb = NULL;
+
+ switch (req.header.algo) {
+ case VIRTIO_CRYPTO_AKCIPHER_RSA:
+ err = prepare_asym_rsa_op(vcrypto, op, vc_req_out,
+ &req, desc, max_n_descs);
+ break;
+ }
+ if (unlikely(err != 0)) {
+ VC_LOG_ERR("Failed to process asym request");
+ goto error_exit;
+ }
+
break;
default:
err = VIRTIO_CRYPTO_ERR;
@@ -1294,12 +1698,22 @@ static __rte_always_inline struct vhost_virtqueue *
vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
struct vhost_virtqueue *old_vq)
{
- struct rte_mbuf *m_src = op->sym->m_src;
- struct rte_mbuf *m_dst = op->sym->m_dst;
- struct vhost_crypto_data_req *vc_req = rte_mbuf_to_priv(m_src);
+ struct rte_mbuf *m_src = NULL, *m_dst = NULL;
+ struct vhost_crypto_data_req *vc_req;
struct vhost_virtqueue *vq;
uint16_t used_idx, desc_idx;
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ m_src = op->sym->m_src;
+ m_dst = op->sym->m_dst;
+ vc_req = rte_mbuf_to_priv(m_src);
+ } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
+ vc_req = rte_cryptodev_asym_session_get_user_data(op->asym->session);
+ } else {
+ VC_LOG_ERR("Invalid crypto op type");
+ return NULL;
+ }
+
if (unlikely(!vc_req)) {
VC_LOG_ERR("Failed to retrieve vc_req");
return NULL;
@@ -1321,10 +1735,11 @@ vhost_crypto_finalize_one_request(struct rte_crypto_op *op,
vq->used->ring[desc_idx].id = vq->avail->ring[desc_idx];
vq->used->ring[desc_idx].len = vc_req->len;
- rte_mempool_put(m_src->pool, (void *)m_src);
-
- if (m_dst)
- rte_mempool_put(m_dst->pool, (void *)m_dst);
+ if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
+ rte_mempool_put(m_src->pool, (void *)m_src);
+ if (m_dst)
+ rte_mempool_put(m_dst->pool, (void *)m_dst);
+ }
return vc_req->vq;
}
@@ -1407,7 +1822,8 @@ rte_vhost_crypto_create(int vid, uint8_t cryptodev_id,
vcrypto->sess_pool = sess_pool;
vcrypto->cid = cryptodev_id;
- vcrypto->cache_session_id = UINT64_MAX;
+ vcrypto->cache_sym_session_id = UINT64_MAX;
+ vcrypto->cache_asym_session_id = UINT64_MAX;
vcrypto->last_session_id = 1;
vcrypto->dev = dev;
vcrypto->option = RTE_VHOST_CRYPTO_ZERO_COPY_DISABLE;
diff --git a/lib/vhost/virtio_crypto.h b/lib/vhost/virtio_crypto.h
index 28877a5da3..23af171030 100644
--- a/lib/vhost/virtio_crypto.h
+++ b/lib/vhost/virtio_crypto.h
@@ -9,6 +9,7 @@
#define VIRTIO_CRYPTO_SERVICE_HASH 1
#define VIRTIO_CRYPTO_SERVICE_MAC 2
#define VIRTIO_CRYPTO_SERVICE_AEAD 3
+#define VIRTIO_CRYPTO_SERVICE_AKCIPHER 4
#define VIRTIO_CRYPTO_OPCODE(service, op) (((service) << 8) | (op))
@@ -29,6 +30,10 @@ struct virtio_crypto_ctrl_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x02)
#define VIRTIO_CRYPTO_AEAD_DESTROY_SESSION \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x03)
+#define VIRTIO_CRYPTO_AKCIPHER_CREATE_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x04)
+#define VIRTIO_CRYPTO_AKCIPHER_DESTROY_SESSION \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x05)
uint32_t opcode;
uint32_t algo;
uint32_t flag;
@@ -152,6 +157,45 @@ struct virtio_crypto_aead_create_session_req {
uint8_t padding[32];
};
+struct virtio_crypto_rsa_session_para {
+#define VIRTIO_CRYPTO_RSA_RAW_PADDING 0
+#define VIRTIO_CRYPTO_RSA_PKCS1_PADDING 1
+ uint32_t padding_algo;
+
+#define VIRTIO_CRYPTO_RSA_NO_HASH 0
+#define VIRTIO_CRYPTO_RSA_MD2 1
+#define VIRTIO_CRYPTO_RSA_MD3 2
+#define VIRTIO_CRYPTO_RSA_MD4 3
+#define VIRTIO_CRYPTO_RSA_MD5 4
+#define VIRTIO_CRYPTO_RSA_SHA1 5
+#define VIRTIO_CRYPTO_RSA_SHA256 6
+#define VIRTIO_CRYPTO_RSA_SHA384 7
+#define VIRTIO_CRYPTO_RSA_SHA512 8
+#define VIRTIO_CRYPTO_RSA_SHA224 9
+ uint32_t hash_algo;
+};
+
+struct virtio_crypto_akcipher_session_para {
+#define VIRTIO_CRYPTO_NO_AKCIPHER 0
+#define VIRTIO_CRYPTO_AKCIPHER_RSA 1
+#define VIRTIO_CRYPTO_AKCIPHER_DSA 2
+ uint32_t algo;
+
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PUBLIC 1
+#define VIRTIO_CRYPTO_AKCIPHER_KEY_TYPE_PRIVATE 2
+ uint32_t keytype;
+ uint32_t keylen;
+
+ union {
+ struct virtio_crypto_rsa_session_para rsa;
+ } u;
+};
+
+struct virtio_crypto_akcipher_create_session_req {
+ struct virtio_crypto_akcipher_session_para para;
+ uint8_t padding[36];
+};
+
struct virtio_crypto_alg_chain_session_para {
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER 1
#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH 2
@@ -219,6 +263,8 @@ struct virtio_crypto_op_ctrl_req {
mac_create_session;
struct virtio_crypto_aead_create_session_req
aead_create_session;
+ struct virtio_crypto_akcipher_create_session_req
+ akcipher_create_session;
struct virtio_crypto_destroy_session_req
destroy_session;
uint8_t padding[56];
@@ -238,6 +284,14 @@ struct virtio_crypto_op_header {
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x00)
#define VIRTIO_CRYPTO_AEAD_DECRYPT \
VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_ENCRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x00)
+#define VIRTIO_CRYPTO_AKCIPHER_DECRYPT \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x01)
+#define VIRTIO_CRYPTO_AKCIPHER_SIGN \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x02)
+#define VIRTIO_CRYPTO_AKCIPHER_VERIFY \
+ VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AKCIPHER, 0x03)
uint32_t opcode;
/* algo should be service-specific algorithms */
uint32_t algo;
@@ -362,6 +416,16 @@ struct virtio_crypto_aead_data_req {
uint8_t padding[32];
};
+struct virtio_crypto_akcipher_para {
+ uint32_t src_data_len;
+ uint32_t dst_data_len;
+};
+
+struct virtio_crypto_akcipher_data_req {
+ struct virtio_crypto_akcipher_para para;
+ uint8_t padding[40];
+};
+
/* The request of the data virtqueue's packet */
struct virtio_crypto_op_data_req {
struct virtio_crypto_op_header header;
@@ -371,6 +435,7 @@ struct virtio_crypto_op_data_req {
struct virtio_crypto_hash_data_req hash_req;
struct virtio_crypto_mac_data_req mac_req;
struct virtio_crypto_aead_data_req aead_req;
+ struct virtio_crypto_akcipher_data_req akcipher_req;
uint8_t padding[48];
} u;
};
@@ -380,6 +445,8 @@ struct virtio_crypto_op_data_req {
#define VIRTIO_CRYPTO_BADMSG 2
#define VIRTIO_CRYPTO_NOTSUPP 3
#define VIRTIO_CRYPTO_INVSESS 4 /* Invalid session id */
+#define VIRTIO_CRYPTO_NOSPC 5 /* no free session ID */
+#define VIRTIO_CRYPTO_KEY_REJECTED 6 /* Signature verification failed */
/* The accelerator hardware is ready */
#define VIRTIO_CRYPTO_S_HW_READY (1 << 0)
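For context on the tlv_decode() helper above: DER encodes each RSA key
component as an INTEGER (tag 0x02) whose length field is either short
form (a single byte below 0x80) or long form (0x81 followed by one
length byte, or 0x82 followed by two). Illustrative byte sequences
(values are examples only):
  02 03 01 00 01           INTEGER, short form: 3-byte value (e.g. e = 65537)
  02 81 80 <128 bytes>     INTEGER, long form, one length byte (e.g. a 1024-bit n)
  02 82 01 00 <256 bytes>  INTEGER, long form, two length bytes (e.g. a 2048-bit n)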
--
2.25.1
* [v4 5/5] examples/vhost_crypto: support asymmetric crypto
2025-02-22 8:38 ` [v4 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
` (3 preceding siblings ...)
2025-02-22 8:38 ` [v4 4/5] vhost: support asymmetric RSA crypto ops Gowrishankar Muthukrishnan
@ 2025-02-22 8:38 ` Gowrishankar Muthukrishnan
4 siblings, 0 replies; 58+ messages in thread
From: Gowrishankar Muthukrishnan @ 2025-02-22 8:38 UTC (permalink / raw)
To: dev, maxime.coquelin, Chenbo Xia
Cc: anoobj, Akhil Goyal, Gowrishankar Muthukrishnan
Support asymmetric crypto operations in the vhost_crypto example
application.
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
---
examples/vhost_crypto/main.c | 50 +++++++++++++++++++++++++++---------
1 file changed, 38 insertions(+), 12 deletions(-)
diff --git a/examples/vhost_crypto/main.c b/examples/vhost_crypto/main.c
index b1fe4120b9..8bdfc40c4b 100644
--- a/examples/vhost_crypto/main.c
+++ b/examples/vhost_crypto/main.c
@@ -59,6 +59,7 @@ struct vhost_crypto_options {
uint32_t nb_los;
uint32_t zero_copy;
uint32_t guest_polling;
+ bool asymmetric_crypto;
} options;
enum {
@@ -70,6 +71,8 @@ enum {
OPT_ZERO_COPY_NUM,
#define OPT_POLLING "guest-polling"
OPT_POLLING_NUM,
+#define OPT_ASYM "asymmetric-crypto"
+ OPT_ASYM_NUM,
};
#define NB_SOCKET_FIELDS (2)
@@ -202,9 +205,10 @@ vhost_crypto_usage(const char *prgname)
" --%s <lcore>,SOCKET-FILE-PATH\n"
" --%s (lcore,cdev_id,queue_id)[,(lcore,cdev_id,queue_id)]\n"
" --%s: zero copy\n"
- " --%s: guest polling\n",
+ " --%s: guest polling\n"
+ " --%s: asymmetric crypto\n",
prgname, OPT_SOCKET_FILE, OPT_CONFIG,
- OPT_ZERO_COPY, OPT_POLLING);
+ OPT_ZERO_COPY, OPT_POLLING, OPT_ASYM);
}
static int
@@ -223,6 +227,8 @@ vhost_crypto_parse_args(int argc, char **argv)
NULL, OPT_ZERO_COPY_NUM},
{OPT_POLLING, no_argument,
NULL, OPT_POLLING_NUM},
+ {OPT_ASYM, no_argument,
+ NULL, OPT_ASYM_NUM},
{NULL, 0, 0, 0}
};
@@ -262,6 +268,10 @@ vhost_crypto_parse_args(int argc, char **argv)
options.guest_polling = 1;
break;
+ case OPT_ASYM_NUM:
+ options.asymmetric_crypto = true;
+ break;
+
default:
vhost_crypto_usage(prgname);
return -EINVAL;
@@ -376,6 +386,7 @@ vhost_crypto_worker(void *arg)
int callfds[VIRTIO_CRYPTO_MAX_NUM_BURST_VQS];
uint32_t lcore_id = rte_lcore_id();
uint32_t burst_size = MAX_PKT_BURST;
+ enum rte_crypto_op_type cop_type;
uint32_t i, j, k;
uint32_t to_fetch, fetched;
@@ -383,9 +394,13 @@ vhost_crypto_worker(void *arg)
RTE_LOG(INFO, USER1, "Processing on Core %u started\n", lcore_id);
+ cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ if (options.asymmetric_crypto)
+ cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+
for (i = 0; i < NB_VIRTIO_QUEUES; i++) {
if (rte_crypto_op_bulk_alloc(info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, ops[i],
+ cop_type, ops[i],
burst_size) < burst_size) {
RTE_LOG(ERR, USER1, "Failed to alloc cops\n");
ret = -1;
@@ -411,12 +426,11 @@ vhost_crypto_worker(void *arg)
fetched);
if (unlikely(rte_crypto_op_bulk_alloc(
info->cop_pool,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ cop_type,
ops[j], fetched) < fetched)) {
RTE_LOG(ERR, USER1, "Failed realloc\n");
return -1;
}
-
fetched = rte_cryptodev_dequeue_burst(
info->cid, info->qid,
ops_deq[j], RTE_MIN(burst_size,
@@ -477,6 +491,7 @@ main(int argc, char *argv[])
struct rte_cryptodev_qp_conf qp_conf;
struct rte_cryptodev_config config;
struct rte_cryptodev_info dev_info;
+ enum rte_crypto_op_type cop_type;
char name[128];
uint32_t i, j, lcore;
int ret;
@@ -539,12 +554,21 @@ main(int argc, char *argv[])
goto error_exit;
}
- snprintf(name, 127, "SESS_POOL_%u", lo->lcore_id);
- info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
- SESSION_MAP_ENTRIES,
- rte_cryptodev_sym_get_private_session_size(
- info->cid), 0, 0,
- rte_lcore_to_socket_id(lo->lcore_id));
+ if (!options.asymmetric_crypto) {
+ snprintf(name, 127, "SYM_SESS_POOL_%u", lo->lcore_id);
+ info->sess_pool = rte_cryptodev_sym_session_pool_create(name,
+ SESSION_MAP_ENTRIES,
+ rte_cryptodev_sym_get_private_session_size(
+ info->cid), 0, 0,
+ rte_lcore_to_socket_id(lo->lcore_id));
+ cop_type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ } else {
+ snprintf(name, 127, "ASYM_SESS_POOL_%u", lo->lcore_id);
+ info->sess_pool = rte_cryptodev_asym_session_pool_create(name,
+ SESSION_MAP_ENTRIES, 0, 64,
+ rte_lcore_to_socket_id(lo->lcore_id));
+ cop_type = RTE_CRYPTO_OP_TYPE_ASYMMETRIC;
+ }
if (!info->sess_pool) {
RTE_LOG(ERR, USER1, "Failed to create mempool");
@@ -553,7 +577,7 @@ main(int argc, char *argv[])
snprintf(name, 127, "COPPOOL_%u", lo->lcore_id);
info->cop_pool = rte_crypto_op_pool_create(name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, NB_MEMPOOL_OBJS,
+ cop_type, NB_MEMPOOL_OBJS,
NB_CACHE_OBJS, VHOST_CRYPTO_MAX_IV_LEN,
rte_lcore_to_socket_id(lo->lcore_id));
@@ -567,6 +591,8 @@ main(int argc, char *argv[])
qp_conf.nb_descriptors = NB_CRYPTO_DESCRIPTORS;
qp_conf.mp_session = info->sess_pool;
+ if (options.asymmetric_crypto)
+ qp_conf.mp_session = NULL;
for (j = 0; j < dev_info.max_nb_queue_pairs; j++) {
ret = rte_cryptodev_queue_pair_setup(info->cid, j,
--
2.25.1
Thread overview: 58+ messages
2024-12-24 7:36 [v1 00/16] crypto/virtio: vDPA and asymmetric support Gowrishankar Muthukrishnan
2024-12-24 7:36 ` [v1 01/16] vhost: include AKCIPHER algorithms in crypto_config Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 02/16] crypto/virtio: remove redundant crypto queue free Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 03/16] crypto/virtio: add asymmetric RSA support Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 04/16] test/crypto: check for RSA capability Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 05/16] test/crypto: return proper codes in create session Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 06/16] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 07/16] vhost: add asymmetric RSA support Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 08/16] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 09/16] crypto/virtio: fix dataqueues iteration Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 10/16] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 11/16] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 12/16] common/virtio: common virtio log Gowrishankar Muthukrishnan
2024-12-24 8:14 ` David Marchand
2025-01-07 10:57 ` [EXTERNAL] " Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 13/16] common/virtio: move vDPA to common directory Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 14/16] common/virtio: support cryptodev in vdev setup Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 15/16] crypto/virtio: add vhost backend to virtio_user Gowrishankar Muthukrishnan
2024-12-24 7:37 ` [v1 16/16] test/crypto: test virtio_crypto_user PMD Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 0/2] crypto/virtio: add RSA support Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 1/2] crypto/virtio: add asymmetric " Gowrishankar Muthukrishnan
2025-01-07 17:52 ` [v2 2/2] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 0/6] crypto/virtio: enhancements for RSA and vDPA Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 1/6] crypto/virtio: add asymmetric RSA support Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 2/6] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 3/6] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 4/6] crypto/virtio: add vDPA backend Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 5/6] test/crypto: add asymmetric tests for virtio PMD Gowrishankar Muthukrishnan
2025-02-21 17:41 ` [v3 6/6] test/crypto: add tests for virtio user PMD Gowrishankar Muthukrishnan
2025-01-07 18:02 ` [v2 0/2] vhost: add RSA support Gowrishankar Muthukrishnan
2025-01-07 18:02 ` [v2 1/2] vhost: add asymmetric " Gowrishankar Muthukrishnan
2025-01-29 16:07 ` Maxime Coquelin
2025-01-07 18:02 ` [v2 2/2] examples/vhost_crypto: add asymmetric support Gowrishankar Muthukrishnan
2025-01-29 16:13 ` Maxime Coquelin
2025-01-30 9:29 ` [EXTERNAL] " Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 1/5] vhost: skip crypto op fetch before vring init Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 2/5] vhost: update vhost_user crypto session parameters Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 3/5] examples/vhost_crypto: fix user callbacks Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 4/5] vhost: support asymmetric RSA crypto ops Gowrishankar Muthukrishnan
2025-02-21 17:30 ` [v3 5/5] examples/vhost_crypto: support asymmetric crypto Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 0/5] vhost: add RSA support Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 1/5] vhost: skip crypto op fetch before vring init Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 2/5] vhost: update vhost_user crypto session parameters Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 3/5] examples/vhost_crypto: fix user callbacks Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 4/5] vhost: support asymmetric RSA crypto ops Gowrishankar Muthukrishnan
2025-02-22 8:38 ` [v4 5/5] examples/vhost_crypto: support asymmetric crypto Gowrishankar Muthukrishnan
2025-01-07 18:08 ` [v2 0/2] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
2025-01-07 18:08 ` [v2 1/2] crypto/virtio: refactor queue operations Gowrishankar Muthukrishnan
2025-01-07 18:08 ` [v2 2/2] crypto/virtio: add packed ring support Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 0/4] crypto/virtio: add vDPA backend support Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 1/4] common/virtio: move vDPA to common directory Gowrishankar Muthukrishnan
2025-02-06 9:40 ` Maxime Coquelin
2025-02-06 14:21 ` [EXTERNAL] " Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 2/4] common/virtio: support cryptodev in vdev setup Gowrishankar Muthukrishnan
2025-01-07 18:44 ` [v2 3/4] crypto/virtio: add vhost backend to virtio_user Gowrishankar Muthukrishnan
2025-02-06 13:14 ` Maxime Coquelin
2025-01-07 18:44 ` [v2 4/4] test/crypto: test virtio_crypto_user PMD Gowrishankar Muthukrishnan