DPDK patches and discussions
From: Anoob Joseph <anoobj@marvell.com>
To: David Coyle <david.coyle@intel.com>
Cc: "kai.ji@intel.com" <kai.ji@intel.com>,
	"kevin.osullivan@intel.com" <kevin.osullivan@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Subject: RE: [EXT] [PATCH v3 1/2] crypto/scheduler: support DOCSIS security protocol
Date: Mon, 18 Sep 2023 11:02:51 +0000	[thread overview]
Message-ID: <PH0PR18MB467256548A31E7BD7527A9C2DFFBA@PH0PR18MB4672.namprd18.prod.outlook.com> (raw)
In-Reply-To: <20230914152207.19794-2-david.coyle@intel.com>

Hi David,

Thanks for updating the patches based on the comments provided on the previous version. Please see inline for some comments on the code.

Thanks,
Anoob

> -----Original Message-----
> From: David Coyle <david.coyle@intel.com>
> Sent: Thursday, September 14, 2023 8:52 PM
> To: dev@dpdk.org
> Cc: kai.ji@intel.com; Anoob Joseph <anoobj@marvell.com>;
> kevin.osullivan@intel.com; David Coyle <david.coyle@intel.com>
> Subject: [EXT] [PATCH v3 1/2] crypto/scheduler: support DOCSIS security
> protocol
> 
> Add support to the cryptodev scheduler PMD for the DOCSIS security
> protocol. This includes adding the following to the scheduler:
> - synchronization of worker's security capabilities
> - retrieval of the scheduler's synchronized security capabilities
> - retrieval of the security session size i.e. maximum session size
>   across all workers
> - creation of security sessions on each worker
> - deletion of security sessions on each worker
> 
> Signed-off-by: David Coyle <david.coyle@intel.com>
> Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
> ---
>  doc/guides/rel_notes/release_23_11.rst        |   4 +
>  drivers/crypto/scheduler/meson.build          |   2 +-
>  .../scheduler/rte_cryptodev_scheduler.c       | 221 +++++++++-
>  drivers/crypto/scheduler/scheduler_failover.c |  12 +-
>  .../crypto/scheduler/scheduler_multicore.c    |  10 +-
>  .../scheduler/scheduler_pkt_size_distr.c      |  54 +--
>  drivers/crypto/scheduler/scheduler_pmd.c      |  33 ++
>  drivers/crypto/scheduler/scheduler_pmd_ops.c  | 381 +++++++++++++-----
> .../crypto/scheduler/scheduler_pmd_private.h  | 159 +++++---
>  .../crypto/scheduler/scheduler_roundrobin.c   |   6 +-
>  10 files changed, 653 insertions(+), 229 deletions(-)
> 

<snip>

> diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> index 258d6f8c43..e8b905af2f 100644
> --- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> +++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> @@ -5,11 +5,14 @@
>  #include <rte_reorder.h>
>  #include <rte_cryptodev.h>
>  #include <cryptodev_pmd.h>
> +#include <rte_security_driver.h>
>  #include <rte_malloc.h>
> 
>  #include "rte_cryptodev_scheduler.h"
>  #include "scheduler_pmd_private.h"
> 
> +#define MAX_CAPS 256
> +
>  /** update the scheduler pmd's capability with attaching device's
>   *  capability.
>   *  For each device to be attached, the scheduler's capability should be
> @@ -59,7 +62,6 @@ sync_caps(struct rte_cryptodev_capabilities *caps,
>  					cap->sym.auth.digest_size.max ?
>  					s_cap->sym.auth.digest_size.max :
>  					cap->sym.auth.digest_size.max;
> -
>  			}
> 
>  			if (s_cap->sym.xform_type ==
> @@ -81,25 +83,176 @@ sync_caps(struct rte_cryptodev_capabilities *caps,
> 
>  		memset(&caps[sync_nb_caps - 1], 0, sizeof(*cap));
>  		sync_nb_caps--;
> +		i--;
>  	}
> 
>  	return sync_nb_caps;
>  }
> 
>  static int
> -update_scheduler_capability(struct scheduler_ctx *sched_ctx)
> +check_sec_cap_equal(const struct rte_security_capability *sec_cap1,
> +		struct rte_security_capability *sec_cap2) {
> +	if (sec_cap1->action != sec_cap2->action ||
> +			sec_cap1->protocol != sec_cap2->protocol ||
> +			sec_cap1->ol_flags != sec_cap2->ol_flags)
> +		return 0;
> +
> +	if (sec_cap1->protocol == RTE_SECURITY_PROTOCOL_DOCSIS)
> +		return !memcmp(&sec_cap1->docsis, &sec_cap2->docsis,
> +				sizeof(sec_cap1->docsis));
> +	else
> +		return 0;
> +}
> +
> +static void
> +copy_sec_cap(struct rte_security_capability *dst_sec_cap,
> +		struct rte_security_capability *src_sec_cap) {
> +	dst_sec_cap->action = src_sec_cap->action;
> +	dst_sec_cap->protocol = src_sec_cap->protocol;
> +	if (src_sec_cap->protocol == RTE_SECURITY_PROTOCOL_DOCSIS)
> +		dst_sec_cap->docsis = src_sec_cap->docsis;
> +	dst_sec_cap->ol_flags = src_sec_cap->ol_flags; }
> +
> +static uint32_t
> +sync_sec_crypto_caps(struct rte_cryptodev_capabilities *tmp_sec_crypto_caps,
> +		const struct rte_cryptodev_capabilities *sec_crypto_caps,
> +		const struct rte_cryptodev_capabilities *worker_sec_crypto_caps)
> +{
> +	uint8_t nb_caps = 0;
> +
> +	nb_caps = sync_caps(tmp_sec_crypto_caps, nb_caps, sec_crypto_caps);
> +	sync_caps(tmp_sec_crypto_caps, nb_caps, worker_sec_crypto_caps);
> +
> +	return nb_caps;
> +}
> +
> +/** update the scheduler pmd's security capability with attaching device's
> + *  security capability.
> + *  For each device to be attached, the scheduler's security capability should
> + *  be the common capability set of all workers
> + **/
> +static uint32_t
> +sync_sec_caps(uint32_t worker_idx,
> +		struct rte_security_capability *sec_caps,
> +		struct rte_cryptodev_capabilities sec_crypto_caps[][MAX_CAPS],
> +		uint32_t nb_sec_caps,
> +		const struct rte_security_capability *worker_sec_caps)
>  {
> -	struct rte_cryptodev_capabilities tmp_caps[256] = { {0} };
> -	uint32_t nb_caps = 0, i;
> +	uint32_t nb_worker_sec_caps = 0, i;
> +
> +	if (worker_sec_caps == NULL)
> +		return 0;
> +
> +	while (worker_sec_caps[nb_worker_sec_caps].action !=
> +			RTE_SECURITY_ACTION_TYPE_NONE)
> +		nb_worker_sec_caps++;
> +
> +	/* Handle first worker */
> +	if (worker_idx == 0) {
> +		uint32_t nb_worker_sec_crypto_caps = 0;
> +		uint32_t nb_worker_supp_sec_caps = 0;
> +
> +		for (i = 0; i < nb_worker_sec_caps; i++) {
> +			/* Check for supported security protocols */
> +			if (!scheduler_check_sec_proto_supp(worker_sec_caps[i].action,
> +					worker_sec_caps[i].protocol))
> +				continue;
> 
> -	if (sched_ctx->capabilities) {
> -		rte_free(sched_ctx->capabilities);
> -		sched_ctx->capabilities = NULL;
> +			sec_caps[nb_worker_supp_sec_caps] = worker_sec_caps[i];
> +
> +			while (worker_sec_caps[i].crypto_capabilities[
> +					nb_worker_sec_crypto_caps].op !=
> +					RTE_CRYPTO_OP_TYPE_UNDEFINED)
> +				nb_worker_sec_crypto_caps++;
> +
> +			rte_memcpy(&sec_crypto_caps[nb_worker_supp_sec_caps][0],
> +				&worker_sec_caps[i].crypto_capabilities[0],

[Anoob] Isn't it possible to have two different security devices that differ in their crypto capabilities? My understanding is that the code assumes the crypto capabilities of both devices match. It's okay to document this as a known limitation if it is too difficult to solve; see the sketch below.
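
Something along the lines of the below (a rough, untested sketch; worker_crypto_caps, synced_crypto_caps and nb_caps are placeholder names, not from this patch) could at least detect the mismatch while attaching the non-first workers. Documenting the limitation would also be fine with me:

	/* Reject a worker whose per-security-capability crypto capabilities
	 * differ from the already synchronized set.
	 */
	if (memcmp(worker_crypto_caps, synced_crypto_caps,
			sizeof(*synced_crypto_caps) * nb_caps) != 0) {
		CR_SCHED_LOG(ERR, "Security crypto capabilities mismatch between workers");
		return -ENOTSUP;
	}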

> +				sizeof(sec_crypto_caps[nb_worker_supp_sec_caps][0]) *
> +					nb_worker_sec_crypto_caps);
> +
> +			nb_worker_supp_sec_caps++;
> +		}
> +		return nb_worker_supp_sec_caps;
>  	}
> 

<snip>

> diff --git a/drivers/crypto/scheduler/scheduler_pmd.c
> b/drivers/crypto/scheduler/scheduler_pmd.c
> index 4e8bbf0e09..6dad9bc3dd 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd.c
> +++ b/drivers/crypto/scheduler/scheduler_pmd.c
> @@ -8,6 +8,7 @@
>  #include <rte_hexdump.h>
>  #include <rte_cryptodev.h>
>  #include <cryptodev_pmd.h>
> +#include <rte_security_driver.h>
>  #include <bus_vdev_driver.h>
>  #include <rte_malloc.h>
>  #include <rte_cpuflags.h>
> @@ -233,6 +234,35 @@ cryptodev_scheduler_create(const char *name,
>  		return -ENOMEM;
>  	}
> 
> +	struct rte_security_ctx *security_instance;
> +	security_instance = rte_zmalloc_socket(NULL,
> +					sizeof(struct rte_security_ctx),
> +					RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
> +	if (security_instance == NULL) {
> +		CR_SCHED_LOG(ERR, "rte_security_ctx memory alloc failed");
> +		return -ENOMEM;

[Anoob] The lines above this add the regular cryptodev capabilities. Don't we need to free those as well in this error path?
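
For example (untested; assuming the allocation above is sched_ctx->capabilities, as in the existing create path):

	if (security_instance == NULL) {
		CR_SCHED_LOG(ERR, "rte_security_ctx memory alloc failed");
		/* free the capabilities allocated just above */
		rte_free(sched_ctx->capabilities);
		sched_ctx->capabilities = NULL;
		return -ENOMEM;
	}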

> +	}
> +
> +	security_instance->device = (void *)dev;
> +	security_instance->ops = rte_crypto_scheduler_pmd_sec_ops;
> +	security_instance->sess_cnt = 0;
> +	dev->security_ctx = security_instance;
> +
> +	/*
> +	 * Initialize security capabilities structure as an empty structure,
> +	 * in case device information is requested when no workers are attached
> +	 */
> +	sched_ctx->sec_capabilities = rte_zmalloc_socket(NULL,
> +					sizeof(struct rte_security_capability),
> +					0, SOCKET_ID_ANY);
> +
> +	if (!sched_ctx->sec_capabilities) {
> +		rte_free(security_instance);
> +		CR_SCHED_LOG(ERR, "Not enough memory for security capability "
> +				"information");
> +		return -ENOMEM;
> +	}
> +
>  	rte_cryptodev_pmd_probing_finish(dev);
> 
>  	return 0;
> @@ -263,6 +293,9 @@ cryptodev_scheduler_remove(struct rte_vdev_device *vdev)
>  					sched_ctx->workers[i].dev_id);
>  	}
> 
> +	rte_free(dev->security_ctx);
> +	dev->security_ctx = NULL;
> +
>  	return rte_cryptodev_pmd_destroy(dev);  }
> 
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> index 294aab4452..34d20ee2de 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> +++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> @@ -8,11 +8,212 @@
>  #include <dev_driver.h>
>  #include <rte_cryptodev.h>
>  #include <cryptodev_pmd.h>
> +#include <rte_security_driver.h>
>  #include <rte_reorder.h>
>  #include <rte_errno.h>
> 
>  #include "scheduler_pmd_private.h"
> 
> +struct scheduler_configured_sess_info {
> +	uint8_t dev_id;
> +	uint8_t driver_id;
> +	union {
> +		struct rte_cryptodev_sym_session *sess;
> +		struct {
> +			struct rte_security_session *sec_sess;
> +			struct rte_security_ctx *sec_ctx;
> +		};
> +	};
> +};
> +
> +static int
> +scheduler_session_create(void *sess, void *sess_params,
> +		struct scheduler_ctx *sched_ctx,
> +		enum rte_crypto_op_sess_type session_type) {
> +	struct rte_mempool *mp = rte_mempool_from_obj(sess);
> +	struct scheduler_session_ctx *sess_ctx;
> +	struct scheduler_configured_sess_info configured_sess[
> +			RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS] = {{0}};
> +	uint32_t i, j, n_configured_sess = 0;
> +	int ret = 0;
> +
> +	if (session_type == RTE_CRYPTO_OP_WITH_SESSION)
> +		sess_ctx = CRYPTODEV_GET_SYM_SESS_PRIV(
> +				(struct rte_cryptodev_sym_session *)sess);
> +	else
> +		sess_ctx = SECURITY_GET_SESS_PRIV(
> +				(struct rte_security_session *)sess);
> +
> +	if (mp == NULL)
> +		return -EINVAL;
> +
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		struct scheduler_worker *worker = &sched_ctx->workers[i];
> +		struct rte_cryptodev *dev = &rte_cryptodevs[worker->dev_id];
> +		uint8_t next_worker = 0;
> +
> +		for (j = 0; j < n_configured_sess; j++) {
> +			if (configured_sess[j].driver_id == worker->driver_id) {
> +				if (session_type == RTE_CRYPTO_OP_WITH_SESSION)
> +					sess_ctx->worker_sess[i] =
> +						configured_sess[j].sess;
> +				else
> +					sess_ctx->worker_sec_sess[i] =
> +						configured_sess[j].sec_sess;
> +
> +				next_worker = 1;
> +				break;
> +			}
> +		}
> +		if (next_worker)
> +			continue;
> +
> +		if (rte_mempool_avail_count(mp) == 0) {
> +			ret = -ENOMEM;
> +			goto error_exit;
> +		}
> +
> +		if (session_type == RTE_CRYPTO_OP_WITH_SESSION) {
> +			struct rte_cryptodev_sym_session *worker_sess =
> +				rte_cryptodev_sym_session_create(worker->dev_id,
> +						(struct rte_crypto_sym_xform *)

[Anoob] Is this cast required?
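
Since sess_params is a void *, I think it converts implicitly and the call could simply be (untested):

			worker_sess = rte_cryptodev_sym_session_create(worker->dev_id,
					sess_params, mp);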

> +						sess_params, mp);
> +
> +			if (worker_sess == NULL) {
> +				ret = -rte_errno;
> +				goto error_exit;
> +			}
> +
> +			worker_sess->opaque_data = (uint64_t)sess;
> +			sess_ctx->worker_sess[i] = worker_sess;
> +			configured_sess[n_configured_sess].sess = worker_sess;
> +		} else {
> +			struct rte_security_session *worker_sess =
> +				rte_security_session_create(dev->security_ctx,
> +					(struct rte_security_session_conf *)
> +					sess_params, mp);
> +
> +			if (worker_sess == NULL) {
> +				ret = -rte_errno;
> +				goto error_exit;
> +			}
> +
> +			worker_sess->opaque_data = (uint64_t)sess;
> +			sess_ctx->worker_sec_sess[i] = worker_sess;
> +			configured_sess[n_configured_sess].sec_sess =
> +							worker_sess;
> +			configured_sess[n_configured_sess].sec_ctx =
> +							dev->security_ctx;
> +		}
> +
> +		configured_sess[n_configured_sess].driver_id =
> +							worker->driver_id;
> +		configured_sess[n_configured_sess].dev_id = worker->dev_id;
> +		n_configured_sess++;
> +	}
> +
> +	return 0;
> +
> +error_exit:
> +	sess_ctx->ref_cnt = sched_ctx->ref_cnt;
> +	for (i = 0; i < n_configured_sess; i++) {
> +		if (session_type == RTE_CRYPTO_OP_WITH_SESSION)
> +			rte_cryptodev_sym_session_free(
> +						configured_sess[i].dev_id,
> +						configured_sess[i].sess);
> +		else
> +			rte_security_session_destroy(
> +						configured_sess[i].sec_ctx,
> +						configured_sess[i].sec_sess);
> +	}
> +
> +	return ret;
> +}
> +
> +static void
> +scheduler_session_destroy(void *sess, struct scheduler_ctx *sched_ctx,
> +		uint8_t session_type)
> +{
> +	struct scheduler_session_ctx *sess_ctx;
> +	struct scheduler_configured_sess_info deleted_sess[
> +			RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS] = {{0}};
> +	uint32_t i, j, n_deleted_sess = 0;
> +
> +	if (session_type == RTE_CRYPTO_OP_WITH_SESSION)
> +		sess_ctx = CRYPTODEV_GET_SYM_SESS_PRIV(
> +				(struct rte_cryptodev_sym_session *)sess);
> +	else
> +		sess_ctx = SECURITY_GET_SESS_PRIV(
> +				(struct rte_security_session *)sess);
> +
> +	if (sched_ctx->ref_cnt != sess_ctx->ref_cnt) {
> +		CR_SCHED_LOG(WARNING,
> +			"Worker updated between session creation/deletion. "
> +			"The session may not be freed fully.");
> +	}
> +
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		struct scheduler_worker *worker = &sched_ctx->workers[i];
> +		struct rte_cryptodev *dev = &rte_cryptodevs[worker->dev_id];
> +		uint8_t next_worker = 0;
> +
> +		for (j = 0; j < n_deleted_sess; j++) {
> +			if (deleted_sess[j].driver_id == worker->driver_id) {
> +				if (session_type == RTE_CRYPTO_OP_WITH_SESSION)
> +					sess_ctx->worker_sess[i] = NULL;
> +				else
> +					sess_ctx->worker_sec_sess[i] = NULL;
> +
> +				next_worker = 1;
> +				break;
> +			}
> +		}
> +		if (next_worker)
> +			continue;
> +
> +		if (session_type == RTE_CRYPTO_OP_WITH_SESSION) {
> +			rte_cryptodev_sym_session_free(worker->dev_id,
> +						sess_ctx->worker_sess[i]);
> +			sess_ctx->worker_sess[i] = NULL;
> +		} else {
> +			rte_security_session_destroy(dev->security_ctx,
> +						sess_ctx->worker_sec_sess[i]);
> +			sess_ctx->worker_sec_sess[i] = NULL;
> +		}
> +
> +		deleted_sess[n_deleted_sess++].driver_id = worker->driver_id;
> +	}
> +}
> +
> +static unsigned int
> +scheduler_session_size_get(struct scheduler_ctx *sched_ctx,
> +		uint8_t session_type)
> +{
> +	uint8_t i = 0;
> +	uint32_t max_priv_sess_size = 0;
> +
> +	/* Check what is the maximum private session size for all workers */
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> +		struct rte_cryptodev *dev = &rte_cryptodevs[worker_dev_id];
> +		struct rte_security_ctx *sec_ctx = dev->security_ctx;
> +		uint32_t priv_sess_size = 0;
> +
> +		if (session_type == RTE_CRYPTO_OP_WITH_SESSION) {
> +			priv_sess_size =
> +				(*dev->dev_ops->sym_session_get_size)(dev);
> +		} else {
> +			priv_sess_size = (*sec_ctx->ops->session_get_size)(dev);
> +		}
> +
> +		if (max_priv_sess_size < priv_sess_size)
> +			max_priv_sess_size = priv_sess_size;

[Anoob] Should we use RTE_MAX?
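
i.e. something like:

		max_priv_sess_size = RTE_MAX(max_priv_sess_size, priv_sess_size);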

> +	}
> +
> +	return max_priv_sess_size;
> +}
> +
>  /** attaching the workers predefined by scheduler's EAL options */
>  static int
>  scheduler_attach_init_worker(struct rte_cryptodev *dev)
> @@ -265,10 +466,7 @@ scheduler_pmd_close(struct rte_cryptodev *dev)
>  		sched_ctx->private_ctx = NULL;
>  	}
> 
> -	if (sched_ctx->capabilities) {
> -		rte_free(sched_ctx->capabilities);
> -		sched_ctx->capabilities = NULL;
> -	}
> +	scheduler_free_capabilities(sched_ctx);
> 
>  	return 0;
>  }
> @@ -451,92 +649,22 @@ scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  }
> 
>  static uint32_t
> -scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
> +scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev)
>  {
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> -	uint8_t i = 0;
> -	uint32_t max_priv_sess_size = 0;
> -
> -	/* Check what is the maximum private session size for all workers */
> -	for (i = 0; i < sched_ctx->nb_workers; i++) {
> -		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> -		struct rte_cryptodev *dev = &rte_cryptodevs[worker_dev_id];
> -		uint32_t priv_sess_size = (*dev->dev_ops->sym_session_get_size)(dev);
> 
> -		if (max_priv_sess_size < priv_sess_size)
> -			max_priv_sess_size = priv_sess_size;
> -	}
> -
> -	return max_priv_sess_size;
> +	return scheduler_session_size_get(sched_ctx, RTE_CRYPTO_OP_WITH_SESSION);
>  }
> 
> -struct scheduler_configured_sess_info {
> -	uint8_t dev_id;
> -	uint8_t driver_id;
> -	struct rte_cryptodev_sym_session *sess;
> -};
> -
>  static int
>  scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  	struct rte_crypto_sym_xform *xform,
>  	struct rte_cryptodev_sym_session *sess)  {
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> -	struct rte_mempool *mp = rte_mempool_from_obj(sess);
> -	struct scheduler_session_ctx *sess_ctx = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
> -	struct scheduler_configured_sess_info configured_sess[
> -			RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS] = {{0}};
> -	uint32_t i, j, n_configured_sess = 0;
> -	int ret = 0;
> -
> -	if (mp == NULL)
> -		return -EINVAL;
> 
> -	for (i = 0; i < sched_ctx->nb_workers; i++) {
> -		struct scheduler_worker *worker = &sched_ctx->workers[i];
> -		struct rte_cryptodev_sym_session *worker_sess;
> -		uint8_t next_worker = 0;
> -
> -		for (j = 0; j < n_configured_sess; j++) {
> -			if (configured_sess[j].driver_id ==
> -					worker->driver_id) {
> -				sess_ctx->worker_sess[i] =
> -					configured_sess[j].sess;
> -				next_worker = 1;
> -				break;
> -			}
> -		}
> -		if (next_worker)
> -			continue;
> -
> -		if (rte_mempool_avail_count(mp) == 0) {
> -			ret = -ENOMEM;
> -			goto error_exit;
> -		}
> -
> -		worker_sess = rte_cryptodev_sym_session_create(worker->dev_id,
> -			xform, mp);
> -		if (worker_sess == NULL) {
> -			ret = -rte_errno;
> -			goto error_exit;
> -		}
> -
> -		worker_sess->opaque_data = (uint64_t)sess;
> -		sess_ctx->worker_sess[i] = worker_sess;
> -		configured_sess[n_configured_sess].driver_id =
> -			worker->driver_id;
> -		configured_sess[n_configured_sess].dev_id = worker->dev_id;
> -		configured_sess[n_configured_sess].sess = worker_sess;
> -		n_configured_sess++;
> -	}
> -
> -	return 0;
> -error_exit:
> -	sess_ctx->ref_cnt = sched_ctx->ref_cnt;
> -	for (i = 0; i < n_configured_sess; i++)
> -		rte_cryptodev_sym_session_free(configured_sess[i].dev_id,
> -			configured_sess[i].sess);
> -	return ret;
> +	return scheduler_session_create((void *)sess, (void *)xform, sched_ctx,
> +				RTE_CRYPTO_OP_WITH_SESSION);
>  }
> 
>  /** Clear the memory of session so it doesn't leave key material behind */
> @@ -545,37 +673,9 @@ scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev,
>  		struct rte_cryptodev_sym_session *sess)  {
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
> -	struct scheduler_session_ctx *sess_ctx = CRYPTODEV_GET_SYM_SESS_PRIV(sess);
> -	struct scheduler_configured_sess_info deleted_sess[
> -			RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS] = {{0}};
> -	uint32_t i, j, n_deleted_sess = 0;
> -
> -	if (sched_ctx->ref_cnt != sess_ctx->ref_cnt) {
> -		CR_SCHED_LOG(WARNING,
> -			"Worker updated between session creation/deletion. "
> -			"The session may not be freed fully.");
> -	}
> -
> -	for (i = 0; i < sched_ctx->nb_workers; i++) {
> -		struct scheduler_worker *worker = &sched_ctx->workers[i];
> -		uint8_t next_worker = 0;
> 
> -		for (j = 0; j < n_deleted_sess; j++) {
> -			if (deleted_sess[j].driver_id == worker->driver_id) {
> -				sess_ctx->worker_sess[i] = NULL;
> -				next_worker = 1;
> -				break;
> -			}
> -		}
> -		if (next_worker)
> -			continue;
> -
> -		rte_cryptodev_sym_session_free(worker->dev_id,
> -			sess_ctx->worker_sess[i]);
> -
> -		deleted_sess[n_deleted_sess++].driver_id = worker->driver_id;
> -		sess_ctx->worker_sess[i] = NULL;
> -	}
> +	scheduler_session_destroy((void *)sess, sched_ctx,
> +				RTE_CRYPTO_OP_WITH_SESSION);
>  }
> 
>  static struct rte_cryptodev_ops scheduler_pmd_ops = {
> @@ -598,3 +698,68 @@ static struct rte_cryptodev_ops scheduler_pmd_ops = {
>  };
> 
>  struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
> +
> +/** Configure a scheduler session from a security session configuration */
> +static int
> +scheduler_pmd_sec_sess_create(void *dev, struct rte_security_session_conf *conf,
> +			struct rte_security_session *sess)
> +{
> +	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;

[Anoob] Is this cast required?
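
Since dev is already a void *, I think the assignment works without the explicit cast, e.g.:

	struct rte_cryptodev *cdev = dev;
	struct scheduler_ctx *sched_ctx = cdev->data->dev_private;

Same comment applies to the other security ops below.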

> +	struct scheduler_ctx *sched_ctx = cdev->data->dev_private;
> +
> +	/* Check for supported security protocols */
> +	if (!scheduler_check_sec_proto_supp(conf->action_type, conf->protocol)) {
> +		CR_SCHED_LOG(ERR, "Unsupported security protocol");
> +		return -ENOTSUP;
> +	}
> +
> +	return scheduler_session_create((void *)sess, (void *)conf, sched_ctx,
> +				RTE_CRYPTO_OP_SECURITY_SESSION);
> +}
> +
> +/** Clear the memory of session so it doesn't leave key material behind */
> +static int
> +scheduler_pmd_sec_sess_destroy(void *dev,
> +			       struct rte_security_session *sess)
> +{
> +	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;

[Anoob] Is this cast required?

> +	struct scheduler_ctx *sched_ctx = cdev->data->dev_private;
> +
> +	scheduler_session_destroy((void *)sess, sched_ctx,
> +				RTE_CRYPTO_OP_SECURITY_SESSION);
> +
> +	return 0;
> +}
> +
> +/** Get sync security capabilities for scheduler pmds */
> +static const struct rte_security_capability *
> +scheduler_pmd_sec_capa_get(void *dev)
> +{
> +	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;

[Anoob] Is this cast required?

> +	struct scheduler_ctx *sched_ctx = cdev->data->dev_private;
> +
> +	return sched_ctx->sec_capabilities;
> +}
> +
> +static unsigned int
> +scheduler_pmd_sec_sess_size_get(void *dev) {
> +	struct rte_cryptodev *cdev = (struct rte_cryptodev *)dev;

[Anoob] Is this cast required?

> +	struct scheduler_ctx *sched_ctx = cdev->data->dev_private;
> +
> +	return scheduler_session_size_get(sched_ctx,
> +				RTE_CRYPTO_OP_SECURITY_SESSION);
> +}
> +
> +static struct rte_security_ops scheduler_pmd_sec_ops = {
> +		.session_create = scheduler_pmd_sec_sess_create,
> +		.session_update = NULL,
> +		.session_get_size = scheduler_pmd_sec_sess_size_get,
> +		.session_stats_get = NULL,
> +		.session_destroy = scheduler_pmd_sec_sess_destroy,
> +		.set_pkt_metadata = NULL,
> +		.capabilities_get = scheduler_pmd_sec_capa_get
> +};
> +
> +struct rte_security_ops *rte_crypto_scheduler_pmd_sec_ops =
> +		&scheduler_pmd_sec_ops;
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h
> b/drivers/crypto/scheduler/scheduler_pmd_private.h
> index 36d0bb6307..ff1e7a83e8 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd_private.h
> +++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
> @@ -5,6 +5,8 @@
>  #ifndef _SCHEDULER_PMD_PRIVATE_H
>  #define _SCHEDULER_PMD_PRIVATE_H
> 
> +#include <rte_security_driver.h>
> +
>  #include "rte_cryptodev_scheduler.h"
> 
>  #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
> @@ -30,7 +32,8 @@ struct scheduler_ctx {
>  	/**< private scheduler context pointer */
> 
>  	struct rte_cryptodev_capabilities *capabilities;
> -	uint32_t nb_capabilities;
> +	struct rte_security_capability *sec_capabilities;
> +	struct rte_cryptodev_capabilities **sec_crypto_capabilities;
> 
>  	uint32_t max_nb_queue_pairs;
> 
> @@ -64,8 +67,12 @@ struct scheduler_qp_ctx {
> 
>  struct scheduler_session_ctx {
>  	uint32_t ref_cnt;
> -	struct rte_cryptodev_sym_session *worker_sess[
> -		RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS];
> +	union {
> +		struct rte_cryptodev_sym_session *worker_sess[
> +			RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS];
> +		struct rte_security_session *worker_sec_sess[
> +			RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS];
> +	};
>  };
> 
>  extern uint8_t cryptodev_scheduler_driver_id;
> @@ -108,7 +115,22 @@ scheduler_order_drain(struct rte_ring *order_ring,
>  }
> 
>  static __rte_always_inline void
> -scheduler_set_worker_session(struct rte_crypto_op **ops, uint16_t nb_ops,
> +scheduler_set_single_worker_session(struct rte_crypto_op *op,
> +		uint8_t worker_idx)
> +{
> +	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> +		struct scheduler_session_ctx *sess_ctx =
> +				CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session);
> +		op->sym->session = sess_ctx->worker_sess[worker_idx];
> +	} else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
> +		struct scheduler_session_ctx *sess_ctx =
> +				SECURITY_GET_SESS_PRIV(op->sym->session);
> +		op->sym->session = sess_ctx->worker_sec_sess[worker_idx];
> +	}
> +}
> +
> +static __rte_always_inline void
> +scheduler_set_worker_sessions(struct rte_crypto_op **ops, uint16_t nb_ops,
>  		uint8_t worker_index)
>  {
>  	struct rte_crypto_op **op = ops;
> @@ -129,52 +151,34 @@ scheduler_set_worker_session(struct rte_crypto_op **ops, uint16_t nb_ops,
>  			rte_prefetch0(op[7]->sym->session);
>  		}
> 
> -		if (op[0]->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> -			struct scheduler_session_ctx *sess_ctx =
> -				CRYPTODEV_GET_SYM_SESS_PRIV(op[0]->sym->session);
> -			op[0]->sym->session =
> -				sess_ctx->worker_sess[worker_index];
> -		}
> -
> -		if (op[1]->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> -			struct scheduler_session_ctx *sess_ctx =
> -				CRYPTODEV_GET_SYM_SESS_PRIV(op[1]->sym->session);
> -			op[1]->sym->session =
> -				sess_ctx->worker_sess[worker_index];
> -		}
> -
> -		if (op[2]->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> -			struct scheduler_session_ctx *sess_ctx =
> -				CRYPTODEV_GET_SYM_SESS_PRIV(op[2]->sym->session);
> -			op[2]->sym->session =
> -				sess_ctx->worker_sess[worker_index];
> -		}
> -
> -		if (op[3]->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> -			struct scheduler_session_ctx *sess_ctx =
> -				CRYPTODEV_GET_SYM_SESS_PRIV(op[3]->sym->session);
> -			op[3]->sym->session =
> -				sess_ctx->worker_sess[worker_index];
> -		}
> +		scheduler_set_single_worker_session(op[0], worker_index);
> +		scheduler_set_single_worker_session(op[1], worker_index);
> +		scheduler_set_single_worker_session(op[2], worker_index);
> +		scheduler_set_single_worker_session(op[3], worker_index);
> 
>  		op += 4;
>  		n -= 4;
>  	}
> 
>  	while (n--) {
> -		if (op[0]->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> -			struct scheduler_session_ctx *sess_ctx =
> -				CRYPTODEV_GET_SYM_SESS_PRIV(op[0]->sym->session);
> -
> -			op[0]->sym->session =
> -				sess_ctx->worker_sess[worker_index];
> -			op++;
> -		}
> +		scheduler_set_single_worker_session(op[0], worker_index);
> +		op++;
>  	}
>  }
> 
>  static __rte_always_inline void
> -scheduler_retrieve_session(struct rte_crypto_op **ops, uint16_t nb_ops)
> +scheduler_retrieve_single_session(struct rte_crypto_op *op) {
> +	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> +		op->sym->session = (void *)(uintptr_t)
> +			rte_cryptodev_sym_session_opaque_data_get(op->sym->session);
> +	else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
> +		op->sym->session = (void *)(uintptr_t)
> +			rte_security_session_opaque_data_get(op->sym->session);
> +}
> +
> +static __rte_always_inline void
> +scheduler_retrieve_sessions(struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
>  	uint16_t n = nb_ops;
>  	struct rte_crypto_op **op = ops;
> @@ -194,32 +198,77 @@ scheduler_retrieve_session(struct rte_crypto_op **ops, uint16_t nb_ops)
>  			rte_prefetch0(op[7]->sym->session);
>  		}
> 
> -		if (op[0]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> -			op[0]->sym->session = (void *)(uintptr_t)
> -				rte_cryptodev_sym_session_opaque_data_get(op[0]->sym->session);
> -		if (op[1]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> -			op[1]->sym->session = (void *)(uintptr_t)
> -				rte_cryptodev_sym_session_opaque_data_get(op[1]->sym->session);
> -		if (op[2]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> -			op[2]->sym->session = (void *)(uintptr_t)
> -				rte_cryptodev_sym_session_opaque_data_get(op[2]->sym->session);
> -		if (op[3]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> -			op[3]->sym->session = (void *)(uintptr_t)
> -				rte_cryptodev_sym_session_opaque_data_get(op[3]->sym->session);
> +		scheduler_retrieve_single_session(op[0]);
> +		scheduler_retrieve_single_session(op[1]);
> +		scheduler_retrieve_single_session(op[2]);
> +		scheduler_retrieve_single_session(op[3]);
> 
>  		op += 4;
>  		n -= 4;
>  	}
> 
>  	while (n--) {
> -		if (op[0]->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
> -			op[0]->sym->session = (void *)(uintptr_t)
> -				rte_cryptodev_sym_session_opaque_data_get(op[0]->sym->session);
> +		scheduler_retrieve_single_session(op[0]);
>  		op++;
>  	}
>  }
> 
> +static __rte_always_inline uint32_t
> +scheduler_get_job_len(struct rte_crypto_op *op) {
> +	uint32_t job_len;
> +
> +	/* op_len is initialized as cipher data length, if
> +	 * it is 0, then it is set to auth data length
> +	 */
> +	job_len = op->sym->cipher.data.length;
> +	job_len += (op->sym->cipher.data.length == 0) *
> +					op->sym->auth.data.length;
> +
> +	return job_len;
> +}
> +
> +static __rte_always_inline void
> +scheduler_free_capabilities(struct scheduler_ctx *sched_ctx) {
> +	uint32_t i;
> +
> +	if (sched_ctx->capabilities) {
> +		rte_free(sched_ctx->capabilities);
> +		sched_ctx->capabilities = NULL;
> +	}
> +
> +	if (sched_ctx->sec_crypto_capabilities) {
> +		i = 0;
> +		while (sched_ctx->sec_crypto_capabilities[i] != NULL) {
> +			rte_free(sched_ctx->sec_crypto_capabilities[i]);
> +			sched_ctx->sec_crypto_capabilities[i] = NULL;
> +			i++;
> +		}
> +
> +		rte_free(sched_ctx->sec_crypto_capabilities);
> +		sched_ctx->sec_crypto_capabilities = NULL;
> +	}
> +
> +	if (sched_ctx->sec_capabilities) {
> +		rte_free(sched_ctx->sec_capabilities);
> +		sched_ctx->sec_capabilities = NULL;
> +	}
> +}
> +
> +static __rte_always_inline int
> +scheduler_check_sec_proto_supp(enum rte_security_session_action_type action,
> +		enum rte_security_session_protocol protocol)
> +{
> +	if (action == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL &&
> +			protocol == RTE_SECURITY_PROTOCOL_DOCSIS)
> +		return 1;
> +
> +	return 0;
> +}
> +
>  /** device specific operations function pointer structure */
>  extern struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops;
> +extern struct rte_security_ops *rte_crypto_scheduler_pmd_sec_ops;
> 
>  #endif /* _SCHEDULER_PMD_PRIVATE_H */
> diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c
> b/drivers/crypto/scheduler/scheduler_roundrobin.c
> index ad3f8b842a..08041887a8 100644
> --- a/drivers/crypto/scheduler/scheduler_roundrobin.c
> +++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
> @@ -28,11 +28,11 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  	if (unlikely(nb_ops == 0))
>  		return 0;
> 
> -	scheduler_set_worker_session(ops, nb_ops, worker_idx);
> +	scheduler_set_worker_sessions(ops, nb_ops, worker_idx);
>  	processed_ops = rte_cryptodev_enqueue_burst(worker->dev_id,
>  			worker->qp_id, ops, nb_ops);
>  	if (processed_ops < nb_ops)
> -		scheduler_retrieve_session(ops + processed_ops,
> +		scheduler_retrieve_sessions(ops + processed_ops,
>  			nb_ops - processed_ops);
> 
>  	worker->nb_inflight_cops += processed_ops;
> @@ -87,7 +87,7 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
> 
>  	nb_deq_ops = rte_cryptodev_dequeue_burst(worker->dev_id,
>  			worker->qp_id, ops, nb_ops);
> -	scheduler_retrieve_session(ops, nb_deq_ops);
> +	scheduler_retrieve_sessions(ops, nb_deq_ops);
>  	last_worker_idx += 1;
>  	last_worker_idx %= rr_qp_ctx->nb_workers;
> 
> --
> 2.25.1



Thread overview: 21+ messages
2023-08-09 10:14 [PATCH 0/2] crypto/scheduler: add support for security protocols David Coyle
2023-08-09 10:14 ` [PATCH 1/2] crypto/scheduler: support " David Coyle
2023-08-09 10:14 ` [PATCH 2/2] test/crypto: add security tests for cryptodev scheduler David Coyle
2023-08-11 10:23 ` [PATCH v2 0/2] crypto/scheduler: add support for security protocols David Coyle
2023-08-11 10:24   ` [PATCH v2 1/2] crypto/scheduler: support " David Coyle
2023-08-11 10:24   ` [PATCH v2 2/2] test/crypto: add security tests for cryptodev scheduler David Coyle
2023-08-11 11:09   ` [EXT] [PATCH v2 0/2] crypto/scheduler: add support for security protocols Anoob Joseph
2023-09-11 16:02     ` Coyle, David
2023-09-13  7:02       ` Anoob Joseph
2023-09-14 15:22   ` [PATCH v3 0/2] crypto/scheduler: add support for DOCSIS security protocol David Coyle
2023-09-14 15:22     ` [PATCH v3 1/2] crypto/scheduler: support " David Coyle
2023-09-18 11:02       ` Anoob Joseph [this message]
2023-09-19 14:16         ` [EXT] " Coyle, David
2023-09-14 15:22     ` [PATCH v3 2/2] test/crypto: add DOCSIS security tests for cryptodev scheduler David Coyle
2023-09-15 10:18     ` [PATCH v3 0/2] crypto/scheduler: add support for DOCSIS security protocol Power, Ciara
2023-09-15 14:02     ` Power, Ciara
2023-09-19 14:14     ` [PATCH v4 " David Coyle
2023-09-19 14:14       ` [PATCH v4 1/2] crypto/scheduler: support " David Coyle
2023-09-19 14:14       ` [PATCH v4 2/2] test/crypto: add DOCSIS security tests for cryptodev scheduler David Coyle
2023-09-20  5:45       ` [EXT] [PATCH v4 0/2] crypto/scheduler: add support for DOCSIS security protocol Anoob Joseph
2023-09-21  7:18         ` Akhil Goyal
