DPDK patches and discussions
From: Akhil Goyal <gakhil@marvell.com>
To: Kevin O'Sullivan <kevin.osullivan@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "kai.ji@intel.com" <kai.ji@intel.com>,
	David Coyle <david.coyle@intel.com>
Subject: RE: [EXT] [PATCH v3 2/2] crypto/qat: add cipher-crc offload support
Date: Thu, 16 Mar 2023 19:15:24 +0000	[thread overview]
Message-ID: <CO6PR18MB448444BD09446B542413B376D8BC9@CO6PR18MB4484.namprd18.prod.outlook.com> (raw)
In-Reply-To: <20230313142603.234169-3-kevin.osullivan@intel.com>

> Subject: [EXT] [PATCH v3 2/2] crypto/qat: add cipher-crc offload support
> 
Please update the title to:
crypto/qat: support cipher-crc offload

> This patch adds support to the QAT symmetric crypto PMD for combined
> cipher-crc offload feature, primarily for DOCSIS, on gen2/gen3/gen4
> QAT devices.
> 
> A new parameter called qat_sym_cipher_crc_enable has been
> added to the PMD, which can be set on process start as follows:

Please reword this as "A new devarg called ...", since "devarg" is the uniform term across DPDK.

> 
> -a <qat pci bdf>,qat_sym_cipher_crc_enable=1
> 
> When enabled, a capability check for the combined cipher-crc offload
> feature is triggered to the QAT firmware during queue pair
> initialization. If supported by the firmware, any subsequent runtime
> DOCSIS cipher-crc requests handled by the QAT PMD are offloaded to the
> QAT device by setting up the content descriptor and request
> accordingly.
> 
> If the combined DOCSIS cipher-crc feature is not supported by the
> firmware, the CRC continues to be calculated within the PMD, with just
> the cipher portion of the request being offloaded to the QAT device.
> 
> Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
> Signed-off-by: David Coyle <david.coyle@intel.com>
> ---
> v3: updated the file qat.rst with details of new configuration
> ---
>  doc/guides/cryptodevs/qat.rst                |  23 +++
>  drivers/common/qat/qat_device.c              |  12 +-
>  drivers/common/qat/qat_device.h              |   3 +-
>  drivers/common/qat/qat_qp.c                  | 157 +++++++++++++++
>  drivers/common/qat/qat_qp.h                  |   5 +
>  drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c |   2 +-
>  drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  24 ++-
>  drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    |   4 +
>  drivers/crypto/qat/qat_crypto.c              |  22 ++-
>  drivers/crypto/qat/qat_crypto.h              |   1 +
>  drivers/crypto/qat/qat_sym.c                 |   4 +
>  drivers/crypto/qat/qat_sym.h                 |   7 +-
>  drivers/crypto/qat/qat_sym_session.c         | 196 ++++++++++++++++++-
>  drivers/crypto/qat/qat_sym_session.h         |  21 +-
>  14 files changed, 465 insertions(+), 16 deletions(-)
> 
> diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
> index ef754106a8..32e0d8a562 100644
> --- a/doc/guides/cryptodevs/qat.rst
> +++ b/doc/guides/cryptodevs/qat.rst
> @@ -294,6 +294,29 @@ by comma. When the same parameter is used more than once first occurrence of the
>  is used.
>  Maximum threshold that can be set is 32.
> 
> +
> +Running QAT PMD with Cipher-CRC offload feature
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Support has been added to the QAT symmetric crypto PMD for combined Cipher-CRC offload,
> +primarily for the Crypto-CRC DOCSIS security protocol, on GEN2/GEN3/GEN4 QAT devices.
> +
> +The following parameter enables a Cipher-CRC offload capability check to determine
> +if the feature is supported on the QAT device.
> +
> +- qat_sym_cipher_crc_enable

Please use the word "devarg" here as well, to keep the terminology uniform across DPDK.

> +
> +When enabled, a capability check for the combined Cipher-CRC offload feature is triggered
> +to the QAT firmware during queue pair initialization. If supported by the firmware,
> +any subsequent runtime Crypto-CRC DOCSIS security protocol requests handled by the QAT PMD
> +are offloaded to the QAT device by setting up the content descriptor and request accordingly.
> +If not supported, the CRC is calculated by the QAT PMD using the NET CRC API.
> +
> +To use this feature the user must set the parameter on process start as a device additional parameter::
> +
> + -a 03:01.1,qat_sym_cipher_crc_enable=1
> +
> +
> Running QAT PMD with Intel IPSEC MB library for symmetric precomputes function
> 
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
> index 8bce2ac073..308c59c39f 100644
> --- a/drivers/common/qat/qat_device.c
> +++ b/drivers/common/qat/qat_device.c
> @@ -149,7 +149,16 @@ qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
>  			} else {
>  				memcpy(value_str, arg2, iter);
>  				value = strtol(value_str, NULL, 10);
> -				if (value > MAX_QP_THRESHOLD_SIZE) {
> +				if (strcmp(param,
> +					 SYM_CIPHER_CRC_ENABLE_NAME) ==
> 0) {
> +					if (value < 0 || value > 1) {
> +						QAT_LOG(DEBUG, "The value
> for"
> +						" qat_sym_cipher_crc_enable"
> +						" should be set to 0 or 1,"
> +						" setting to 0");

Do not split printable strings across multiple lines, even if they then exceed the maximum line length. Please fix this across the whole patch.
Note also that the maximum line length has been increased from 80 to 100 characters.
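As an illustration of the unsplit form (a sketch only: QAT_LOG is stood in for by a local fprintf macro so the snippet is self-contained, and the helper name is hypothetical):

```c
#include <stdio.h>

/* Stand-in for the driver's QAT_LOG() macro, defined locally so this
 * example compiles on its own; the real macro lives in qat_logs.h. */
#define QAT_LOG(level, fmt, ...) \
	fprintf(stderr, #level ": " fmt "\n", ##__VA_ARGS__)

/* Hypothetical helper: clamp the devarg value, logging on one line. */
static long
clamp_cipher_crc_devarg(long value)
{
	if (value < 0 || value > 1) {
		/* The quoted string stays on one line even though it passes
		 * 80 columns; the limit is now 100, and checkpatch exempts
		 * quoted strings from the limit anyway. */
		QAT_LOG(DEBUG, "The value for qat_sym_cipher_crc_enable should be set to 0 or 1, setting to 0");
		value = 0;
	}
	return value;
}
```

Keeping the format string whole also keeps the message greppable in the sources, which is the usual rationale for this rule.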


> +						value = 0;
> +					}
> +				} else if (value > MAX_QP_THRESHOLD_SIZE) {
>  					QAT_LOG(DEBUG, "Exceeded max size
> of"
>  						" threshold, setting to %d",
>  						MAX_QP_THRESHOLD_SIZE);
> @@ -369,6 +378,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
>  			{ SYM_ENQ_THRESHOLD_NAME, 0 },
>  			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
>  			{ COMP_ENQ_THRESHOLD_NAME, 0 },
> +			{ SYM_CIPHER_CRC_ENABLE_NAME, 0 },
>  			[QAT_CMD_SLICE_MAP_POS] = {
> QAT_CMD_SLICE_MAP, 0},
>  			{ NULL, 0 },
>  	};
> diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
> index bc3da04238..4188474dde 100644
> --- a/drivers/common/qat/qat_device.h
> +++ b/drivers/common/qat/qat_device.h
> @@ -21,8 +21,9 @@
>  #define SYM_ENQ_THRESHOLD_NAME "qat_sym_enq_threshold"
>  #define ASYM_ENQ_THRESHOLD_NAME "qat_asym_enq_threshold"
>  #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
> +#define SYM_CIPHER_CRC_ENABLE_NAME "qat_sym_cipher_crc_enable"
>  #define QAT_CMD_SLICE_MAP "qat_cmd_slice_disable"
> -#define QAT_CMD_SLICE_MAP_POS	4
> +#define QAT_CMD_SLICE_MAP_POS	5
>  #define MAX_QP_THRESHOLD_SIZE	32
> 
>  /**
> diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
> index 9cbd19a481..1ce89c265f 100644
> --- a/drivers/common/qat/qat_qp.c
> +++ b/drivers/common/qat/qat_qp.c
> @@ -11,6 +11,9 @@
>  #include <bus_pci_driver.h>
>  #include <rte_atomic.h>
>  #include <rte_prefetch.h>
> +#ifdef RTE_LIB_SECURITY
> +#include <rte_ether.h>
> +#endif
> 
>  #include "qat_logs.h"
>  #include "qat_device.h"
> @@ -957,6 +960,160 @@ qat_cq_get_fw_version(struct qat_qp *qp)
>  	return -EINVAL;
>  }
> 
> +#ifdef BUILD_QAT_SYM

Where is this defined? There is also no documentation about when to enable or disable it.
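For reference, this appears to be a build-system define, injected as a compiler flag (-DBUILD_QAT_SYM) when the symmetric crypto driver is compiled in, rather than something declared in a header; documenting that would address the comment above. A minimal sketch of the guard pattern, with a hypothetical helper name:

```c
#include <stdbool.h>

/* BUILD_QAT_SYM is expected to arrive on the compiler command line from
 * the build system, not from any header, hence the review question. */
static bool
qat_sym_compiled_in(void)
{
#ifdef BUILD_QAT_SYM
	return true;
#else
	return false;
#endif
}
```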


> +/* Sends an LA bulk req message to determine if a QAT device supports Cipher-
> CRC
> + * offload. This assumes that there are no inflight messages, i.e. assumes
> + * there's space  on the qp, one message is sent and only one response
> + * collected. The status bit of the response and returned data are checked.
> + * Returns:
> + *     1 if status bit indicates success and returned data matches expected
> + *     data (i.e. Cipher-CRC supported)
> + *     0 if status bit indicates error or returned data does not match expected
> + *     data (i.e. Cipher-CRC not supported)
> + *     Negative error code in case of error
> + */
> +int
> +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp)
> +{
> +	struct qat_queue *queue = &(qp->tx_q);
> +	uint8_t *base_addr = (uint8_t *)queue->base_addr;
> +	struct icp_qat_fw_la_bulk_req cipher_crc_cap_msg = {{0}};
> +	struct icp_qat_fw_comn_resp response = {{0}};
> +	struct icp_qat_fw_la_cipher_req_params *cipher_param;
> +	struct icp_qat_fw_la_auth_req_params *auth_param;
> +	struct qat_sym_session *session;
> +	phys_addr_t phy_src_addr;
> +	uint64_t *src_data_addr;
> +	int ret;
> +	uint8_t cipher_offset = 18;
> +	uint8_t crc_offset = 6;
> +	uint8_t ciphertext[34] = {
> +		/* Outer protocol header */
> +		0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +		/* Ethernet frame */
> +		0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05,
> +		0x04, 0x03, 0x02, 0x01, 0xD6, 0xE2, 0x70, 0x5C,
> +		0xE6, 0x4D, 0xCC, 0x8C, 0x47, 0xB7, 0x09, 0xD6,
> +		/* CRC */
> +		0x54, 0x85, 0xF8, 0x32
> +	};
> +	uint8_t plaintext[34] = {
> +		/* Outer protocol header */
> +		0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
> +		/* Ethernet frame */
> +		0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05,
> +		0x04, 0x03, 0x02, 0x01, 0x08, 0x00, 0xAA, 0xAA,
> +		0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA,
> +		/* CRC */
> +		0xFF, 0xFF, 0xFF, 0xFF
> +	};
> +	uint8_t key[16] = {
> +		0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD,
> +		0xEE, 0xFF, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55
> +	};
> +	uint8_t iv[16] = {
> +		0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11,
> +		0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11
> +	};

Would it not be better to define these as macros or named constants?
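A sketch of what that could look like; the macro names here are hypothetical, and only the values come from the function above:

```c
#include <stdint.h>

/* Hypothetical macro names for the capability-check constants; the values
 * are the ones hard-coded in qat_cq_get_fw_cipher_crc_cap() above. */
#define QAT_CRC_CAP_TEST_CIPHER_OFFSET	18
#define QAT_CRC_CAP_TEST_CRC_OFFSET	6
#define QAT_CRC_CAP_TEST_DATA_LEN	34
#define QAT_CRC_CAP_TEST_KEY_LEN	16
#define QAT_CRC_CAP_TEST_IV_LEN		16

/* The fixed key (and likewise the IV and test vectors) could become named
 * static const tables, shared with any unit tests of the check. */
static const uint8_t qat_crc_cap_test_key[QAT_CRC_CAP_TEST_KEY_LEN] = {
	0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD,
	0xEE, 0xFF, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55
};
```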

> +
> +	session = rte_zmalloc(NULL, sizeof(struct qat_sym_session), 0);
> +	if (session == NULL)
> +		return -EINVAL;
> +
> +	/* Verify the session physical address is known */
> +	rte_iova_t session_paddr = rte_mem_virt2iova(session);
> +	if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) {
> +		QAT_LOG(ERR, "Session physical address unknown.");
> +		return -EINVAL;
> +	}
> +
> +	/* Prepare the LA bulk request */
> +	ret = qat_cipher_crc_cap_msg_sess_prepare(session,
> +						  session_paddr,
> +						  key,
> +						  sizeof(key),
> +						  qp->qat_dev_gen);
> +	if (ret < 0) {
> +		rte_free(session);
> +		/* Returning 0 here to allow qp setup to continue, but
> +		 * indicate that Cipher-CRC offload is not supported on the
> +		 * device
> +		 */
> +		return 0;
> +	}
> +
> +	cipher_crc_cap_msg = session->fw_req;
> +
> +	src_data_addr = rte_zmalloc(NULL, sizeof(plaintext), 0);
> +	if (src_data_addr == NULL) {
> +		rte_free(session);
> +		return -EINVAL;
> +	}
> +
> +	rte_memcpy(src_data_addr, plaintext, sizeof(plaintext));
> +
> +	phy_src_addr = rte_mem_virt2iova(src_data_addr);
> +	if (phy_src_addr == 0 || phy_src_addr == RTE_BAD_IOVA) {
> +		QAT_LOG(ERR, "Source physical address unknown.");
> +		return -EINVAL;
> +	}
> +
> +	cipher_crc_cap_msg.comn_mid.src_data_addr = phy_src_addr;
> +	cipher_crc_cap_msg.comn_mid.src_length = sizeof(plaintext);
> +	cipher_crc_cap_msg.comn_mid.dest_data_addr = phy_src_addr;
> +	cipher_crc_cap_msg.comn_mid.dst_length = sizeof(plaintext);
> +
> +	cipher_param = (void *)&cipher_crc_cap_msg.serv_specif_rqpars;
> +	auth_param = (void *)((uint8_t *)cipher_param +
> +			ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
> +
> +	rte_memcpy(cipher_param->u.cipher_IV_array, iv, sizeof(iv));
> +
> +	cipher_param->cipher_offset = cipher_offset;
> +	cipher_param->cipher_length = sizeof(plaintext) - cipher_offset;
> +	auth_param->auth_off = crc_offset;
> +	auth_param->auth_len = sizeof(plaintext) -
> +				crc_offset -
> +				RTE_ETHER_CRC_LEN;
> +
> +	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
> +			cipher_crc_cap_msg.comn_hdr.serv_specif_flags,
> +			ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
> +
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +	QAT_DP_HEXDUMP_LOG(DEBUG, "LA Bulk request",
> &cipher_crc_cap_msg,
> +			sizeof(cipher_crc_cap_msg));
> +#endif
> +
> +	/* Send the cipher_crc_cap_msg request */
> +	memcpy(base_addr + queue->tail,
> +	       &cipher_crc_cap_msg,
> +	       sizeof(cipher_crc_cap_msg));
> +	queue->tail = adf_modulo(queue->tail + queue->msg_size,
> +			queue->modulo_mask);
> +	txq_write_tail(qp->qat_dev_gen, qp, queue);
> +
> +	/* Check for response and verify data is same as ciphertext */
> +	if (qat_cq_dequeue_response(qp, &response)) {
> +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
> +		QAT_DP_HEXDUMP_LOG(DEBUG, "LA response:", &response,
> +				sizeof(response));
> +#endif
> +
> +		if (memcmp(src_data_addr, ciphertext, sizeof(ciphertext)) != 0)
> +			ret = 0; /* Cipher-CRC offload not supported */
> +		else
> +			ret = 1;
> +	} else {
> +		ret = -EINVAL;
> +	}
> +
> +	rte_free(src_data_addr);
> +	rte_free(session);
> +	return ret;
> +}
> +#endif
> +
>  __rte_weak int
>  qat_comp_process_response(void **op __rte_unused, uint8_t *resp
> __rte_unused,
>  			  void *op_cookie __rte_unused,
> diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
> index 66f00943a5..d19fc387e4 100644
> --- a/drivers/common/qat/qat_qp.h
> +++ b/drivers/common/qat/qat_qp.h
> @@ -153,6 +153,11 @@ qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
>  int
>  qat_cq_get_fw_version(struct qat_qp *qp);
> 
> +#ifdef BUILD_QAT_SYM
> +int
> +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp);
> +#endif
> +
>  /* Needed for weak function*/
>  int
>  qat_comp_process_response(void **op __rte_unused, uint8_t *resp
> __rte_unused,
> diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
> index 60ca0fc0d2..1f3e2b1d99 100644
> --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
> +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
> @@ -163,7 +163,7 @@ qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
>  		QAT_LOG(DEBUG, "unknown QAT firmware version");
> 
>  	/* set capabilities based on the fw version */
> -	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
> +	qat_sym_private->internal_capabilities |= QAT_SYM_CAP_VALID |
>  			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
>  					QAT_SYM_CAP_MIXED_CRYPTO : 0);
>  	return 0;
> diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
> index 524c291340..70942906ea 100644
> --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
> +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
> @@ -399,8 +399,13 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op,
>  		cipher_ofs = op->sym->cipher.data.offset >> 3;
>  		break;
>  	case 0:
> -		cipher_len = op->sym->cipher.data.length;
> -		cipher_ofs = op->sym->cipher.data.offset;
> +		if (ctx->bpi_ctx) {
> +			cipher_len = qat_bpicipher_preprocess(ctx, op);
> +			cipher_ofs = op->sym->cipher.data.offset;
> +		} else {
> +			cipher_len = op->sym->cipher.data.length;
> +			cipher_ofs = op->sym->cipher.data.offset;
> +		}
>  		break;
>  	default:
>  		QAT_DP_LOG(ERR,
> @@ -428,8 +433,10 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op,
> 
>  	max_len = RTE_MAX(cipher_ofs + cipher_len, auth_ofs + auth_len);
> 
> -	/* digest in buffer check. Needed only for wireless algos */
> -	if (ret == 1) {
> +	/* digest in buffer check. Needed only for wireless algos
> +	 * or combined cipher-crc operations
> +	 */
> +	if (ret == 1 || ctx->bpi_ctx) {
>  		/* Handle digest-encrypted cases, i.e.
>  		 * auth-gen-then-cipher-encrypt and
>  		 * cipher-decrypt-then-auth-verify
> @@ -456,8 +463,9 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op,
>  					auth_len;
> 
>  		/* Then check if digest-encrypted conditions are met */
> -		if ((auth_ofs + auth_len < cipher_ofs + cipher_len) &&
> -				(digest->iova == auth_end_iova))
> +		if (((auth_ofs + auth_len < cipher_ofs + cipher_len) &&
> +				(digest->iova == auth_end_iova)) ||
> +				ctx->bpi_ctx)
>  			max_len = RTE_MAX(max_len, auth_ofs + auth_len +
>  					ctx->digest_length);
>  	}
> @@ -691,9 +699,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
>  			auth_param->auth_len;
> 
>  	/* Then check if digest-encrypted conditions are met */
> -	if ((auth_param->auth_off + auth_param->auth_len <
> +	if (((auth_param->auth_off + auth_param->auth_len <
>  		cipher_param->cipher_offset + cipher_param->cipher_length)
> &&
> -			(digest->iova == auth_iova_end)) {
> +			(digest->iova == auth_iova_end)) || ctx->bpi_ctx) {
>  		/* Handle partial digest encryption */
>  		if (cipher_param->cipher_offset + cipher_param->cipher_length
> <
>  			auth_param->auth_off + auth_param->auth_len +
> diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
> index 91d5cfa71d..590eaa0057 100644
> --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
> +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
> @@ -1205,6 +1205,10 @@ qat_sym_crypto_set_session_gen1(void *cryptodev __rte_unused, void *session)
>  	} else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
>  		/* do_auth = 0; do_cipher = 1; */
>  		build_request = qat_sym_build_op_cipher_gen1;
> +	} else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) {
> +		/* do_auth = 1; do_cipher = 1; */
> +		build_request = qat_sym_build_op_chain_gen1;
> +		handle_mixed = 1;
>  	}
> 
>  	if (build_request)
> diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
> index 84c26a8062..861679373b 100644
> --- a/drivers/crypto/qat/qat_crypto.c
> +++ b/drivers/crypto/qat/qat_crypto.c
> @@ -172,5 +172,25 @@ qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  			qat_asym_init_op_cookie(qp->op_cookies[i]);
>  	}
> 
> -	return ret;
> +	if (qat_private->cipher_crc_offload_enable) {
> +		ret = qat_cq_get_fw_cipher_crc_cap(qp);
> +		if (ret < 0) {
> +			qat_cryptodev_qp_release(dev, qp_id);
> +			return ret;
> +		}
> +
> +		if (ret != 0)
> +			QAT_LOG(DEBUG, "Cipher CRC supported on QAT
> device");
> +		else
> +			QAT_LOG(DEBUG, "Cipher CRC not supported on QAT
> device");
> +
> +		/* Only send the cipher crc offload capability message once */
> +		qat_private->cipher_crc_offload_enable = 0;
> +		/* Set cipher crc offload indicator */
> +		if (ret)
> +			qat_private->internal_capabilities |=
> +						QAT_SYM_CAP_CIPHER_CRC;
> +	}
> +
> +	return 0;
>  }
> diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
> index 6fe1326c51..e20f16236e 100644
> --- a/drivers/crypto/qat/qat_crypto.h
> +++ b/drivers/crypto/qat/qat_crypto.h
> @@ -36,6 +36,7 @@ struct qat_cryptodev_private {
>  	/* Shared memzone for storing capabilities */
>  	uint16_t min_enq_burst_threshold;
>  	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
> +	bool cipher_crc_offload_enable;
>  	enum qat_service_type service_type;
>  };
> 
> diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
> index 08e92191a3..345c845325 100644
> --- a/drivers/crypto/qat/qat_sym.c
> +++ b/drivers/crypto/qat/qat_sym.c
> @@ -279,6 +279,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
>  		if (!strcmp(qat_dev_cmd_param[i].name,
> SYM_ENQ_THRESHOLD_NAME))
>  			internals->min_enq_burst_threshold =
>  					qat_dev_cmd_param[i].val;
> +		if (!strcmp(qat_dev_cmd_param[i].name,
> +				SYM_CIPHER_CRC_ENABLE_NAME))
> +			internals->cipher_crc_offload_enable =
> +					qat_dev_cmd_param[i].val;
>  		if (!strcmp(qat_dev_cmd_param[i].name, QAT_IPSEC_MB_LIB))
>  			qat_ipsec_mb_lib = qat_dev_cmd_param[i].val;
>  		if (!strcmp(qat_dev_cmd_param[i].name,
> QAT_CMD_SLICE_MAP))
> diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
> index 9a4251e08b..3d841d0eba 100644
> --- a/drivers/crypto/qat/qat_sym.h
> +++ b/drivers/crypto/qat/qat_sym.h
> @@ -32,6 +32,7 @@
> 
>  /* Internal capabilities */
>  #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
> +#define QAT_SYM_CAP_CIPHER_CRC		(1 << 1)
>  #define QAT_SYM_CAP_VALID		(1 << 31)
> 
>  /**
> @@ -282,7 +283,8 @@ qat_sym_preprocess_requests(void **ops, uint16_t nb_ops)
>  			if (ctx == NULL || ctx->bpi_ctx == NULL)
>  				continue;
> 
> -			qat_crc_generate(ctx, op);
> +			if (ctx->qat_cmd !=
> ICP_QAT_FW_LA_CMD_CIPHER_CRC)
> +				qat_crc_generate(ctx, op);
>  		}
>  	}
>  }
> @@ -330,7 +332,8 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie,
>  		if (sess->bpi_ctx) {
>  			qat_bpicipher_postprocess(sess, rx_op);
>  #ifdef RTE_LIB_SECURITY
> -			if (is_docsis_sec)
> +			if (is_docsis_sec && sess->qat_cmd !=
> +
> 	ICP_QAT_FW_LA_CMD_CIPHER_CRC)
>  				qat_crc_verify(sess, rx_op);
>  #endif
>  		}
> diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
> index 6ad6c7ee3a..c0217654c1 100644
> --- a/drivers/crypto/qat/qat_sym_session.c
> +++ b/drivers/crypto/qat/qat_sym_session.c
> @@ -27,6 +27,7 @@
>  #include <rte_crypto_sym.h>
>  #ifdef RTE_LIB_SECURITY
>  #include <rte_security_driver.h>
> +#include <rte_ether.h>
>  #endif
> 
>  #include "qat_logs.h"
> @@ -68,6 +69,13 @@ static void ossl_legacy_provider_unload(void)
> 
>  extern int qat_ipsec_mb_lib;
> 
> +#define ETH_CRC32_POLYNOMIAL    0x04c11db7
> +#define ETH_CRC32_INIT_VAL      0xffffffff
> +#define ETH_CRC32_XOR_OUT       0xffffffff
> +#define ETH_CRC32_POLYNOMIAL_BE RTE_BE32(ETH_CRC32_POLYNOMIAL)
> +#define ETH_CRC32_INIT_VAL_BE   RTE_BE32(ETH_CRC32_INIT_VAL)
> +#define ETH_CRC32_XOR_OUT_BE    RTE_BE32(ETH_CRC32_XOR_OUT)
> +
>  /* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */
>  static const uint8_t sha1InitialState[] = {
>  	0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, 0x89, 0x98, 0xba,
> @@ -115,6 +123,10 @@ qat_sym_cd_cipher_set(struct qat_sym_session *cd,
>  						const uint8_t *enckey,
>  						uint32_t enckeylen);
> 
> +static int
> +qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
> +					enum qat_device_gen qat_dev_gen);
> +
>  static int
>  qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
>  						const uint8_t *authkey,
> @@ -122,6 +134,7 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
>  						uint32_t aad_length,
>  						uint32_t digestsize,
>  						unsigned int operation);
> +
>  static void
>  qat_sym_session_init_common_hdr(struct qat_sym_session *session);
> 
> @@ -630,6 +643,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
>  	case ICP_QAT_FW_LA_CMD_MGF1:
>  	case ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP:
>  	case ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP:
> +	case ICP_QAT_FW_LA_CMD_CIPHER_CRC:
>  	case ICP_QAT_FW_LA_CMD_DELIMITER:
>  	QAT_LOG(ERR, "Unsupported Service %u",
>  		session->qat_cmd);
> @@ -645,6 +659,45 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
>  			(void *)session);
>  }
> 
> +int
> +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session,
> +					rte_iova_t session_paddr,
> +					const uint8_t *cipherkey,
> +					uint32_t cipherkeylen,
> +					enum qat_device_gen qat_dev_gen)
> +{
> +	int ret;
> +
> +	/* Set content descriptor physical address */
> +	session->cd_paddr = session_paddr +
> +				offsetof(struct qat_sym_session, cd);
> +
> +	/* Set up some pre-requisite variables */
> +	session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE;
> +	session->is_ucs = 0;
> +	session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_CRC;
> +	session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
> +	session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
> +	session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
> +	session->is_auth = 1;
> +	session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL;
> +	session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
> +	session->auth_op = ICP_QAT_HW_AUTH_GENERATE;
> +	session->digest_length = RTE_ETHER_CRC_LEN;
> +
> +	ret = qat_sym_cd_cipher_set(session, cipherkey, cipherkeylen);
> +	if (ret < 0)
> +		return -EINVAL;
> +
> +	ret = qat_sym_cd_crc_set(session, qat_dev_gen);
> +	if (ret < 0)
> +		return -EINVAL;
> +
> +	qat_sym_session_finalize(session);
> +
> +	return 0;
> +}
> +
>  static int
>  qat_sym_session_handle_single_pass(struct qat_sym_session *session,
>  		const struct rte_crypto_aead_xform *aead_xform)
> @@ -697,7 +750,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
>  	switch (auth_xform->algo) {
>  	case RTE_CRYPTO_AUTH_SM3:
>  		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SM3;
> -		session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
> +		session->auth_mode = ICP_QAT_HW_AUTH_MODE2;
>  		break;
>  	case RTE_CRYPTO_AUTH_SHA1:
>  		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
> @@ -1866,6 +1919,9 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
>  		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
>  					ICP_QAT_FW_SLICE_DRAM_WR);
>  		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
> +	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) {
> +		cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
> +		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
>  	} else if (cdesc->qat_cmd != ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
>  		QAT_LOG(ERR, "Invalid param, must be a cipher command.");
>  		return -EFAULT;
> @@ -2641,6 +2697,135 @@ qat_sec_session_check_docsis(struct rte_security_session_conf *conf)
>  	return -EINVAL;
>  }
> 
> +static int
> +qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
> +		enum qat_device_gen qat_dev_gen)
> +{
> +	struct icp_qat_hw_gen2_crc_cd *crc_cd_gen2;
> +	struct icp_qat_hw_gen3_crc_cd *crc_cd_gen3;
> +	struct icp_qat_hw_gen4_crc_cd *crc_cd_gen4;
> +	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
> +	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl-
> >cd_pars;
> +	void *ptr = &req_tmpl->cd_ctrl;
> +	struct icp_qat_fw_auth_cd_ctrl_hdr *crc_cd_ctrl = ptr;
> +	struct icp_qat_fw_la_auth_req_params *crc_param =
> +				(struct icp_qat_fw_la_auth_req_params *)
> +				((char *)&req_tmpl->serv_specif_rqpars +
> +
> 	ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
> +	struct icp_qat_fw_ucs_slice_cipher_config crc_cfg;
> +	uint16_t crc_cfg_offset, cd_size;
> +
> +	crc_cfg_offset = cdesc->cd_cur_ptr - ((uint8_t *)&cdesc->cd);
> +
> +	switch (qat_dev_gen) {
> +	case QAT_GEN2:
> +		crc_cd_gen2 =
> +			(struct icp_qat_hw_gen2_crc_cd *)cdesc->cd_cur_ptr;
> +		crc_cd_gen2->flags = 0;
> +		crc_cd_gen2->initial_crc = 0;
> +		memset(&crc_cd_gen2->reserved1,
> +			0,
> +			sizeof(crc_cd_gen2->reserved1));
> +		memset(&crc_cd_gen2->reserved2,
> +			0,
> +			sizeof(crc_cd_gen2->reserved2));
> +		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen2_crc_cd);
> +		break;
> +	case QAT_GEN3:
> +		crc_cd_gen3 =
> +			(struct icp_qat_hw_gen3_crc_cd *)cdesc->cd_cur_ptr;
> +		crc_cd_gen3->flags =
> ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(1, 1);
> +		crc_cd_gen3->polynomial = ETH_CRC32_POLYNOMIAL;
> +		crc_cd_gen3->initial_crc = ETH_CRC32_INIT_VAL;
> +		crc_cd_gen3->xor_val = ETH_CRC32_XOR_OUT;
> +		memset(&crc_cd_gen3->reserved1,
> +			0,
> +			sizeof(crc_cd_gen3->reserved1));
> +		memset(&crc_cd_gen3->reserved2,
> +			0,
> +			sizeof(crc_cd_gen3->reserved2));
> +		crc_cd_gen3->reserved3 = 0;
> +		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen3_crc_cd);
> +		break;
> +	case QAT_GEN4:
> +		crc_cfg.mode = ICP_QAT_HW_CIPHER_ECB_MODE;
> +		crc_cfg.algo = ICP_QAT_HW_CIPHER_ALGO_NULL;
> +		crc_cfg.hash_cmp_val = 0;
> +		crc_cfg.dir = ICP_QAT_HW_CIPHER_ENCRYPT;
> +		crc_cfg.associated_data_len_in_bytes = 0;
> +		crc_cfg.crc_reflect_out =
> +
> 	ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED;
> +		crc_cfg.crc_reflect_in =
> +
> 	ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED;
> +		crc_cfg.crc_encoding = ICP_QAT_HW_CIPHER_UCS_CRC32;
> +
> +		crc_cd_gen4 =
> +			(struct icp_qat_hw_gen4_crc_cd *)cdesc->cd_cur_ptr;
> +		crc_cd_gen4->ucs_config[0] =
> +
> 	ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER(crc_cfg);
> +		crc_cd_gen4->ucs_config[1] =
> +
> 	ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER(crc_cfg);
> +		crc_cd_gen4->polynomial = ETH_CRC32_POLYNOMIAL_BE;
> +		crc_cd_gen4->initial_crc = ETH_CRC32_INIT_VAL_BE;
> +		crc_cd_gen4->xor_val = ETH_CRC32_XOR_OUT_BE;
> +		crc_cd_gen4->reserved1 = 0;
> +		crc_cd_gen4->reserved2 = 0;
> +		crc_cd_gen4->reserved3 = 0;
> +		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen4_crc_cd);
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	crc_cd_ctrl->hash_cfg_offset = crc_cfg_offset >> 3;
> +	crc_cd_ctrl->hash_flags =
> ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
> +	crc_cd_ctrl->inner_res_sz = cdesc->digest_length;
> +	crc_cd_ctrl->final_sz = cdesc->digest_length;
> +	crc_cd_ctrl->inner_state1_sz = 0;
> +	crc_cd_ctrl->inner_state2_sz  = 0;
> +	crc_cd_ctrl->inner_state2_offset = 0;
> +	crc_cd_ctrl->outer_prefix_sz = 0;
> +	crc_cd_ctrl->outer_config_offset = 0;
> +	crc_cd_ctrl->outer_state1_sz = 0;
> +	crc_cd_ctrl->outer_res_sz = 0;
> +	crc_cd_ctrl->outer_prefix_offset = 0;
> +
> +	crc_param->auth_res_sz = cdesc->digest_length;
> +	crc_param->u2.aad_sz = 0;
> +	crc_param->hash_state_sz = 0;
> +
> +	cd_size = cdesc->cd_cur_ptr - (uint8_t *)&cdesc->cd;
> +	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
> +	cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >>
> 3;
> +
> +	return 0;
> +}
> +
> +static int
> +qat_sym_session_configure_crc(struct rte_cryptodev *dev,
> +		const struct rte_crypto_sym_xform *cipher_xform,
> +		struct qat_sym_session *session)
> +{
> +	struct qat_cryptodev_private *internals = dev->data->dev_private;
> +	enum qat_device_gen qat_dev_gen = internals->qat_dev-
> >qat_dev_gen;
> +	int ret;
> +
> +	session->is_auth = 1;
> +	session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL;
> +	session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
> +	session->auth_op = cipher_xform->cipher.op ==
> +				RTE_CRYPTO_CIPHER_OP_ENCRYPT ?
> +					ICP_QAT_HW_AUTH_GENERATE :
> +					ICP_QAT_HW_AUTH_VERIFY;
> +	session->digest_length = RTE_ETHER_CRC_LEN;
> +
> +	ret = qat_sym_cd_crc_set(session, qat_dev_gen);
> +	if (ret < 0)
> +		return ret;
> +
> +	return 0;
> +}
> +
>  static int
>  qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
>  		struct rte_security_session_conf *conf, void *session_private,
> @@ -2681,12 +2866,21 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
>  	if (qat_cmd_id != ICP_QAT_FW_LA_CMD_CIPHER) {
>  		QAT_LOG(ERR, "Unsupported xform chain requested");
>  		return -ENOTSUP;
> +	} else if (internals->internal_capabilities
> +					& QAT_SYM_CAP_CIPHER_CRC) {
> +		qat_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER_CRC;
>  	}
>  	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
> 
>  	ret = qat_sym_session_configure_cipher(dev, xform, session);
>  	if (ret < 0)
>  		return ret;
> +
> +	if (qat_cmd_id == ICP_QAT_FW_LA_CMD_CIPHER_CRC) {
> +		ret = qat_sym_session_configure_crc(dev, xform, session);
> +		if (ret < 0)
> +			return ret;
> +	}
>  	qat_sym_session_finalize(session);
> 
>  	return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)cdev,
> diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
> index 6322d7e3bc..9b5d11ac88 100644
> --- a/drivers/crypto/qat/qat_sym_session.h
> +++ b/drivers/crypto/qat/qat_sym_session.h
> @@ -46,6 +46,12 @@
>  					ICP_QAT_HW_CIPHER_KEY_CONVERT,
> \
>  					ICP_QAT_HW_CIPHER_DECRYPT)
> 
> +#define ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(ref_in, ref_out) \
> +	(((ref_in & QAT_GEN3_COMP_REFLECT_IN_MASK) << \
> +				QAT_GEN3_COMP_REFLECT_IN_BITPOS) | \
> +	((ref_out & QAT_GEN3_COMP_REFLECT_OUT_MASK) << \
> +				QAT_GEN3_COMP_REFLECT_OUT_BITPOS))
> +
>  #define QAT_AES_CMAC_CONST_RB 0x87
> 
>  #define QAT_CRYPTO_SLICE_SPC	1
> @@ -76,7 +82,12 @@ typedef int (*qat_sym_build_request_t)(void *in_op, struct qat_sym_session *ctx,
>  /* Common content descriptor */
>  struct qat_sym_cd {
>  	struct icp_qat_hw_cipher_algo_blk cipher;
> -	struct icp_qat_hw_auth_algo_blk hash;
> +	union {
> +		struct icp_qat_hw_auth_algo_blk hash;
> +		struct icp_qat_hw_gen2_crc_cd crc_gen2;
> +		struct icp_qat_hw_gen3_crc_cd crc_gen3;
> +		struct icp_qat_hw_gen4_crc_cd crc_gen4;
> +	};
>  } __rte_packed __rte_cache_aligned;
> 
>  struct qat_sym_session {
> @@ -152,10 +163,18 @@ qat_sym_session_clear(struct rte_cryptodev *dev,
>  unsigned int
>  qat_sym_session_get_private_size(struct rte_cryptodev *dev);
> 
> +int
> +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session,
> +					rte_iova_t session_paddr,
> +					const uint8_t *cipherkey,
> +					uint32_t cipherkeylen,
> +					enum qat_device_gen qat_dev_gen);
> +
>  void
>  qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session,
>  					struct icp_qat_fw_comn_req_hdr
> *header,
>  					enum qat_sym_proto_flag
> proto_flags);
> +
>  int
>  qat_sym_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
>  int
> --
> 2.34.1
> 


  parent reply	other threads:[~2023-03-16 19:15 UTC|newest]

Thread overview: 17+ messages
2023-03-08 12:12 [PATCH 0/2] crypto/qat: added cipher-crc offload feature Kevin O'Sullivan
2023-03-08 12:12 ` [PATCH 1/2] crypto/qat: added cipher-crc offload support Kevin O'Sullivan
2023-03-08 12:12 ` [PATCH 2/2] crypto/qat: added cipher-crc cap check Kevin O'Sullivan
2023-03-09 14:33 ` [PATCH v2 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
2023-03-09 14:33   ` [PATCH v2 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan
2023-03-09 14:33   ` [PATCH v2 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan
2023-03-13 14:26   ` [PATCH v3 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
2023-03-13 14:26     ` [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan
2023-03-16 12:24       ` Ji, Kai
2023-03-13 14:26     ` [PATCH v3 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan
2023-03-16 12:25       ` Ji, Kai
2023-03-16 19:15       ` Akhil Goyal [this message]
2023-03-20 16:28         ` [EXT] " O'Sullivan, Kevin
2023-04-18 13:39     ` [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
2023-04-18 13:39       ` [PATCH v4 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan
2023-04-18 13:39       ` [PATCH v4 2/2] crypto/qat: support cipher-crc offload Kevin O'Sullivan
2023-05-24 10:04       ` [EXT] [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Akhil Goyal
